Published on: 2019-10-11T15:44:54+00:00
The conversation discusses the limitations of BIP 157 compared to BIP 37 when it comes to applying filters to the mempool and checking zero-confirmation transactions. With BIP 157 alone, a light client can only detect unconfirmed transactions by loading the entire mempool transaction flow, which raises DoS and fingerprinting concerns. A possible solution is suggested: the client retrieves a complete compact filter of the entire mempool, capped at a maximum size and ordered by feerate. This filter could be constructed dynamically and updated at time intervals rather than on every new transaction entering the mempool. The process is to generate the mempool filter, connect to the peer, request the filter, test it for relevant transactions, and send getdata for those transactions (a client-side sketch appears below). After the initial retrieval, the light client inspects all new transactions received from peers ("filterless" unconfirmed-transaction detection); alternatively, if that consumes too much bandwidth, the mempool filter can be requested again after a timeout. The goal is to make light clients more efficient without compromising privacy or security.

The Block Batch Filters draft has been improved, unlocking the ability to filter unconfirmed transactions for SPV nodes using BIP 157 instead of BIP 37, which leaks privacy. This gives light clients that refused to use BIP 37 an alternative to loading the entire mempool transaction flow in order to see unconfirmed transactions. Additionally, the Mempool Transaction Filters draft proposes a future consensus-layer soft fork that would incorporate a block filter commitment into the block validation rules, protecting light nodes from payment-hiding attacks. This method also reduces bandwidth consumption compared to block filters plus downloading full blocks for affected addresses.

The updated version of the draft BIP for block batch filters can be found on the Bitaps GitHub page. Changes include a return to Golomb coding and a simpler, more effective schema. The total filter size is smaller than BIP 158's, with estimated savings of over 20%. The filter is deterministic and could in the future be committed to in coinbase transactions. The GCS parameters are flexible enough to maintain the necessary false-positive rate, and the filter has been split into two parts, unique elements and duplicated elements, with the latter encoded more effectively. Open questions remain: the optimal range for a batch, whether SipHash is needed instead of simply taking the first 64 bits of the public key or script hash, and whether unique and duplicated elements should be downloadable separately or whether new filter types should be added for these purposes. Feedback and discussion on these topics are welcome.

Aleksey Karpov also shared a draft BIP for compact probabilistic block filters as an alternative to BIP 158. While BIP 158 has a low false-positive rate, a filter with a higher false-positive rate could achieve lower bandwidth while syncing the blockchain. However, BIP 158 does not support filter batching, due to the design of its SipHash and Golomb coding parameters. The proposed alternative uses delta coding and splits the data into two bit-string sequences, compressing the second sequence with two rounds of the Huffman algorithm. This method reaches about 98% of the effectiveness of Golomb coding with optimal parameters.
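Both drafts measure themselves against Golomb coding, so a compact example may help make that baseline concrete. Below is a minimal Golomb-Rice coded set (GCS) sketch, not the draft's actual wire format: it assumes BIP 158-style parameters (P = 19, M = 784931) and substitutes the first 64 bits of SHA-256 for SipHash, which is exactly the simplification the draft asks about.

```python
import hashlib

P = 19        # Rice parameter; BIP 158 uses P = 19
M = 784931    # inverse of the target false-positive rate; BIP 158 uses M = 784931


def hash_to_range(item: bytes, n: int) -> int:
    """Map an item uniformly onto [0, n * M), here via SHA-256 instead of SipHash."""
    h = int.from_bytes(hashlib.sha256(item).digest()[:8], "big")
    return (h * (n * M)) >> 64


def build_filter(items: list) -> bytes:
    """Sort the hashed items and Golomb-Rice encode the deltas between them."""
    values = sorted(hash_to_range(it, len(items)) for it in items)
    bits = []
    last = 0
    for v in values:
        delta = v - last
        last = v
        q, r = delta >> P, delta & ((1 << P) - 1)
        bits.extend([1] * q + [0])                             # quotient in unary
        bits.extend((r >> i) & 1 for i in reversed(range(P)))  # remainder in P bits
    out = bytearray()
    for i in range(0, len(bits), 8):                           # pack, zero-padded
        out.append(int("".join(map(str, bits[i:i + 8])).ljust(8, "0"), 2))
    return bytes(out)


def decode_values(filt: bytes, count: int) -> set:
    """Recover the hashed values; a client-side match then checks membership of
    hash_to_range(script, count) in this set."""
    bits = [(byte >> i) & 1 for byte in filt for i in reversed(range(8))]
    values, pos, last = set(), 0, 0
    for _ in range(count):
        q = 0
        while bits[pos]:          # unary-coded quotient
            q += 1
            pos += 1
        pos += 1                  # skip the terminating zero bit
        r = 0
        for _ in range(P):        # P-bit remainder
            r = (r << 1) | bits[pos]
            pos += 1
        last += (q << P) | r
        values.add(last)
    return values


# Usage: build a filter over 100 dummy scripts and test a known member.
scripts = [bytes([i]) * 20 for i in range(100)]
f = build_filter(scripts)
assert hash_to_range(scripts[3], len(scripts)) in decode_values(f, len(scripts))
print(len(f), "bytes for", len(scripts), "items")
```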
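Returning to the mempool filter flow at the top of this summary, the sketch below shows how the client-side loop might look. Every name here (the request callbacks, the refresh interval, the stand-in set used as the filter) is an assumption for illustration; the draft's actual messages and formats are not specified in this summary.

```python
import time

REFRESH_INTERVAL = 60.0  # seconds between filter refreshes; value assumed


def scan_mempool_once(request_mempool_filter, request_transactions, wallet_scripts):
    """One round: fetch the peer's mempool filter, test our scripts, getdata the hits."""
    # The peer builds the filter over its whole mempool, capped at a maximum
    # size and ordered by feerate, rebuilding it on a timer rather than on
    # every new transaction.
    mempool_filter = request_mempool_filter()
    hits = [s for s in wallet_scripts if s in mempool_filter]
    if hits:
        request_transactions(hits)  # i.e. send getdata for the matches only


def watch_unconfirmed(request_mempool_filter, request_transactions, new_tx_stream,
                      wallet_scripts, low_bandwidth=False):
    # Initial retrieval of the mempool filter.
    scan_mempool_once(request_mempool_filter, request_transactions, wallet_scripts)
    if low_bandwidth:
        # Option 2: re-request the filter after a timeout instead of
        # inspecting the full relay flow.
        while True:
            time.sleep(REFRESH_INTERVAL)
            scan_mempool_once(request_mempool_filter, request_transactions,
                              wallet_scripts)
    else:
        # Option 1: "filterless" detection, inspecting every transaction
        # newly relayed by peers after the initial sync.
        for txid, output_scripts in new_tx_stream():
            if any(s in output_scripts for s in wallet_scripts):
                print("relevant unconfirmed tx:", txid)  # hand off to the wallet


# Toy demo with in-memory stand-ins.
mempool = {b"script-a", b"script-b"}
watch_unconfirmed(
    request_mempool_filter=lambda: mempool,
    request_transactions=lambda hits: print("getdata for", hits),
    new_tx_stream=lambda: iter([("txid1", {b"script-b"})]),
    wallet_scripts=[b"script-b"],
)
```

The low_bandwidth flag mirrors the two options in the summary: watching the live relay flow after the initial sync, versus periodically re-requesting the filter.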
Batching block filters significantly reduces their size, and separating filters by address type lets light clients avoid downloading redundant information without compromising privacy. The recommended download strategy for light clients is to get the biggest filter (the smallest size-per-block ratio) for a range of blocks; then, on a positive test, get medium filters to narrow down the range; and finally get per-block filters for the affected range and download the affected blocks over Tor (see the sketch at the end of this section). An implementation in Python can be found on GitHub.

Tamas Blummer also shared his thoughts on the use of filters in deciding whether to download newly announced blocks. He believes that whole-chain scans would be better served by plain sequential reads in map-reduce style. Many clients do not care about filters for blocks before the birth date of their wallet's keys, so they skip over the majority of history, resulting in greater savings than any aggregate filter could provide. Blummer wishes for a filter commitment, as it could unlock more utility than the marginal savings of a more elaborate design.

In conclusion, there are ongoing developments and discussions regarding the limitations of BIP 157, the improvements to the Block Batch Filters draft, and the proposed alternative compact probabilistic block filters. The aim is to make light clients more efficient while ensuring privacy and security. Feedback and input on these matters are encouraged.
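As an illustration of the tiered download strategy above, here is a minimal sketch. The tier sizes, the use of plain sets as stand-ins for probabilistic filters, and the get_filter callback are all assumptions; a real client would test GCS filters and tolerate occasional false positives at each tier.

```python
BATCH_SIZES = [1024, 64, 1]  # widest batch -> medium batch -> single block (assumed)


def affected_blocks(start, end, wallet_scripts, get_filter, level=0):
    """Recursively narrow [start, end) down to the block heights worth downloading."""
    size = BATCH_SIZES[level]
    hits = []
    for lo in range(start, end, size):
        hi = min(lo + size, end)
        batch_filter = get_filter(lo, hi)  # filter covering blocks [lo, hi)
        if any(s in batch_filter for s in wallet_scripts):
            if size == 1:
                hits.append(lo)            # final tier: fetch this block (e.g. over Tor)
            else:
                hits += affected_blocks(lo, hi, wallet_scripts, get_filter, level + 1)
    return hits


# Toy demo: only block 700 pays the wallet, so only block 700 is selected.
blocks = {h: set() for h in range(2048)}
blocks[700] = {b"my-script"}

def get_filter(lo, hi):
    # Union of every script in the range; a real server would serve a GCS.
    return set().union(*(blocks[h] for h in range(lo, hi)))

print(affected_blocks(0, 2048, [b"my-script"], get_filter))  # -> [700]
```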
Updated on: 2023-08-02T01:21:33.512882+00:00