Author: Olaoluwa Osuntokun
Published on: 2018-06-08T23:35:29+00:00
In a discussion on the Bitcoin-dev mailing list, Olaoluwa Osuntokun argued against adopting what he considered an inferior version of Bitcoin's filter construction technique: a proposal to index the previous output scripts (prev scripts) of each input instead of the outpoints, as the "regular" filter does now. Osuntokun suggested that such a proposal would need to be generalized enough to allow several components to be committed, would likely need versioning, and would also have to provide the extensibility to allow additional items to be committed in the future.

Gregory Maxwell disagreed with Osuntokun's argument, stating that this proposal would contribute more momentum to doing it in a way that does not make sense long term. Osuntokun's position was that maintaining the outpoint allows clients to rely on a "single honest peer" security model in the short term, and he countered that we should not seek to repeat history by purposefully implementing as little validation as possible.

The argument against moving to prev scripts is that clients cannot fully verify the filters unless the block message is extended to include the previous outputs, which is a downgrade under a "one honest peer" model for the peer-to-peer interactions. A commitment would remove this drawback, but it requires a soft fork, which takes a long time to deploy.

The cost of using the current filter is discussed in terms of allowing experimentation with the technique on mainnet before committing to it. Depending on how the commitment is done, the filters themselves would need modification. With more commitments being proposed, it may make sense to coordinate one "ultimate" extensible commitment rather than special-casing distinct commitments.

However, there are issues with committing the current filter format: the filter indexes each of the coinbase's output scripts, while the commitment itself would be placed in a coinbase output, creating a circular dependency. Presenting a proof of the commitment to the client may also take several hundred bytes, which could outweigh the gains from the additional compression the prev scripts allow (a sketch below gives a rough size estimate). The Tier Nolan proposal to create a "constant"-sized proof for future commitments, by constraining the size of the block and placing the commitments within the last few transactions in the block, is also discussed.

The benefit of experimentation ahead of the commitment is pointed out, as is the fact that delaying until the final form of the commitment is known could hinder that experimentation. While peers can still scan blocks directly when they disagree on filter content regardless of how the filter is constructed, one option lets a client fully construct the filter from a block alone while the other requires additional data (a sketch below illustrates the difference). This raises the question of whether to optimize for the ability to validate in a particular model or for lower bandwidth; depending on block composition, the overhead of receiving proofs of the commitment may outweigh the savings. Finally, the current state of non-existent validation in SPV software is brought up, with the counterargument that it is unfair to compare those who wish to implement this proposal with those who do not.
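To make the verification trade-off concrete, the sketch below contrasts how the items for the two filter variants are gathered. The types and function names are hypothetical simplifications, and the actual Golomb-coded set encoding of BIP 158 is omitted; the point is only that the outpoint-based items can be derived from the block alone, while the prev-script items require a lookup against data the block does not carry.

```go
package main

import "fmt"

// Hypothetical, simplified block structures. The real filter applies the
// BIP 158 Golomb-coded set construction to these items, omitted here.
type OutPoint struct {
	TxID  [32]byte
	Index uint32
}

type TxIn struct{ PrevOutPoint OutPoint }

type TxOut struct{ PkScript []byte }

type Tx struct {
	Inputs  []TxIn
	Outputs []TxOut
}

type Block struct{ Txs []Tx }

// outpointFilterItems gathers the items the "regular" filter indexes:
// every input's outpoint plus every output script. Nothing beyond the
// block itself is needed, so any peer holding the block can recompute
// the filter, and a client can check it with a single honest peer.
func outpointFilterItems(b *Block) [][]byte {
	var items [][]byte
	for _, tx := range b.Txs {
		for _, in := range tx.Inputs {
			item := make([]byte, 36)
			copy(item, in.PrevOutPoint.TxID[:])
			// Append the 4-byte little-endian output index to the txid.
			for i := 0; i < 4; i++ {
				item[32+i] = byte(in.PrevOutPoint.Index >> (8 * i))
			}
			items = append(items, item)
		}
		for _, out := range tx.Outputs {
			items = append(items, out.PkScript)
		}
	}
	return items
}

// prevScriptFilterItems gathers items for the prev-script variant. The
// scripts being spent live in earlier blocks, so constructing or
// verifying this filter needs extra data: a UTXO view, or an extended
// block message carrying the previous outputs.
func prevScriptFilterItems(b *Block, lookup func(OutPoint) ([]byte, error)) ([][]byte, error) {
	var items [][]byte
	for _, tx := range b.Txs {
		for _, in := range tx.Inputs {
			script, err := lookup(in.PrevOutPoint) // data not in the block
			if err != nil {
				return nil, err
			}
			items = append(items, script)
		}
		for _, out := range tx.Outputs {
			items = append(items, out.PkScript)
		}
	}
	return items, nil
}

func main() {
	b := &Block{Txs: []Tx{{
		Inputs:  []TxIn{{PrevOutPoint: OutPoint{Index: 1}}},
		Outputs: []TxOut{{PkScript: []byte{0x00, 0x14}}},
	}}}
	fmt.Println("outpoint-based items:", len(outpointFilterItems(b)))
}
```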
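The "several hundred bytes" figure for a commitment proof can be approximated with a back-of-the-envelope calculation, sketched below: a Merkle inclusion proof for the coinbase transaction costs one 32-byte hash per tree level, and the coinbase transaction itself must also be delivered so the client can extract the committed filter hash. The ~250-byte coinbase size and the transaction counts are assumptions for illustration, not figures from the thread.

```go
package main

import (
	"fmt"
	"math"
)

// merkleProofBytes approximates the size of a Merkle inclusion proof for
// the coinbase transaction in a block of numTxs transactions: one 32-byte
// sibling hash per tree level, plus the coinbase transaction itself.
func merkleProofBytes(numTxs, coinbaseSize int) int {
	depth := int(math.Ceil(math.Log2(float64(numTxs))))
	return depth*32 + coinbaseSize
}

func main() {
	// Assumed transaction counts; a full block today often carries a few
	// thousand transactions, so the proof lands at several hundred bytes.
	for _, n := range []int{500, 2500} {
		fmt.Printf("%d txs -> ~%d byte proof\n", n, merkleProofBytes(n, 250))
	}
}
```

Whether this overhead outweighs the prev-script compression gains depends on block composition, which is exactly the bandwidth question raised above.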
Updated on: 2023-05-20T08:27:06.661327+00:00