Published on: 2017-04-11T10:04:01+00:00
In a recent email discussion on the bitcoin-dev mailing list, Tom Harding suggested that a node maintaining a transaction index could let light node applications ask peers to prove the existence and spentness of transaction outputs (TXOs). Gregory Maxwell countered that this would only work with additional commitment structures, such as Peter Todd's stxo/txo commitment designs; the exchange included a link to Todd's proposal for delayed TXO commitments. Such commitments would also strengthen light nodes by letting them detect invalid transactions before they are mined.

Tomas, the developer of Bitcrust, stressed the importance of separating protocol from implementation. While UTXO data is always a resource cost for script validation, the ratio of the different costs can vary across implementations: in Bitcrust, peak-load verification is slower than Core's when the last few blocks contain many inputs. Another participant suggested limiting the number of 1-in-100-out transactions to curb UTXO growth and improve efficiency, mentioned experimenting with regtest to compare Bitcrust's performance against Core's, and asked about Bitcrust's minimum disk and memory usage compared to Core's pruning mode.
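The kind of proof Harding and Maxwell discuss, demonstrating that a TXO exists in a committed set, typically reduces to checking a Merkle branch against a committed root. The following is a minimal illustrative sketch of that mechanism, not Todd's actual stxo/txo commitment design; the record format and the Bitcoin-style duplicate-last-node rule for odd levels are assumptions for illustration.

```python
import hashlib

def sha256d(b: bytes) -> bytes:
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(leaves):
    """Fold leaves up to a single root, duplicating the last node on odd levels."""
    level = [sha256d(l) for l in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [sha256d(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Collect (sibling_hash, sibling_is_left) pairs from leaf to root."""
    level = [sha256d(l) for l in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append((level[index ^ 1], index % 2 == 1))
        level = [sha256d(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_proof(leaf, proof, root):
    """What a light client would run: rebuild the root from a peer-supplied branch."""
    h = sha256d(leaf)
    for sibling, sibling_is_left in proof:
        h = sha256d(sibling + h) if sibling_is_left else sha256d(h + sibling)
    return h == root
```

A light node holding only the committed root could then verify a peer's branch for a single TXO without storing the set itself; proof size grows logarithmically with the number of committed outputs.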
Tomas argued that the minimal disk footprint of a Bitcoin node cannot be less than the set of all unspent outputs: improvements such as better locality or keeping spentness in memory do not change the fact that UTXO data remains a significant long-term resource cost. During peak-load block validation with pre-synced transactions, however, storage access can be minimized. The conversation also touched on misaligned incentives and the impact of small outputs on long-term costs.

The list then debated whether application-layer caching is necessary, or whether letting the operating system handle disk and memory-map caching would be more effective. An explicit cache helps on low-memory systems but is actually a de-optimization on high-RAM systems. Eric Voskuil suggested that caching can reduce the need for paging, while another participant argued that sorting data by frequency of use would beat any application-layer cache, and that caching decisions are best left to the operating system, which exploits spatial and temporal locality of reference.

On optimization for low-memory platforms, Bram Cohen suggested maximizing memory usage and minimizing disk access, but others disagreed, holding that an application-layer cache only makes sense when there is a clear distinction between often-used and rarely-used data. Cohen also asked whether cramming as much as possible into memory and minimizing disk access overlooks considerations such as startup time, warm-up time, shutdown time, and fault tolerance.
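The spatial-locality argument can be made concrete with some back-of-the-envelope arithmetic: if hot records are packed contiguously on disk, one page fault pulls in many of them at once, whereas scattered hot records fault once per record. The record and page sizes below are illustrative assumptions, not figures from the thread.

```python
RECORD = 32   # bytes per output record (illustrative assumption)
PAGE = 4096   # typical OS page size

def pages_touched(indices, record=RECORD, page=PAGE):
    """Distinct OS pages pulled into the page cache to read these records."""
    return len({(i * record) // page for i in indices})

# 1,000 "hot" records scattered uniformly through a 200,000-record file:
# each record lands on its own page, so ~1,000 page faults on a cold cache.
print(pages_touched(range(0, 200_000, 200)))  # 1000

# The same 1,000 hot records sorted to the front of the file:
# 4096 / 32 = 128 records fit per page, so only 8 pages are touched.
print(pages_touched(range(1000)))  # 8
```

This is why sorting by frequency of use can beat an explicit application-layer cache: the OS page cache already retains recently touched pages, and a good on-disk layout makes each retained page far more valuable.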
The discussion highlighted the trade-offs and complexities involved in managing these considerations.

Tomas introduced Bitcrust, a new Bitcoin implementation that takes a different approach to verifying transaction order: instead of consulting an index of unspent outputs, double spends are detected using a spend-tree. Bitcrust has shown strong performance characteristics, particularly in peak-load order validation, and the thread raised the possibility of integrating it into Bitcoin Core as a selectable feature. Tomas also proposed addressing UTXO growth at the protocol level by reversing the costs of outputs and inputs, though concerns were raised about long-term resource requirements and whether this protocol change is worth considering.

Tomas further explained how Bitcrust separates script validation from order validation and deals with validity rules that change with block height. Script validation requires a (U)TXO database of around 2 GB of outputs; order validation requires a spent-index (~200 MB) and a spend-tree (~500 MB). Pruning the 5.7 GB full spend tree, he noted, is not worth it at the moment.

Overall, the bitcoin-dev discussions covered transaction indexes, resource costs, caching, and optimization for low-memory platforms, offering a range of perspectives and considerations in Bitcoin development.
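The spend-tree idea above can be sketched as follows. This is a hypothetical simplification, not Bitcrust's actual code (Bitcrust is written in Rust): each block on a branch records which outpoints it spends, a new spend is checked by walking the branch back toward an index boundary, and older history is covered by the aggregated spent-index. The class and function names are invented for illustration.

```python
class Block:
    """One node of the spend-tree; competing branches simply share ancestors."""
    def __init__(self, parent, spends):
        self.parent = parent       # previous block, or None at the index boundary
        self.spends = set(spends)  # outpoints spent in this block

def is_double_spend(tip, outpoint, spent_index):
    """Walk the branch from `tip` back to the boundary looking for a prior
    spend of `outpoint`; fall back to the aggregated spent-index for
    everything older than the tree."""
    block = tip
    while block is not None:
        if outpoint in block.spends:
            return True            # already spent on this branch
        block = block.parent
    return outpoint in spent_index
```

Note what this checks and what it does not: order validation here only answers "was this output already spent on this branch?"; proving the output exists and that its script is satisfied is the separate script-validation step, which is what still needs the TXO data. Because spends are recorded per branch, a reorg needs no UTXO-set rollback: the competing branch is just another path through the tree.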
Updated on: 2023-08-01T20:20:45.215731+00:00