comments on BIP 100



Summary:

The core issue with Bitcoin scalability is not the linear (or quasilinear) per-block validation cost, nor the O(n^2) bandwidth issue; it is the total cost of validation. The n in O(n) is the size of the entire blockchain for every new validator that must synchronize, because short proofs cannot be constructed without requiring the validator to maintain the complete system state. There is currently no mechanism for directly compensating validators. A full validator that goes offline takes a long time to catch up with the rest of the network, and starting up a new validator from scratch will continue to be painful, even for experienced users. Satoshi's SPV is not a real solution: it lacks a sound validation security model, weakens privacy, and substantially complicates client implementations. Even so, it still does not scale well, because the validator must query every single block back to the first transaction. Algorithmic improvements are needed, but a hard fork takes a long time to deploy due to the non-upgrade risk.

Adam Back suggests that much of the industry is built on web 2.0 technology, using Bitcoin via a library in a web scripting language, often with consensus bugs, and outsourcing rather than running their own full node, so that the service offered to users is not even SPV-secure with respect to the operator. He believes that if we are technologists, we should be trying to move the technology forward, not just changing parameters, which either runs into an O(n^2) scaling wall or breaks decentralisation security. Computational complexity for the entire network is O(nm), where n = transactions and m = fully validating nodes, while global bandwidth is O(n^2) with n = users, i.e. O(n) bandwidth cost per user to the system. Adam argues that raising the block size limit is not enough, and that we need to focus on the big picture of building incrementally deployable algorithmic improvements.
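
A minimal back-of-the-envelope sketch in Python of how these complexity figures interact. The rates (transactions per user, fraction of users running full nodes) are illustrative assumptions, not figures from the discussion; the point is only that when every fully validating node processes every transaction, total work grows as O(n*m) and system-wide bandwidth grows roughly quadratically with the number of users, while each individual full participant still bears an O(n) cost.

    # Illustrative sketch (assumed parameters): total validation work is n_tx * m_nodes,
    # since every fully validating node processes every transaction.
    def network_costs(users, tx_per_user=2, nodes_per_user=0.01):
        n_tx = users * tx_per_user                      # n: transactions scale with users
        m_nodes = max(1, int(users * nodes_per_user))   # m: fully validating nodes
        total_validation = n_tx * m_nodes               # O(n*m) work across the network
        per_user_bandwidth = n_tx                       # each full participant downloads all txs: O(n)
        return n_tx, m_nodes, total_validation, per_user_bandwidth

    for users in (1_000, 10_000, 100_000):
        n_tx, m, total, per_user = network_costs(users)
        print(f"users={users:>7}  tx={n_tx:>8}  nodes={m:>5}  "
              f"total_validation={total:>12}  per_user_bw={per_user:>8}")

Running this shows total validation cost growing by roughly 100x for every 10x increase in users (the O(n^2) system-wide wall), while the per-user bandwidth column grows only 10x, which is the distinction the summary draws between per-user and whole-network cost.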


Updated on: 2023-06-09T23:05:16.002584+00:00