A validation-cost metric for aggregate limits and fee determination [combined summary]



Original discussion: bitcoin-dev mailing list

Published on: 2015-11-05T09:23:44+00:00


Summary:

The author discusses the problem of measuring the CPU and bandwidth costs of validating a block and proposes a common unit of measurement: the "percentage of maximum validation ability for some reference hardware running with a network connection capable of some reference bandwidth." Both the CPU cost and the bandwidth cost of a block are expressed in this unit.

The author also addresses measuring UTXO (Unspent Transaction Output) set growth, which, unlike the other costs, can be negative as well as positive. The suggestion is to choose a reasonable maximum growth rate and express a block's actual UTXO growth as a percentage of that maximum. This percentage is then combined with the CPU and bandwidth percentages to determine whether the block is valid.

The author then considers whether miners or wallets would need to solve a bin-packing problem to decide which transactions to include in a block and what fees to attach. They propose selecting transactions by fee-per-validation-cost rather than fee-per-kilobyte, noting that transactions which create more UTXOs than they spend would incur a higher cost. Because the cost breaks down into three components (UTXO growth, CPU, and bandwidth), miners can set reasonable transaction selection policies that keep each component under its cap.

At the first Scaling Bitcoin workshop in Montreal, a presentation highlighted the problem of "bad blocks" that take too long to validate. The system's design parameters assume that validation cost scales linearly with transaction or block size, but certain kinds of transactions have quadratically scaling validation costs, which opens a gap between average-case and worst-case validation times.

Proposed mitigations, such as Gavin's code to track bytes hashed and enforce a separate per-block limit on them, still leave a gap between average-case and worst-case validation costs, and they make transaction selection and fee determination more complex. The presenter instead suggests determining a transaction's "size" as a linear function of the different costs involved. Others have suggested a "net-UTXO" metric that charges extra for creating unspent outputs; this makes small dust outputs significantly more expensive than regular spend outputs, but it also widens the gap between average-case and worst-case block validation times.

The presenter plans to submit a talk proposal on this topic for Scaling Bitcoin and welcomes feedback from the developer community.
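The summary does not pin down exactly how the three percentages are combined; the sketch below assumes the simplest reading, in which they are summed and a block is valid only if the total stays at or below 100%. The reference limits (MAX_CPU_COST, MAX_BANDWIDTH_BYTES, MAX_UTXO_GROWTH) are hypothetical placeholders, not figures from the proposal.

```python
# Minimal sketch of the combined percentage check, assuming the three
# percentages are simply summed; all reference limits are hypothetical.
MAX_CPU_COST = 10_000_000        # hypothetical per-block CPU budget (cost units)
MAX_BANDWIDTH_BYTES = 1_000_000  # hypothetical per-block bandwidth budget (bytes)
MAX_UTXO_GROWTH = 20_000         # hypothetical cap on net new UTXOs per block

def block_within_limits(cpu_cost, bandwidth_bytes, utxo_growth):
    """Express each resource as a percentage of its per-block maximum and
    accept the block if the combined total stays at or below 100%.

    utxo_growth may be negative (the block shrinks the UTXO set), which
    simply lowers the combined total.
    """
    cpu_pct = 100.0 * cpu_cost / MAX_CPU_COST
    bandwidth_pct = 100.0 * bandwidth_bytes / MAX_BANDWIDTH_BYTES
    utxo_pct = 100.0 * utxo_growth / MAX_UTXO_GROWTH
    return cpu_pct + bandwidth_pct + utxo_pct <= 100.0
```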

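As a rough illustration of the quadratic scaling the Montreal presentation referred to: under legacy SIGHASH_ALL signature hashing, each input hashes roughly the whole transaction, so the total bytes hashed grow with the number of inputs times the transaction size. The toy model and numbers below are made up for illustration and do not reflect Bitcoin's actual serialization.

```python
# Toy model of legacy signature hashing: each of the n inputs hashes roughly
# the full transaction, so the work grows approximately quadratically as a
# transaction built from many inputs grows.
def approx_bytes_hashed(num_inputs, tx_size):
    return num_inputs * tx_size

# Doubling such a transaction roughly quadruples the hashing work.
small = approx_bytes_hashed(num_inputs=100, tx_size=15_000)   # ~1.5 MB hashed
large = approx_bytes_hashed(num_inputs=200, tx_size=30_000)   # ~6.0 MB hashed
assert large == 4 * small
```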

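To illustrate how a miner could avoid an exact bin-packing problem, here is a minimal sketch of greedy selection by fee per unit of validation cost, with a transaction's cost defined as a linear function of its components. The Tx record, the weights, and the caps are all hypothetical; neither the proposal nor the presentation specifies concrete values.

```python
# Sketch of fee-per-validation-cost transaction selection under separate
# per-component caps. All fields, weights, and caps are illustrative.
from dataclasses import dataclass

@dataclass
class Tx:
    fee: int          # satoshis offered by the transaction
    cpu_cost: int     # e.g. bytes hashed for signature checks
    size: int         # serialized size in bytes (bandwidth cost)
    utxo_delta: int   # outputs created minus inputs spent (may be negative)

def validation_cost(tx, cpu_weight=1.0, size_weight=1.0, utxo_weight=1.0):
    """Linear function of the cost components; the weights are illustrative."""
    return (cpu_weight * tx.cpu_cost
            + size_weight * tx.size
            + utxo_weight * max(tx.utxo_delta, 0))

def select_transactions(mempool, cpu_cap, size_cap, utxo_cap):
    """Greedily pick transactions by fee per unit of validation cost while
    keeping each cost component under its own cap (no exact bin-packing)."""
    chosen, cpu, size, utxo = [], 0, 0, 0
    ranked = sorted(mempool,
                    key=lambda t: t.fee / max(validation_cost(t), 1e-9),
                    reverse=True)
    for tx in ranked:
        if (cpu + tx.cpu_cost <= cpu_cap
                and size + tx.size <= size_cap
                and utxo + tx.utxo_delta <= utxo_cap):
            chosen.append(tx)
            cpu += tx.cpu_cost
            size += tx.size
            utxo += tx.utxo_delta
    return chosen
```

Under this kind of policy a transaction that creates many new outputs has a larger validation cost and therefore needs a proportionally larger fee to be selected, which matches the fee-per-validation-cost idea described above.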
Updated on: 2023-08-01T16:48:19.881401+00:00