Author: Cameron Garnham
Published on: 2015-05-08T03:12:22+00:00
A member of the Bitcoin community recently suggested a different approach to pre-hard-fork soft-forks: setting a 'block size cap' dynamically, in much the same way that difficulty is adjusted. Every 2016 blocks, the average size of the preceding blocks would be computed and multiplied by 1.5; blocks larger than the result would be rejected for the next 2016-block period. The cap would initially be bounded between a minimum of 100 kB and a maximum of 990 kB (deliberately not 1 MB). This rule would stop a rogue miner from creating very large blocks if the fixed block-size limit were removed. The proposal could be implemented by miners without breaking any existing clients and could produce better dynamic fee pressure.

Pieter Wuille responded that he would not modify his node even if 99% of miners went along with such a change. If the fork preferred by Roy Badami had only 1% of the hash power, it would be vulnerable not just to a 51% attack but to a '501% attack', making Bitcoin effectively dead. Before anyone, miners or otherwise, attempted such a change, they would need to convince users and be sure those users would actually modify their code.
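The adjustment rule described in the proposal can be sketched as follows. This is a minimal illustration, not an implementation from the thread; the function name, byte-denominated sizes, and the clamping behavior at the bounds are assumptions.

```python
def next_block_size_cap(block_sizes, floor=100_000, ceiling=990_000):
    """Compute the block size cap for the next 2016-block period.

    block_sizes: sizes (in bytes) of the previous 2016 blocks.
    Per the proposal: take the average size, multiply by 1.5,
    and keep the result within [100 kB, 990 kB].
    """
    average = sum(block_sizes) / len(block_sizes)
    cap = average * 1.5
    # Clamp to the proposed initial min/max bounds.
    return max(floor, min(ceiling, cap))
```

For example, a period of 400 kB blocks would yield a 600 kB cap for the next period, while a period of near-empty blocks would still allow 100 kB blocks, and no cap could exceed 990 kB.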
Updated on: 2023-06-09T19:52:49.959194+00:00