Author: Nathan Cook 2017-06-12 16:27:14
The discussion in the email thread concerns the handling of transactions larger than 100kb. The problem is not that already-signed transactions of that size exist, or that there are good use cases for them; the real problem is that such transactions and use cases could exist, and there is no way to disallow them without possibly costing someone a lot of money. Reducing the limit slowly does not solve this.

Nathan Cook proposes making such transactions hard enough to confirm that no one will use them without a very good reason: any block containing an outsized transaction would have to be mined at a quadratically greater difficulty. The block would then be more expensive for its own miner, not just for the other miners and nodes validating it, and anyone who really needs a 400kb transaction can pay a miner to mine it. According to Cook, quadratic hashing is not risky when it is inherently limited by a corresponding reduction in the rate at which the "bad" blocks can be generated. Even so, there is no real benefit in large, expensive-to-validate transactions, so transactions over 1MB should remain invalid, as they are today.

The thread also discusses various possible maximum sizes. One suggestion is to set the limit to 1MB initially and, at a distant future block height (years away), automatically drop it to 500kb or 100kb. This would give anyone with existing systems or pre-signed transactions several years to adjust. Notification could be handled through a non-default parameter that must be explicitly set to continue using transactions larger than 100kb, so that no one is caught unaware when that future block height arrives. There are no real advantages to continuing to support transactions larger than 100kb, apart from accommodating legacy use cases and already-signed transactions.
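Cook's quadratic-difficulty idea can be sketched as follows. This is an illustrative model only, not consensus code: the constant names, the exact scaling rule, and the choice of penalizing the largest transaction in the block are all assumptions for the sake of the example.

```python
# Hypothetical sketch: a block whose largest transaction exceeds the 100kb
# standard limit must meet a difficulty target scaled up quadratically with
# that transaction's size. Names and constants are illustrative assumptions.

STANDARD_TX_LIMIT = 100_000     # 100kb: largest "normal" transaction
CONSENSUS_TX_LIMIT = 1_000_000  # 1MB: transactions above this stay invalid

def difficulty_multiplier(largest_tx_size: int) -> float:
    """Factor by which block difficulty increases for an outsized transaction."""
    if largest_tx_size > CONSENSUS_TX_LIMIT:
        raise ValueError("transaction exceeds consensus limit; block invalid")
    if largest_tx_size <= STANDARD_TX_LIMIT:
        return 1.0  # ordinary blocks are unaffected
    # Quadratic penalty: a 400kb transaction costs (400/100)^2 = 16x difficulty,
    # mirroring the quadratic cost of hashing/validating such a transaction.
    ratio = largest_tx_size / STANDARD_TX_LIMIT
    return ratio ** 2
```

Under this model the extra hashing cost falls on the miner who includes the transaction, which is exactly the incentive alignment Cook argues for: the quadratic validation cost is paid up front in proof of work, so "bad" blocks can only be produced at a correspondingly reduced rate.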
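The phased reduction discussed in the thread could look roughly like the sketch below. The activation height, the 100kb end state (rather than 500kb), and the opt-in flag name are all hypothetical; the point is only that the default drops at a known future height while an explicit non-default setting preserves the old behavior.

```python
# Illustrative sketch of a scheduled standardness-limit drop. The activation
# height and the keep_legacy opt-in flag are assumptions, not real parameters.

ACTIVATION_HEIGHT = 800_000  # assumed block height "years away"

def max_standard_tx_size(height: int, keep_legacy: bool = False) -> int:
    """Largest transaction relayed/mined by default at a given block height."""
    if height < ACTIVATION_HEIGHT or keep_legacy:
        return 1_000_000  # 1MB until activation, or with the explicit opt-in
    return 100_000        # 100kb default once the future height is reached
```

Because nodes that never touch the opt-in parameter see their effective limit change at the activation height, operators of legacy systems are forced to notice and make a deliberate choice, which is the notification mechanism the thread suggests.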
Updated on: 2023-06-12T01:17:40.282221+00:00