Author: Peter Tschipper 2015-11-11 18:35:01
Published on: 2015-11-11T18:35:01+00:00
The latest test results on compression ratios for the first 295,000 blocks show that serving compressed blocks is beneficial for nodes with bandwidth caps on their internet connections. The proposal is to compress blocks with a compression library and advertise compression as a service, so that peers can fall back to uncompressed data if compression or decompression fails or crashes. Compression would be applied at the datastream level in the code. The results show that most blocks and transactions contain runs of zeroes and/or highly common bit patterns, which yields useful compression even at small sizes. LZO could provide better compression, but at the cost of CPU performance and of relying on a less-reviewed, less field-tested library.

The blocks were bucketed by size, from 0-250 bytes up to 900KB-1MB. For the smallest bucket, the average block size was 215 bytes uncompressed versus 189 bytes compressed, a 12.40% reduction measured over 91,280 data points; for the largest bucket, 945,747 bytes versus 726,307 bytes, a 23.20% reduction over 304 data points.

In the discussion, Jeff Garzik suggests wrapping compression further at the stream level, while Tier Nolan suggests defining a "cblocks" message that carries multiple blocks, and similarly combining transactions into compressed "ctxs" messages. Johnathan Corgan reinforces that such trade-off decisions should be local and negotiated between peers.
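The thread excerpt above does not pin down a specific library or patch; as a rough illustration of what datastream-level compression of an already-serialized block payload could look like, the sketch below uses zlib's compress2/uncompress. The function names, buffer handling, and error strategy are assumptions for illustration, not the proposal's actual code.

```cpp
// Illustrative sketch only: compress a serialized payload (e.g. a block
// message) before sending, and restore it on receipt.
#include <zlib.h>
#include <cstdint>
#include <stdexcept>
#include <vector>

// Compress raw serialized bytes with zlib.
std::vector<uint8_t> CompressPayload(const std::vector<uint8_t>& raw)
{
    uLongf bound = compressBound(raw.size());
    std::vector<uint8_t> out(bound);
    // Z_BEST_COMPRESSION trades CPU for ratio; a cheaper level could be
    // chosen locally, in line with the thread's point about local trade-offs.
    if (compress2(out.data(), &bound, raw.data(), raw.size(),
                  Z_BEST_COMPRESSION) != Z_OK)
        throw std::runtime_error("compression failed; fall back to uncompressed");
    out.resize(bound);
    return out;
}

// Decompress, assuming the original (uncompressed) size is carried alongside
// the payload so the receiver can size its buffer.
std::vector<uint8_t> DecompressPayload(const std::vector<uint8_t>& comp,
                                       size_t original_size)
{
    std::vector<uint8_t> out(original_size);
    uLongf out_len = original_size;
    if (uncompress(out.data(), &out_len, comp.data(), comp.size()) != Z_OK ||
        out_len != original_size)
        throw std::runtime_error("decompression failed; request uncompressed");
    return out;
}
```

A node advertising compression as a service would use something like CompressPayload on outgoing block data only for peers that negotiated it, and treat any failure as a signal to fall back to the existing uncompressed messages.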