Author: Jonas Schnelli 2017-04-18 07:43:52
Published on: 2017-04-18T07:43:52+00:00
The writer of the email agrees that 100 GB of data is a significant barrier to more people running full nodes. They note that bootstrapping a full node on a consumer system with default parameters takes days, and that during this initial sync, CPU consumption is the biggest UX blocker. The writer suggests that pruned nodes should be able to partially serve historical blocks, for example via a mode in which a node signals "I keep the last 1000 blocks". They also suggest that historical blocks could be made available on CDNs or other file-storage networks to reduce upstream bandwidth consumption. In response to the proposal, the writer asks whether there is a fingerprinting risk if peers have to pick a segmentation index, and whether SPV bloom-filter clients can use fragmented blocks to filter transactions.
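The "I keep the last 1000 blocks" idea can be sketched as a simple retention check: given a peer's advertised retention window and the current chain tip, decide whether that peer can serve a block at a given height. This is an illustrative sketch only; the function name, parameters, and the 1000-block default are assumptions drawn from the email, not Bitcoin Core's actual implementation.

```python
def peer_can_serve(block_height: int, tip_height: int, retained: int = 1000) -> bool:
    """Illustrative check: can a pruned peer that keeps the last `retained`
    blocks serve the block at `block_height`, given the chain tip?

    The peer stores blocks in the half-open height range
    (tip_height - retained, tip_height].
    """
    if block_height < 0 or block_height > tip_height:
        return False  # unknown or not-yet-mined height
    return block_height > tip_height - retained
```

For example, with a tip at height 500000 and a 1000-block window, such a peer could serve block 499500 but not block 100; requests for older blocks would have to go to archival full nodes or, as the email suggests, to external storage such as a CDN.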