Why are we bleeding nodes?

Summary:

Mark Friedenbach, a Bitcoin developer, discussed the problem of the network losing nodes due to resource requirements. Running a full node on his home DSL connection periodically makes his other internet activity unresponsive, and he believes the current resource demands are pushing out casual users, though this may not account for all lost nodes. In his view, however, this is an implementation issue, not a requirements issue. Under the current Bitcoin validity rules, it should be reasonable to run a full contributing node with no more than 30 kbit/s inbound and 60 kbit/s outbound. That budget means receiving two copies of everything, blocks plus transactions, and sending out four copies; as long as a node sends out more than it takes in, it is contributing to the network's capacity. He added that not every node needs to contribute super-low-latency capacity and suggested throwing in a factor of two for bursting. The current implementation does not achieve this, chiefly because it lacks headers-first synchronization and parallel block fetch, but that makes it an implementation problem to be addressed rather than a flaw in the requirements.
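To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch (not from the original thread) assuming the 1 MB maximum block size and 10-minute target interval in force at the time, and treating "one copy of everything" as roughly the block byte stream, since loose transactions are largely the same bytes that later appear in blocks:

```python
# Rough bandwidth math behind the 30/60 kbit/s figures (a sketch under
# the assumptions stated above, not code from the original post).

BLOCK_BYTES = 1_000_000      # consensus maximum block size at the time
BLOCK_INTERVAL_S = 600       # target block interval: 10 minutes

# "One copy of everything" in kbit/s: block bytes per second, as bits.
base_kbit_s = BLOCK_BYTES * 8 / BLOCK_INTERVAL_S / 1000   # ~13.3 kbit/s

inbound = 2 * base_kbit_s    # receive everything twice: as a loose tx, then in a block
outbound = 4 * base_kbit_s   # relay four copies onward to peers

print(f"one copy of everything: {base_kbit_s:.1f} kbit/s")
print(f"steady-state inbound  (2 copies): {inbound:.1f} kbit/s (quoted: ~30)")
print(f"steady-state outbound (4 copies): {outbound:.1f} kbit/s (quoted: ~60)")
print(f"with 2x burst headroom: {2 * inbound:.0f} in / {2 * outbound:.0f} out kbit/s")
```

The computed steady-state rates (~27 kbit/s in, ~53 kbit/s out) land just under the quoted 30/60 kbit/s figures, consistent with those numbers being round ceilings before the suggested factor-of-two burst allowance.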

Updated on: 2023-05-19T18:32:02.260282+00:00