Huge wallets make Bitcoin Core unusable (Daemon+CLI & Qt) [combined summary]




Published on: 2022-08-20T15:10:57+00:00


Summary:

A developer named Felipe Micaroni Lalli raised two questions on the bitcoin-dev mailing list regarding optimization of the Bitcoin Core wallet module. The first question was whether a large wallet could be optimized without moving the funds to a new one. In response, it was suggested to use the removeprunedfunds RPC, which can remove old transactions and speed up balance calculations and transaction creation. The second question was about improving cache usage by keeping the entire wallet in memory; it was clarified that the wallet is already held entirely in memory. The wallet module's problem is a known one, and changing it takes significant time because it is a large module in Bitcoin Core.

Felipe Micaroni Lalli described the problem he faced with a 5-year-old production server wallet, which held 2,079,337 transactions and 446,503 generated addresses and was around 273 MB in size. The wallet's performance degraded exponentially, with most commands timing out. Increasing the timeout and the number of RPC threads in the config file made the situation worse. Even after moving wallet.dat to a fast SSD and increasing the cache size, performance was not sufficient. When the wallet was loaded in Bitcoin-Qt, the system became unresponsive, making it difficult to use. Felipe asked whether improvements could be made or whether the wallet feature should be removed altogether.

To keep a large Bitcoin Core wallet usable, one suggestion was simply to call removeprunedfunds on old transactions. This is tricky, however, because one has to ensure that each transaction is "safe to remove." A transaction is unsafe if it has a wallet UTXO that has not been spent, or was only recently spent. Additionally, if transaction B spends from transaction A, removing transaction B will make the wallet think transaction A is unspent when it is not. Pruning should therefore be done depth-first (a sketch of this procedure appears after the summary).

Bitcoin Core becomes almost useless for the wallet feature: the standard client is slow to sync, hard to use, and not preferred by end users, who opt for Electrum or Wasabi instead. It also becomes useless for servers in production, forcing them to adopt third-party solutions for huge wallets. The only current "solution" for huge wallets is to create a new one and send the funds there from time to time, but this raises privacy concerns and breaks monitoring of old addresses.

Felipe suggested caching the entire wallet in memory, though some code optimization might also be necessary. He asked whether it was possible to optimize a huge wallet without moving the funds to a new one, to improve cache usage, to reduce the wallet size, to ignore old addresses, and to improve I/O handling and CPU usage in the main thread of Bitcoin-Qt so that the window does not freeze on big and huge wallets. He proposed an "autoarchivehugewallets=1" option in the config file that would create a new wallet and move the funds automatically. Felipe included links to several related issues and tests that he thought could be useful for developers. He also mentioned another possible bug: even after the funds are moved to a new wallet, the old wallet keeps showing its old balance, and the rescanblockchain command takes a long time to fix it.

A user with a 5-year-old wallet containing 2,079,337 transactions and 446,503 generated addresses experienced exponential performance degradation when using the Bitcoin Core client. Most commands, such as "getbalance", "walletpassphrase" and "getreceivedbyaddress", timed out while the CPU sat at 100%, making the machine almost unusable.
The default configuration of 16 RPC threads and a 15-minute timeout, with only a few call attempts per minute, did not help. Increasing the timeout and/or the RPC threads in the config file made things worse. Loading the wallet in bitcoin-qt caused everything, including the operating system and UI, to become unresponsive.

This is bad because the standard client becomes almost useless for the wallet feature. The Qt wallet is already unpopular among end users, being slow to sync for the first time, hard to use, and not modern. It now also becomes useless for servers in production, forcing them to use third-party solutions for huge wallets. Currently, the only "solution" for huge wallets is to create a new one and send the funds there from time to time (a sketch of this workaround also appears below). However, this workaround is not elegant: it can break monitoring of old addresses, create privacy concerns, and unify lots of inputs in one big and expensive transaction. The problem could also surface for Lightning Network nodes that rely on the Bitcoin Core wallet infrastructure to open and close many channels over a long period.

The user suggested that, since moving the wallet from an HDD to an SSD already helped a lot, caching the entire wallet in memory could help even more, though some code optimization would be necessary. The following questions were raised: Can a huge wallet be "optimized" without moving the funds to a new one? Can the cache usage be improved somehow? Is it possible to reduce the wallet size? Can the CLI be told to ignore old addresses?
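
The depth-first pruning idea discussed above can be made concrete. The sketch below is not code from the thread: it shells out to bitcoin-cli against an already loaded wallet and removes deeply confirmed transactions with removeprunedfunds, removing the wallet ancestors a transaction spends from before the transaction itself. The wallet name, the confirmation cutoff, and the helper functions are illustrative assumptions, and the "recently spent" part of the safety check is only noted in a comment.

    #!/usr/bin/env python3
    """Sketch: depth-first pruning of old wallet transactions via removeprunedfunds.
    Assumes bitcoin-cli is on PATH and the target wallet is already loaded."""
    import json
    import subprocess

    WALLET = "hugewallet"      # hypothetical wallet name
    CUTOFF_CONFS = 100_000     # illustrative: only touch deeply buried transactions

    def cli(*args):
        """Run bitcoin-cli against the wallet and parse its JSON output."""
        out = subprocess.check_output(
            ["bitcoin-cli", f"-rpcwallet={WALLET}", *args], text=True
        )
        return json.loads(out) if out.strip() else None

    def wallet_tx(txid):
        """Return the wallet's decoded record of txid, or None if the wallet doesn't have it."""
        try:
            return cli("gettransaction", txid, "true", "true")  # include_watchonly, verbose
        except subprocess.CalledProcessError:
            return None

    # Transactions that still back an unspent wallet UTXO must be kept.
    # (A fuller check would also keep recently spent ones, as the thread notes.)
    unsafe = {u["txid"] for u in cli("listunspent", "0")}

    def prune(txid, removed):
        """Remove the wallet ancestors txid spends from, depth-first, then txid itself.
        Returns True if txid is gone from the wallet (or never was in it)."""
        if txid in removed:
            return True
        if txid in unsafe:
            return False
        entry = wallet_tx(txid)
        if entry is None:
            return True
        for vin in entry.get("decoded", {}).get("vin", []):
            parent = vin.get("txid")  # absent for coinbase inputs
            # If a parent cannot be removed, deleting this child would make that
            # parent's spent output look unspent, so keep the child too.
            if parent and not prune(parent, removed):
                return False
        cli("removeprunedfunds", txid)
        removed.add(txid)
        return True

    if __name__ == "__main__":
        removed = set()
        # A real tool would page through listtransactions instead of one huge call,
        # and iterate rather than recurse for very long spend chains.
        for item in cli("listtransactions", "*", "1000000", "0"):
            if item.get("confirmations", 0) >= CUTOFF_CONFS and "txid" in item:
                prune(item["txid"], removed)
        print(f"removed {len(removed)} transactions from the wallet")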

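For the "create a new wallet and move the funds" workaround, together with the stale-balance follow-up, a minimal sketch along the same lines might look like the following. The wallet names, the single sweep to one new address, and the unconditional rescanblockchain call are assumptions for illustration, not steps prescribed in the thread, and the sweep has exactly the input-merging and privacy downsides described above.

    #!/usr/bin/env python3
    """Sketch: archive a huge wallet by sweeping its funds to a fresh wallet."""
    import json
    import subprocess

    OLD, NEW = "hugewallet", "hugewallet-fresh"   # hypothetical wallet names

    def cli(wallet, *args):
        """Run bitcoin-cli against a specific wallet; keep amounts as decimal strings."""
        out = subprocess.check_output(
            ["bitcoin-cli", f"-rpcwallet={wallet}", *args], text=True
        ).strip()
        if not out:
            return None
        try:
            return json.loads(out, parse_float=str)
        except json.JSONDecodeError:
            return out  # bitcoin-cli prints bare strings (addresses, txids) unquoted

    # Create the replacement wallet (a node-level call, no -rpcwallet needed).
    subprocess.check_output(["bitcoin-cli", "createwallet", NEW], text=True)

    # Sweep the whole balance in one payment, subtracting the fee from the amount.
    # If the old wallet is encrypted, a walletpassphrase call is needed first (omitted).
    dest = cli(NEW, "getnewaddress", "archive-sweep")
    balance = cli(OLD, "getbalance")
    txid = cli(OLD, "sendtoaddress", dest, balance, "", "", "true")
    print("sweep txid:", txid)

    # The old wallet may keep reporting its old balance afterwards; a rescan is
    # one (slow) way to force it to catch up, as noted in the summary.
    cli(OLD, "rescanblockchain")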

Updated on: 2023-08-02T07:19:07.652968+00:00