[bug]: listpayments hangs node and eats RAM when lots of payments in the DB #9709
Comments
Here's another reason to limit …
I'm not sure if this limit is because of …
I would expect to have the same kind of issue with …
Performance is even worse for …
Also mildly related: #9729. Generally, I think we should not recommend postgres for large nodes right now until #9147 is complete, which should mitigate this issue (but not solve it, particularly items 1, 2, and 3 of #9709 (comment)). If this issue is not fixed and #9147 is complete we may no longer crash our node from RAM exhaustion (I have always killed the node before it crashed from RAM exhaustion), but we will still hit #9709 (comment). Completing #9147 makes things more scalable, but the same issues described above will still be hit once a certain scale is reached.
Your environment
`--lnd.db.backend=sqlite --lnd.db.use-native-sql`
Steps to reproduce
I have a big database.

If I constrain with `--max_payments 2`, I can find that I have 5,835,721 payments in the DB. This takes less than 7 seconds. However, if I try to run the same query unconstrained,
it churns forever at around 77% CPU and 26 MB/s disk read, and RAM usage keeps growing. If I Ctrl+C the command, the node keeps churning through the query (I assume). The command would presumably finish eventually if the system doesn't run out of RAM, but I haven't waited to find out.
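As a workaround until this is fixed, the query can be kept bounded by paginating client-side instead of asking for everything at once. The sketch below is illustrative, not lnd code: `fetch_page` is a hypothetical helper standing in for a `ListPayments` call driven by its `index_offset`/`max_payments` request fields, and the in-memory dict stands in for the payments DB.

```python
# Client-side pagination sketch. fetch_page() is a hypothetical stand-in
# for an lnd ListPayments call using index_offset / max_payments; the
# dict below is a fake payments DB keyed by payment index.

DB = {i: f"payment-{i}" for i in range(1, 23)}  # 22 fake payments

def fetch_page(index_offset, max_payments):
    """Return up to max_payments payments with index > index_offset,
    plus the last index seen (to use as the next offset)."""
    keys = sorted(k for k in DB if k > index_offset)[:max_payments]
    payments = [DB[k] for k in keys]
    last_index = keys[-1] if keys else index_offset
    return payments, last_index

def iter_all_payments(page_size=5):
    """Stream payments page by page, never holding more than one
    page in memory at a time."""
    offset = 0
    while True:
        page, offset = fetch_page(offset, page_size)
        if not page:
            return
        yield from page

total = sum(1 for _ in iter_all_payments())
print(total)  # 22: all payments seen, at most 5 resident per page
```

This bounds both server work per request and the size of each response, at the cost of more round trips.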
Expected behaviour
A sensible default limit for `--max_payments`. This should avoid accidentally making queries that take forever and use all system RAM, and even if there is enough RAM to complete the query, it avoids blasting a huge amount of data back to the client (above I piped to `/dev/null` as a precaution).
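The requested behaviour amounts to a small guard on the server side. This is a hedged sketch of that idea only; the function name, the default value of 100, and the "0 means unlimited" convention are illustrative assumptions, not lnd's actual API.

```python
# Hypothetical server-side guard: treat an unset/zero max_payments
# ("return everything") as a bounded default instead of scanning the
# whole payments store. Names and the default are illustrative.

DEFAULT_MAX_PAYMENTS = 100

def effective_limit(requested: int) -> int:
    """Clamp a client-supplied max_payments to a safe, non-zero value."""
    if requested <= 0:  # assumed convention: 0 means "no limit"
        return DEFAULT_MAX_PAYMENTS
    return requested

print(effective_limit(0))  # 100
print(effective_limit(2))  # 2
```

A client that genuinely wants everything could still page through with the returned index offsets, but no single request would be unbounded.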