Full query result being loaded into RAM despite ON_DEMAND_RECORD_COUNT=1000 #5797
Comments
I'm thinking the issue may be related to the decision not to use server-side cursors here: #947. My understanding is that both sync and async queries load the whole result into RAM; the only difference is that async queries don't block while they do it. So async queries give you the increased responsiveness, but not reduced RAM usage.
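For context, here's a minimal sketch of the difference in plain psycopg2 (this is not pgAdmin's driver code; the DSN and table name are placeholders). A default client-side cursor buffers the entire result in the Python process, while a named (server-side) cursor keeps the result in PostgreSQL and streams it in batches:

```python
import psycopg2

# Placeholder connection string; adjust for your environment.
conn = psycopg2.connect("dbname=test user=postgres")

# Client-side cursor: execute() pulls the ENTIRE result set into the Python
# process, so RAM grows with the size of the table.
with conn.cursor() as cur:
    cur.execute("SELECT * FROM bigtable")
    first_batch = cur.fetchmany(1000)  # the remaining rows are already in memory

# Named (server-side) cursor: the result stays on the server and rows are
# transferred in chunks of `itersize`, so client RAM stays roughly constant.
with conn.cursor(name="stream_bigtable") as cur:
    cur.itersize = 1000
    cur.execute("SELECT * FROM bigtable")
    for row in cur:  # fetches 1000 rows at a time behind the scenes
        pass  # process each row here

conn.close()
```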
Also wanted to add: while testing, I was noticing a memory leak. I'd run a large query then a small query, but the RAM usage of the large query would stick around, even after closing the query tool. But it wasn't super consistent.
Hi @mac-chaffee, …
@adityatoshniwal I ran …
If running pgAdmin in the official container, just … I also think I know why I saw memory-leaky issues. Without server-side cursors, every time you run a query, the full result is loaded into RAM, just in the background. This is controlled by StartRunningQuery. But since the transaction object is designed to outlive the request cycle, running a large query means you have a large transaction object still occupying RAM even after the user has closed the query tool. So server-side cursors should fix that issue as well. (Running a different query after running a large query does seem to free up the RAM usage of the large query, so I take back what I said earlier. I only see the memory leak when closing the query tool.)
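To make the retention issue concrete, here is a simplified, hypothetical sketch of that pattern (the names QueryTransaction, TRANSACTIONS, and start_running_query are illustrative, not pgAdmin's actual StartRunningQuery code):

```python
# Hypothetical sketch of the retention pattern described above, not pgAdmin's real code.
TRANSACTIONS = {}  # keyed by transaction id; lives for the whole server process


class QueryTransaction:
    def __init__(self, trans_id):
        self.trans_id = trans_id
        self.rows = []  # without a server-side cursor, the full result ends up here

    def run(self, cursor, sql):
        cursor.execute(sql)
        self.rows = cursor.fetchall()  # e.g. ~2 GB for a ~6M-row table


def start_running_query(trans_id, cursor, sql):
    txn = QueryTransaction(trans_id)
    TRANSACTIONS[trans_id] = txn  # the object outlives the HTTP request cycle
    txn.run(cursor, sql)


# Closing the query tool in the browser does not by itself remove
# TRANSACTIONS[trans_id], so the buffered rows keep occupying RAM until the
# entry is cleaned up (or until a server-side cursor avoids buffering at all).
```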
@mac-chaffee From what I can see, server-side cursors are already used. The RAM usage must be because of the UI storing the rows. Need to dig more.
Here's a screenshot of the devtools after I run a large query (~6 million rows, ~2GB). Note that the initial request returns first, and the content is then fetched by a follow-up request. Server cursors are supported here, but that function is never called with the server-cursor option:

pgadmin4/web/pgadmin/utils/driver/psycopg2/connection.py, lines 1024 to 1040 in cf94860
Note the drawbacks mentioned in the other ticket I linked above; I'm not sure if this is still applicable or not: https://www.postgresql.org/message-id/flat/CAFiP3vxaM9ixpmaM04JAkTT1ONwmnMPwUCbK%2BkzCN%2Br6F%2B7-ZA%40mail.gmail.com#CAFiP3vxaM9ixpmaM04JAkTT1ONwmnMPwUCbK+kzCN+r6F+7-ZA@mail.gmail.com
I tried just setting …
Seems you can't mix async queries and server-side cursors together. EDIT: If you also set …
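That matches psycopg2's documented behavior: asynchronous connections cannot produce named (server-side) cursors. A small sketch of the limitation in plain psycopg2 (the DSN is a placeholder):

```python
import psycopg2
from psycopg2.extras import wait_select

# Placeholder DSN; async_=1 opens the connection in asynchronous mode.
aconn = psycopg2.connect("dbname=test user=postgres", async_=1)
wait_select(aconn)  # poll until the async connection is ready

try:
    # Named (server-side) cursors are not supported on async connections,
    # so this is expected to fail (typically with a ProgrammingError).
    cur = aconn.cursor(name="stream_bigtable")
except psycopg2.Error as exc:
    print("server-side cursor on async connection rejected:", exc)
```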
It may also be good to take inspiration from psql itself. It pretty much always fetches the whole result synchronously, except for when FETCH_COUNT is set.
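For reference, when FETCH_COUNT is set, psql roughly wraps the query in an explicit SQL cursor and pulls rows in FETCH_COUNT-sized chunks. A rough Python sketch of that pattern (the connection string, cursor name, and table are placeholders):

```python
import psycopg2

# Placeholder connection string and table name.
conn = psycopg2.connect("dbname=test user=postgres")

# Roughly what psql does with \set FETCH_COUNT 1000: declare an explicit
# SQL-level cursor and fetch the result in 1000-row chunks.
with conn, conn.cursor() as cur:
    cur.execute(
        "DECLARE fetch_count_demo NO SCROLL CURSOR FOR SELECT * FROM bigtable"
    )
    while True:
        cur.execute("FETCH FORWARD 1000 FROM fetch_count_demo")
        rows = cur.fetchall()
        if not rows:
            break
        # Only this 1000-row chunk is held in client RAM at a time.
    cur.execute("CLOSE fetch_count_demo")
```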
Describe the bug
When running something like
select * from bigtable
on a table with many rows, the RAM usage of pgAdmin balloons to roughly the size of the table. This happens in the "query tool", which is supposed to fetch and display only 1000 rows at a time, which makes me think this is a bug.

To Reproduce
Steps to reproduce the behavior:
1. In the query tool, run a query against a table with many rows, e.g.:
select * from bigtable
2. Observe that RAM usage spikes by about 2GB while the query is executing, then drops back down.
Expected behavior
RAM usage should remain fairly low, about the size it would take to hold 1000 rows (give or take). You can see this expected behavior by doing the same query in plain psql after running \set FETCH_COUNT 1000.

Error message
If RAM usage is large enough to invoke the OOMKiller, the user will see an error like "Error communicating with the pgAdmin server process".
Desktop (please complete the following information):