General performance issues for thick mode #1731


Open
deostroll opened this issue Mar 21, 2025 · 7 comments

@deostroll

We are seeing some general performance issues with Thick mode. We understand Thin mode was developed to address most of them, but many of our client environments are forced to stay on Thick mode. Please work towards reducing these Thick mode performance issues.

In general, we note two observations:

  1. As we move to higher versions (from 5.5.0), API response times increase (i.e. throughput decreases).
  2. Memory consumption is comparatively very high.

Performance summary by version

We have tabulated throughput tests from version 5.5.0 to the latest 6.8.0.

| Version | Total throughput (req/sec) | Create (req/sec) | FindById (req/sec) | Update (req/sec) | Delete (req/sec) |
| ------- | -------------------------- | ---------------- | ------------------ | ---------------- | ---------------- |
| 5.5.0   | 1184.36                    | 403.60           | 403.93             | 296.46           | 297.01           |
| 6.7.0   | 1142.23                    | 387.82           | 387.45             | 286.22           | 286.67           |
| 6.7.1   | 1047.45                    | 349.34           | 349.14             | 262.15           | 262.25           |
| 6.8.0   | 954.48                     | 326.61           | 326.16             | 239.01           | 239.28           |

Please find attached the sample Express.js app used for performance testing, along with the JMeter script and instructions on how to run it: app.zip
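
For orientation, a minimal sketch of the shape of such a test app, assuming a simple TODOS (ID, TITLE) table and DB_USER/DB_PASSWORD/DB_CONNECT_STRING environment variables; the attached app.zip is the authoritative version and may differ:

```js
// Sketch only: table name, route paths and pool sizes are assumptions,
// not taken from the attached app.zip.
const express = require('express');
const oracledb = require('oracledb');

oracledb.initOracleClient();                      // force Thick mode (Instant Client)
oracledb.outFormat = oracledb.OUT_FORMAT_OBJECT;

const app = express();
app.use(express.json());

// Create -- corresponds to the "Create (req/sec)" column above
app.post('/todos', async (req, res) => {
  const conn = await oracledb.getConnection();    // checkout from the default pool
  try {
    const r = await conn.execute(
      `INSERT INTO todos (title) VALUES (:title) RETURNING id INTO :id`,
      { title: req.body.title, id: { dir: oracledb.BIND_OUT, type: oracledb.NUMBER } },
      { autoCommit: true });
    res.json({ id: r.outBinds.id[0] });
  } finally {
    await conn.close();                           // return the connection to the pool
  }
});

// FindById -- Update and Delete follow the same pattern
app.get('/todos/:id', async (req, res) => {
  const conn = await oracledb.getConnection();
  try {
    const r = await conn.execute(`SELECT id, title FROM todos WHERE id = :id`, [req.params.id]);
    res.json(r.rows[0]);
  } finally {
    await conn.close();
  }
});

async function init() {
  await oracledb.createPool({
    user: process.env.DB_USER,
    password: process.env.DB_PASSWORD,
    connectString: process.env.DB_CONNECT_STRING,
    poolMin: 10,
    poolMax: 10,
  });
  app.listen(3000);
}

init();
```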

Comparative RSS memory performance

We have compared RSS memory usage with the Postgres JavaScript client. While it is not a fair comparison, the difference is significant. This is captured in the attached PDF. The JMeter file shared earlier can be tweaked and a test loop authored: the loop performs a load burst, e.g. with a concurrency of 10 and 1000 requests, followed by an idle period of 10 minutes. The loop is repeated a sufficient number of times, or for a duration of more than 2 hours. This is how the graphs were composed.

Memory consumption comparison 1.pdf

PS: all these tests were done on RHEL Linux with Oracle Instant Client 21.8.
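
One way to capture the RSS samples for graphs like these (a sketch; the sampling interval and output file are placeholders, not necessarily what was used for the attached PDF):

```js
// Log the Node.js process RSS (in MB) once a minute while JMeter drives the
// burst/idle loop from outside; plot the resulting CSV afterwards.
const fs = require('fs');

setInterval(() => {
  const rssMb = (process.memoryUsage().rss / 1048576).toFixed(1);
  fs.appendFileSync('rss.csv', `${new Date().toISOString()},${rssMb}\n`);
}, 60 * 1000);
```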

@deostroll deostroll added the bug label Mar 21, 2025
@cjbj
Member

cjbj commented Mar 21, 2025

Thanks for the benchmarking. It is something we should look into.

For security reasons, your app.zip code is not something I want to install. Do you have something which is only a single node-oracledb JS file?

(A side note: to be pedantic, Thin mode was created to ease installation & deployment issues, not for performance reasons.)
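
Something self-contained along these lines would be ideal (a hypothetical sketch only; credentials via environment variables, and a trivial DUAL query standing in for your real CRUD statements):

```js
// Hypothetical single-file benchmark: node-oracledb only, no Express or JMeter.
const oracledb = require('oracledb');

async function main() {
  oracledb.initOracleClient();                    // Thick mode via Instant Client
  const pool = await oracledb.createPool({
    user: process.env.DB_USER,
    password: process.env.DB_PASSWORD,
    connectString: process.env.DB_CONNECT_STRING,
    poolMin: 10,
    poolMax: 10,
  });

  const iterations = 10000;
  const start = process.hrtime.bigint();
  for (let i = 0; i < iterations; i++) {
    const conn = await pool.getConnection();
    await conn.execute(`SELECT :n FROM dual`, [i]);   // stand-in for the app's real statements
    await conn.close();
  }
  const elapsedSec = Number(process.hrtime.bigint() - start) / 1e9;
  console.log(`${(iterations / elapsedSec).toFixed(1)} req/sec, ` +
              `RSS ${(process.memoryUsage().rss / 1048576).toFixed(1)} MB`);

  await pool.close(0);
}

main();
```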

@MaxDNG

MaxDNG commented Apr 24, 2025

Hi all, I was about to create a similar issue since we've also noticed memory issues using thick mode. In our case, we're seeing the issue in a container which eventually gets OOM killed.

I've created a reproduction repo here: https://github.com/MaxDNG/oracle-memory.

Basically, as noticed by @deostroll, we see a consistent increase in RSS. Running the process with valgrind shows mallocs not being freed. Now, I'm absolutely no C/C++ expert, so maybe I'm simply misled by the logs. If not, there might be a memory leak in the Instant Client binaries.

PS: if you think this is a different issue, let me know and I'll create a new one.

@sudarshan12s

sudarshan12s commented Apr 25, 2025

Hi @MaxDNG,
As per the attached logs, I do not see any memory-leak information. Does this same program cause an OOM when it is run for a long time?
Memory fragmentation with the gcc malloc can take a long time to show up. As a test, can we replace it with jemalloc (e.g. LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1)?

On the logs shared:

valgrind.out -> the LEAK SUMMARY looks fine; it shows around 1 MB still reachable before exiting, and "definitely lost" is minimal.
example-run.log -> indicates that RSS has both grown and decreased.

Yes, it is better to track this in a different issue if you are facing memory leaks rather than the performance issues reported here (where memory also reaches a stable state over time and requests).

@MaxDNG

MaxDNG commented May 6, 2025

Thanks for your answer and sorry for the delay.

So what you're saying is that it's normal for a process to "lose" RSS? Indeed, it decreased but did not go back to the base level: we start with 66 MB and end up with 96 MB. Is that expected? The problem I have is that every time I re-run the program I lose 30 MB. Here it's not so much of a problem, but in my real use case I lose far more on each run, leading the container to be OOM killed. In the example repo, I ran the insert statement twice, waiting 30 seconds between the two. This is what I had:
initial RSS: 66 MB
after 1st round: 118 MB
20 s later: 100 MB (so 34 MB more than initial)
after 2nd round: 114 MB
30 s later: still 114 MB
So overall I've lost 34 + 14 MB. Is that expected?

From the LEAK SUMMARY, what I find puzzling is seeing 1,222 + 26,662 bytes lost (plus more potentially lost). I would expect that to be close to 0, meaning all alloc'd memory is free'd.
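
One way to narrow down where the retained memory lives (a sketch, assuming the process is started with `node --expose-gc`): force a GC after each round and compare `heapUsed` against `rss`. If the heap returns to its baseline while RSS does not, the growth is in native allocations (Instant Client or allocator arenas) rather than in JS objects.

```js
// Run with: node --expose-gc repro.js (script name is a placeholder).
// Forcing a GC before each snapshot separates V8 heap usage from native memory.
function snapshot(label) {
  if (global.gc) global.gc();                     // only defined when --expose-gc is set
  const m = process.memoryUsage();
  console.log(label,
    'rss', (m.rss / 1048576).toFixed(1), 'MB,',
    'heapUsed', (m.heapUsed / 1048576).toFixed(1), 'MB,',
    'external', (m.external / 1048576).toFixed(1), 'MB');
}

// e.g. snapshot('before round 1'); ... run the inserts ... snapshot('after round 1');
```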

@sudarshan12s

Does running the program for an extended period cause memory usage to grow proportionally? If memory continues to increase without decreasing — for instance, after running overnight or when simulating larger bind values (which might expose leak symptoms more rapidly) — it could suggest a potential memory leak. If you can share a test case that reproduces this behavior, it would greatly help in debugging and analysis.
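
Something along these lines would be enough (a rough sketch; the table name, payload size and iteration count are placeholders):

```js
// Rough sketch of a long-running repro: repeated inserts with a largish bind
// value, printing RSS every 100 rounds so growth (or stabilisation) is visible.
const oracledb = require('oracledb');

async function main() {
  oracledb.initOracleClient();                    // Thick mode
  const conn = await oracledb.getConnection({
    user: process.env.DB_USER,
    password: process.env.DB_PASSWORD,
    connectString: process.env.DB_CONNECT_STRING,
  });

  const payload = 'x'.repeat(4000);               // large VARCHAR2 bind to surface leak symptoms faster
  for (let round = 1; round <= 100000; round++) {
    await conn.execute(
      `INSERT INTO mem_test (payload) VALUES (:p)`,   // mem_test is a placeholder table
      { p: payload },
      { autoCommit: true });
    if (round % 100 === 0) {
      console.log(`round ${round}: RSS ${(process.memoryUsage().rss / 1048576).toFixed(1)} MB`);
    }
  }

  await conn.close();
}

main();
```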

In the current case, since the runtime is short (running only twice), the observed memory growth may be due to memory caching, where the system pre-allocates more memory than is immediately required.

As per the LEAK SUMMARY, the driver does statically cache a few internal structures to avoid repeatedly recreating them. However, this static memory usage is limited and should not exceed a few megabytes.

@MaxDNG

MaxDNG commented May 6, 2025

Thanks @sudarshan12s for the details, much clearer for me now.
I'll try to monitor that and see if I can reproduce the issue on long-running tasks.
When you say the "system pre-allocates more memory", is it the Instant Client or Node itself?

@sudarshan12s

Yes, the pre-allocation could be done by the Node.js memory allocators or by the Instant Client, which has its own memory management and caching mechanisms.
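
For example, one Instant Client-side cache that is tunable from node-oracledb is the per-connection statement cache (default 30 statements); shrinking it trades some re-parse cost for lower client-side memory. A sketch, with illustrative pool settings:

```js
const oracledb = require('oracledb');

async function createTunedPool() {
  oracledb.initOracleClient();                    // Thick mode
  return oracledb.createPool({
    user: process.env.DB_USER,
    password: process.env.DB_PASSWORD,
    connectString: process.env.DB_CONNECT_STRING,
    poolMax: 10,
    stmtCacheSize: 5,                             // per-connection statement cache; default is 30
  });
}
```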
