Possible memory leak in Celery integration #2034
Hey 👋 Thanks for investigating this after I reported some weird memory issues a while back. This definitely looks promising :) I'll try to have another dig this week and see if I can get a clearer reproduction case on our setup. Out of curiosity, was this configured for just error reporting, just transaction tracing, or both?
I had both error reporting and tracing (and around 40% of my requests errored). If you evaluate this again, please make sure to use the latest Sentry SDK version, because we released some fixes in the meantime. All in all, we did find a memory leak related to profiling, and also decreased memory usage in combination with Redis.
I'm having a similar issue, except it's with the arq integration instead of Celery (added in #1872). It also seems to prevent some garbage collection for me. Do you think I should create a new issue? Could it be because of the same root cause? This is happening with the latest Sentry version, 1.24.0.
Yes, please create a new issue for that, @jparkrr. And it would be great if you could include a sample app to reproduce the possible memory leak.
Does this issue still exist, or has it already been addressed in another PR? Asking because this issue was created one day after this comment: #1980 (comment)
I just re-ran my tests with the newest SDK (1.45.0) and this is what I got: [chart] I do not know what is happening at that big bump. (I also had to run my test three times without Sentry to get a plot, because the other times it crashed the VS Code instance I used to call the scripts from, so maybe the test is not the best.) Anyhow: the memory usage when using Sentry is stable and not increasing, so no memory leak. Without Sentry some memory is freed after some time, and with Sentry the memory consumption is stable over time. This is not necessarily a bad thing.
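For anyone repeating this kind of measurement, here is a minimal stdlib-only way to sample memory over a run. This is a sketch, not the harness used above; the `workload` function is a stand-in for whatever actually exercises the SDK:

```python
import tracemalloc


def run_with_memory_samples(workload, steps=50):
    """Run `workload` repeatedly, recording traced allocation totals per step.

    A plot of the returned samples should stay flat when memory is stable
    and climb steadily when something is leaking.
    """
    tracemalloc.start()
    samples = []
    for _ in range(steps):
        workload()
        current, _peak = tracemalloc.get_traced_memory()
        samples.append(current)
    tracemalloc.stop()
    return samples


# A leak-free example workload: allocates and releases a 10,000-char string.
def workload():
    _msg = "x" * 10_000


samples = run_with_memory_samples(workload)
```

Note that `tracemalloc` only sees Python-level allocations; for total process memory (what a pod's memory limit cares about), sampling RSS via `psutil` or `/proc/self/status` is closer to what the charts above show.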
Closing because it seems like this is no longer an issue |
@szokeasaurusrex actually I experienced the same behavior with the Celery integration on SDK 1.45.0, so it may not be fixed
Hi @alchimere, can you please elaborate on what behavior you experienced? Do you have a chart showing the memory usage over time?
@szokeasaurusrex here are the logs of our Celery workers: [chart] With the same sentry-sdk version, tracing and profiling activated: [chart] (the line at 1.5 GB corresponds to our pods' memory limit)
How do you use Sentry?
Sentry Saas (sentry.io)
Version
1.20.0
Steps to Reproduce
1. Have a Flask project with multiple views, where one view calls a task (e.g. one called `sometask`) multiple times using the delay method: `sometask.delay(some_msg)`. (In my test case `some_msg` was a string with 10,000 random characters.)
2. Now load test this project and generate lots of hits to the view (and thus lots of calls to the task).
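The load-test step can be sketched with the standard library alone. The `/enqueue` endpoint, port, and request counts below are placeholders, not taken from the linked test project:

```python
import random
import string
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen


def random_msg(n=10_000):
    """Build a payload like the one described: 10,000 random characters."""
    return "".join(random.choices(string.ascii_letters + string.digits, k=n))


def load_test(url="http://localhost:5000/enqueue", n_requests=1_000, workers=20):
    """Fire many concurrent requests at the view so it calls
    sometask.delay(...) many times; the URL is a placeholder, adjust it
    to the app under test.
    """
    def hit(_):
        with urlopen(url) as resp:
            return resp.status

    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(hit, range(n_requests)))
```

Run `load_test()` while the Flask app and a Celery worker are up, and watch the worker's memory (e.g. via `docker stats` or your pod metrics) for steady growth.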
The memory usage will increase over time:

This is the test project I created:
https://github.com/antonpirker/test-celery-memory-leak
Expected Result
Memory usage stays roughly the same.
Actual Result
Memory usage increasing.