Memory usage keeps increasing (Heroku) #3576
Comments
Try updating to 2.3.6; that may help. Do you have a lot of users/roles?
I've deployed to Heroku from parse-server-example using mLab's tutorial. I do have a good number of users, but they are not all active at once, and I don't have many roles. What I don't really understand is the difference in memory usage and the fact that it keeps increasing no matter what. I've contacted Heroku support about this memory issue and here is their answer:
Do you think this is due to a memory leak, or are these node optimizations needed for the Parse Server instance on Heroku? I'm still new to Heroku and I don't know if there is a specific way to configure dynos to make them more memory efficient or something similar (workers, cache, cleanup methods). Here's a graph of memory usage since deploying on the dyno: Can anyone who's using Heroku, or who has an idea of what's happening, help me with this?
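For anyone chasing a similar climb, a minimal diagnostic sketch (not something from the thread, and the interval and log format are arbitrary choices) is to log the process's own memory numbers so you can see whether the V8 heap or the overall RSS is what grows:

```javascript
// Minimal memory logger for a Node/Parse Server process: prints RSS and V8
// heap usage once a minute so growth patterns show up in the Heroku logs.
// (The 60s interval and MB rounding are arbitrary; if heap growth is the
// culprit, the V8 old space can also be capped with --max_old_space_size.)
setInterval(function () {
  var m = process.memoryUsage();
  var toMB = function (bytes) { return Math.round(bytes / 1048576); };
  console.log(
    'memory rss=' + toMB(m.rss) + 'MB' +
    ' heapUsed=' + toMB(m.heapUsed) + 'MB' +
    ' heapTotal=' + toMB(m.heapTotal) + 'MB'
  );
}, 60000);
```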
We use Heroku and don't see this, although we are using dedicated instances with more memory than required. Heroku dynos cycle roughly every 24 hours too, so a slow enough memory leak is not a problem. Just an idea, but this could be due to different Node versions: what version of Node.js were you running on NodeChef and what on Heroku?
@steven-supersolid Thanks! It was downloading an unstable version of Node.js by default; it's much better now.
Hi, what Node version did this occur on, and what version fixed it for you? We have been experiencing similar problems since our last update.
This issue occurred on Node 7.6.0; Node 4.4.7 solved it.
Thanks! For me the issue occurred with Node 7.7.1, and Node 6.10 fixed it.
We saw the same issue; Node 6.10.0 solves it, but 6.10.1 causes the same issue again.
@marnor1 Same here (we use Scalingo, which is very similar to Heroku). Maybe the issue should be reopened?
Not sure if it should be reopened, as it seems to be Node.js version dependent.
WRT #3977, we're running 2.4.2 on Node 4.3 on NodeChef and are waiting for the promised ability to change Node versions so we can upgrade to 4.4.7 or 6.10. It is very interesting that this problem could be linked to specific Node versions, not just older ones.

Also, for those of you experiencing increases in memory usage: does your code do multiple queries, i.e. use the results of one query to do another query, and so on? When our code does just one query and returns, memory usage is flat. But when we turn on an advanced capability that runs multiple queries, that is when we see the increase in memory usage. We have the same sawtooth memory chart, except our memory usage grows too high in 2-3 hours and we have to use container restart rules based on memory usage.
@jfalcone Have you tried using promises instead of callbacks to pass the results of one query into the parameters of the next?
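For illustration, a minimal sketch of what chaining cascaded queries with promises looks like in pre-3.0 cloud code (the function name and the `Post`/`Comment` classes are made up; the `res.success`/`res.error` response style matches Parse Server 2.x):

```javascript
// Hypothetical Parse Server 2.x cloud function: the result of the first query
// feeds the second query through a promise chain instead of nested callbacks.
Parse.Cloud.define('latestCommentOnNewestPost', function (req, res) {
  var postQuery = new Parse.Query('Post');
  postQuery.descending('createdAt');

  postQuery.first()
    .then(function (post) {
      if (!post) {
        return null; // nothing to look up
      }
      var commentQuery = new Parse.Query('Comment');
      commentQuery.equalTo('post', post);
      commentQuery.descending('createdAt');
      return commentQuery.first();
    })
    .then(function (comment) {
      res.success(comment); // Parse Server 2.x response object
    }, function (error) {
      res.error(error);
    });
});
```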
Update: we have been running Node 6.11.1 for a day now and memory seems to be stable (no increase). @jfalcone Our cloud code is actually quite extensive, so yes, we do multiple queries in a cloud function.
We have rewritten the code in several different styles, including promises, and it makes no difference. We also null out all vars after use. It has nothing to do with programming style; it has everything to do with the cascaded queries, because however you write them, you are introducing multiple asynchronous events, and that's where the trouble starts.

We also took heap snapshots and did comparisons. The result of our analysis was that the increase in memory was (if we can believe the heap snapshots) in unreferenced memory, i.e. memory that should have been garbage collected, which makes sense. After we heard that there are issues with garbage collection in V8 and Node, we monitored garbage collection and observed infrequent, erratic garbage collection cycles where 30-50MB gets collected at one time, which is a lot given that our containers start at about 120MB of memory usage. But those aren't happening often enough, so eventually the memory usage outruns the garbage collection. Later today I'll post a chart of what memory usage on our system looks like.

We aren't allocating big structures, so what apparently is happening is that a lot of little unreferenced items accumulate and the garbage collector "falls behind" and never gets to them. In fact, our entire Parse Mongo database is also very small (less than a megabyte of active data), so it is ridiculous that we can eat up 512MB of memory in a few hours. Yes, moving to Redis would help, but that's not an option for us. Our best guess is that there are some bad combinations of Parse Server, Node, and V8 that just don't mix if you are doing non-trivial operations (e.g. cascaded queries). Since we're stuck on Node 4.3, we can't test that theory. In the meantime, I have a system that leaks 300KB+ per second and has to restart containers every 2-3 hours.
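As a concrete version of the heap-snapshot workflow described above (a sketch, not the poster's actual tooling; the threshold, path, and interval are arbitrary), one common setup uses the third-party `heapdump` package plus V8's `--trace-gc` flag:

```javascript
// The third-party heapdump module (npm install heapdump) writes V8 heap
// snapshots that can be diffed in Chrome DevTools' Memory tab to see what is
// accumulating. Simply requiring it lets you trigger a snapshot on demand
// with `kill -USR2 <pid>` on Linux/macOS; writeSnapshot() can also be called
// explicitly, e.g. once heap usage crosses a threshold. Starting node with
// `--trace-gc` additionally logs every collection and how much it reclaims.
var heapdump = require('heapdump');

var SNAPSHOT_THRESHOLD_MB = 300; // arbitrary value for this sketch
var snapshotWritten = false;

setInterval(function () {
  var heapUsedMB = process.memoryUsage().heapUsed / 1048576;
  if (!snapshotWritten && heapUsedMB > SNAPSHOT_THRESHOLD_MB) {
    snapshotWritten = true;
    heapdump.writeSnapshot('/tmp/parse-' + Date.now() + '.heapsnapshot', function (err, filename) {
      console.log('heap snapshot:', err ? err : filename);
    });
  }
}, 30000);
```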
We run bigger queries with larger load and don't see the memory usage ramp up as much. As mentioned in the other thread, I believe the Node version is partly responsible here. It's also possible that we have memory leaks and retain cycles in the server, and I'll gladly fix them if we have some way to track them down.
@flovilmart I'm pretty curious. Do you manage memory in some way?
The most important thing is to find a combination of Parse Server version and Node.js version that seems to click. We were having problems until we upgraded to Node.js 6.11, at which point memory management and garbage collection started behaving themselves and our long-term memory use became flat regardless of load. A different, longer-term issue (we're talking days to weeks) did show up where (we think) memory fragmentation causes an increase in CPU usage, but that's addressed by regularly restarting your server, as folks on Heroku do every night. We set a limit on CPU usage and restart when we hit that bar, which is about every two weeks. We never hit any limits on memory usage, and we could probably trim back the memory allocation on our server instances.
We're having the same problem; does anyone know of a stable combination of Node.js and Parse Server that doesn't have this memory leak issue?
@ptan33 What version of Node and Parse are you running? We had this issue a couple of years back, but that was only on Node 6.x. Node 8, 10, or 12 hasn't caused any issues for us.
We've got the latest versions of Parse Server (3.10.0) and Node (12.16.0); sadly, after upgrading we still have the memory leak issue. We're going to use Redis to cache our queries to work around it.
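For reference, one built-in option along these lines is Parse Server's bundled Redis cache adapter, which moves the server's internal caches (schemas, roles, sessions) out of the Node process rather than caching arbitrary application queries. A rough sketch, with placeholder environment variable names and the classic parse-server-example Express mounting:

```javascript
// Sketch of enabling Parse Server's bundled RedisCacheAdapter.
// DATABASE_URI, APP_ID, MASTER_KEY, SERVER_URL, and REDIS_URL are placeholders.
var express = require('express');
var ParseServer = require('parse-server').ParseServer;
var RedisCacheAdapter = require('parse-server').RedisCacheAdapter;

var api = new ParseServer({
  databaseURI: process.env.DATABASE_URI,
  appId: process.env.APP_ID,
  masterKey: process.env.MASTER_KEY,
  serverURL: process.env.SERVER_URL,
  // Keep Parse Server's internal caches in Redis instead of in-process memory.
  cacheAdapter: new RedisCacheAdapter({ url: process.env.REDIS_URL })
});

var app = express();
app.use('/parse', api);
app.listen(process.env.PORT || 1337);
```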
In our instance, upgrading to a newer version of Node was the solution. We're currently running Parse 2.4.2 on Node 6.11.1 and we're good for a month at a time between restarts, which is unusual since most hosting services out there restart every 24 hours. We're way behind on versions, but if it ain't broke...
Issue Description
I've recently moved my Parse Server instance to Heroku and I'm seeing differences in memory usage and behavior compared with my previous hosting service.
I had a similar setup on NodeChef (512MB), but since I moved to Heroku my memory usage keeps increasing. Before migrating I was at around 350-400MB; now it just keeps increasing (900MB at the moment).
I have no idea what to do to improve memory usage, and I'm relatively new to Heroku itself. Is it a bug or a config I'm missing? If you need more logs or anything, could you guide me on how to provide them? Thanks.
Environment Setup
Server
Database
Memory Usage (2 hours)