
Memory usage keeps increasing (Heroku) #3576


Closed
UnlikelySassou opened this issue Feb 27, 2017 · 21 comments

Comments

@UnlikelySassou

UnlikelySassou commented Feb 27, 2017

Issue Description

I recently moved my Parse Server instance to Heroku and I'm seeing different memory usage behavior compared with my previous hosting service.

I had a similar setup on NodeChef (512 MB), but since I moved to Heroku my memory usage keeps increasing. Before migrating I was at around 350-400 MB; now it just keeps climbing (900 MB at the moment).

I have no idea what to do to improve memory usage and I'm relatively new to Heroku itself. Is it a bug or a config I'm missing? If you need more logs or anything, could you guide me on how to provide them? Thanks.

Environment Setup

  • Server

  • parse-server version: 2.2.23 [UPDATE based on comments: 2.3.6]
    • Hardware: Standard-1x (512 MB)
    • Localhost or remote server?: Heroku
  • Database

    • MongoDB version: 3.0.14
    • Hardware: Shared Cluster
  • Localhost or remote server?: mLab

Memory Usage (2 hours)

[graph: memory usage climbing steadily over 2 hours]

@UnlikelySassou UnlikelySassou changed the title Memory usage keeps increasing Memory usage keeps increasing (Heroku) Feb 27, 2017
@flovilmart
Contributor

Try updating to 2.3.6; that may help. Do you have a lot of users/roles?

@UnlikelySassou
Author

UnlikelySassou commented Feb 27, 2017

I've deployed to Heroku from parse-server-example using mLab's tutorial.
I manually changed "parse-server": "~2.3.6" in parse-server-example/package.json, so I should be on the latest version now.

I do have a good number of users, but not active all at once. I don't have many roles. What I don't really understand is the difference in memory usage and the fact that it keeps increasing no matter what.

I've contacted Heroku support about this memory issue and here is their answer:

Hi there,

Thanks for reaching out. R14 errors can be a very serious problem for any application. Once your app starts using swap space, it goes extremely slow due to how virtual memory is managed. This typically leads to H12 errors (request timeout) as your app cannot service the request within 30 seconds.

While R14s can typically be temporarily resolved by restarting your app, these are typically caused by memory leaks in your application, causing your memory to slowly grow. The more traffic your applications handles, the more quickly you will run out of memory. I would highly recommend adding some memory profiling to your application as well as reading through some of our node optimization docs.

Do you think this is due to a memory leak or that these node optimizations are needed for the parse server instance on Heroku? I'm still new to Heroku and I don't know if there is a specific way to configure dynos to make them more memory efficient or something similar (workers, cache, cleanup methods).

Here's a graph of memory usage since deploying on the dyno:
[graph: sawtooth memory usage since deploy, restarting near 1 GB]
The memory usage keeps increasing and the dyno is forced to restart when it reaches 1GB. It usually takes between 4 and 6 hours to reach that limit.
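Heroku support's suggestion to add memory profiling can start as simply as sampling `process.memoryUsage()` from inside the server so growth trends show up in the platform logs. A minimal sketch (the 60-second interval and the fields logged are arbitrary choices, not anything from this thread):

```javascript
// Minimal heap sampler for a long-running Node process.
// Logs RSS and heap usage at a fixed interval so a steady climb
// is visible in `heroku logs --tail` or similar.
const toMB = (bytes) => Math.round(bytes / 1024 / 1024);

function sampleMemory() {
  const { rss, heapUsed, heapTotal } = process.memoryUsage();
  return {
    rssMB: toMB(rss),           // total resident set size
    heapUsedMB: toMB(heapUsed), // live JS objects
    heapTotalMB: toMB(heapTotal),
  };
}

// Sample every 60 seconds; unref() keeps this timer from
// holding the process open on its own.
const timer = setInterval(() => console.log('mem', sampleMemory()), 60 * 1000);
timer.unref();
```

If RSS climbs while heapUsed stays flat, the growth is outside the JS heap; if both climb together, a heap snapshot comparison is the next step.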

Can anyone who's using Heroku or may have an idea of what's happening help me on this?

@steven-supersolid
Contributor

We use Heroku and don't see this, although we are using dedicated instances with more memory than required. Heroku dynos also cycle roughly every 24 hours, so a slow enough memory leak is not a problem.

Just an idea but this could be due to different node versions - what version of node.js were you running on NodeChef and what on Heroku?

@UnlikelySassou
Author

@steven-supersolid Thanks! Heroku was downloading an unstable version of Node.js by default; it's much better now.
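For anyone hitting the same default-version problem: Heroku selects the Node version from the `engines` field in package.json, so pinning a known-good release there avoids picking up an unstable default. A config sketch (the version shown is just one of the versions reported working in this thread):

```json
{
  "engines": {
    "node": "6.10.x"
  }
}
```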

@dpoetzsch

Hi, what node version did this occur on and what version did fix it for you? We experience similar problems since our last update.

@UnlikelySassou
Author

This issue occurred on Node 7.6.0; Node 4.4.7 solved it.

@dpoetzsch

Thanks! For me the issue occurred with node 7.7.1, node 6.10 fixed it for me.

@marnor1

marnor1 commented Mar 23, 2017

We saw the same issue and Node 6.10.0 solved it; however, 6.10.1 causes the same issue again.

@dpoetzsch

dpoetzsch commented Mar 23, 2017

@marnor1 Same here (we use scalingo which is very similar to heroku though). Maybe the issue should be reopened?

@flovilmart
Contributor

Not sure if it should be reopened, as it seems to be Node.js version dependent.

@jfalcone

jfalcone commented Jul 13, 2017

WRT #3977, we're running 2.4.2 on Node 4.3 on NodeChef and are waiting for the promised ability to change Node versions so we can upgrade to 4.4.7 or 6.10. It's very interesting that this problem could be linked to specific Node versions, not just older ones.

Also, for those of you experiencing "increases in memory usage": does your code do multiple queries, i.e. use the results of one query to do another query and so on?

When our code does just one query and returns, memory usage is flat. But when we turn on an advanced capability that has multiple queries, that is when we see the increase in memory usage. We have the same sawtooth memory chart - except our memory usage grows too high in 2-3 hours and we have to use container restart rules based on memory usage.
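For context, the "cascaded query" shape under discussion looks roughly like this; `findUser`, `findPostsFor`, and `countComments` are hypothetical stand-ins for real Parse queries, not code from the thread. Each `.then` closure keeps its intermediate result alive until the whole chain settles, which is why many concurrent cascades hold more memory than single queries do:

```javascript
// Mock async lookups standing in for Parse queries. With real
// queries each step would hit the database and return objects
// far larger than these stubs.
const findUser = (name) => Promise.resolve({ id: 1, name });
const findPostsFor = (user) =>
  Promise.resolve([`${user.name}-post-1`, `${user.name}-post-2`]);
const countComments = (posts) => Promise.resolve(posts.length * 3);

// The cascade: each query's result feeds the next. Every pending
// step holds its closure (and the intermediate results) until the
// final promise resolves.
function cascade(name) {
  return findUser(name)
    .then((user) => findPostsFor(user))
    .then((posts) => countComments(posts));
}
```

The effect scales with traffic: the more cascades in flight, the more intermediate objects are pinned at once, so a GC that runs infrequently falls further behind.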

@johanarnor

@jfalcone Have you tried to use promises instead of callbacks for transferring results from one query to parameters for another?

@dpoetzsch

dpoetzsch commented Jul 13, 2017

Update: we've been running Node 6.11.1 for a day now and memory seems to be stable (no increase).

@jfalcone Our cloud code is actually quite extensive, so yes we do multiple queries in a cloud function.
@johanarnor We exclusively use promises in our code.

@jfalcone

We have rewritten the code in several different styles, including promises, and it makes no difference. We also null out all vars after use. It has nothing to do with programming style. It has everything to do with the cascaded queries: however you write them, you are introducing multiple asynchronous events, and that's where the trouble starts.

We also took heap snapshots and did comparisons. Our analysis found that the increase was (if we can believe the heap snapshots) in unreferenced memory, i.e. memory that should have been garbage collected, which makes sense.

After we heard that there are issues with garbage collection in V8 and Node, we monitored garbage collection and have observed infrequent erratic garbage collection cycles where 30-50MB gets collected at one time - which is a lot given that our containers start at about 120MB memory usage. But those aren't happening often enough so eventually the memory usage outruns the garbage collection. Later today I'll post a chart of what memory usage on our system looks like. We aren't allocating big structures so what apparently is happening is that a lot of little unreferenced items are accumulating and the garbage collector "falls behind" and never gets to them.

In fact our entire Parse Mongo database is also very small (less than a megabyte of active data), so it's ridiculous that we can eat up 512 MB of memory in a few hours. Yes, moving to Redis would help, but that's not an option for us.

Our best guess is that there are some bad combinations of ParseServer, Node, and V8 that just don't mix if you are doing non-trivial operations (e.g. cascaded queries). Since we're stuck on Node 4.3, we can't test that theory. In the meantime, I have a system that leaks 300KB+ per second and has to restart containers every 2-3 hours.

@flovilmart
Contributor

We run bigger queries with larger load and don't see the memory usage ramp up as much. As mentioned in the other thread, I believe the Node version is partly responsible here. It's also possible that we have memory leaks and retain cycles in the server, and I'll gladly fix them if we have some way to track them down.

@thisfiore

@flovilmart I'm pretty curious: do you manage memory in some way?
I have experience with memory management in Swift, where there are specific things to do. Do you have any best practices?
In my experience, Parse Server's memory usage also keeps increasing and never gets freed.

@jfalcone

The most important thing is to find a combination of Parse Server version and Node.js version that seems to click. We were having problems until we upgraded to Node.js 6.11, at which point memory management and garbage collection started behaving themselves and our long-term memory use became flat regardless of load.

A different, longer-term issue (we're talking days to weeks) did show up where (we think) memory fragmentation causes an increase in CPU usage, but that's addressed by regularly restarting your server every night, as folks on Heroku do. We set a limit on CPU usage and restart when we hit that bar, which is about every 2 weeks. We never hit any limits on memory usage and could probably trim back the memory allocation on our server instances.
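The restart-when-over-a-bar approach can also be implemented from inside the process when the platform restarts crashed dynos, as Heroku and most container orchestrators do. A sketch, assuming an external supervisor; the 450 MB limit is an arbitrary example, not a value from this thread:

```javascript
// Exit (non-zero) when resident memory crosses a threshold so the
// supervisor restarts the process before swap/R14 errors begin.
const LIMIT_BYTES = 450 * 1024 * 1024; // example limit for a 512 MB dyno

function overLimit(rssBytes = process.memoryUsage().rss) {
  return rssBytes > LIMIT_BYTES;
}

// Check once a minute; unref() keeps the timer from holding the
// process open on its own.
setInterval(() => {
  if (overLimit()) {
    console.error('rss over limit, exiting so the supervisor restarts us');
    process.exit(1);
  }
}, 60 * 1000).unref();
```

Exiting deliberately trades a brief restart for predictable behavior, rather than waiting for the platform to kill the dyno mid-request once it swaps.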

@txprt

txprt commented Feb 12, 2020

We're having the same problem; does anyone know of a stable combination of Node.js and Parse that doesn't have this memory leak issue?

@johanarnor

@ptan33 What version of node and Parse are you running? We had this issue a couple of years back, but that was only on node 6.x. Node 8, 10 or 12 hasn't caused any issues for us.

@txprt

txprt commented Feb 12, 2020

We've got the latest versions of Parse Server (3.10.0) and Node (12.16.0).

After upgrading, we sadly still have the memory leak issue. We're going to use Redis to cache our queries to work around it.
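For reference, parse-server ships a Redis cache adapter that can be wired in through the `cacheAdapter` option. A config sketch based on the parse-server 3.x docs; the environment variable names are placeholders for your own settings:

```javascript
// Wire Parse Server's Redis-backed cache adapter. All values here
// are placeholders; REDIS_URL etc. come from your own environment.
const { ParseServer, RedisCacheAdapter } = require('parse-server');

const api = new ParseServer({
  databaseURI: process.env.DATABASE_URI,
  appId: process.env.APP_ID,
  masterKey: process.env.MASTER_KEY,
  serverURL: process.env.SERVER_URL,
  cacheAdapter: new RedisCacheAdapter({ url: process.env.REDIS_URL }),
});
```

Moving the cache out of the Node process keeps cached roles/sessions from contributing to in-process heap growth.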

@jfalcone

In our case, upgrading to a newer version of Node was the solution. We're currently running Parse 2.4.2 on Node 6.11.1 and we're good for a month at a time between restarts, which is unusual since most hosting services out there restart every 24 hours. We're way behind on versions, but if it ain't broke...
