Poor performance or normal behaviour? #2539
Comments
It is always better to divide Parse Server up into multiple instances behind a load balancer than to use a single scaled-up instance. Also, have you optimized your main.js file to make use of "workers", or do you deploy with pm2?
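For reference, here is a minimal sketch of what a clustered entry point could look like. This is only an illustration (not the actual main.js from this thread), and the ParseServer options shown are placeholders:

```js
// index.js — hypothetical clustered launcher; assumes the standard
// Express + ParseServer setup from the parse-server README.
var cluster = require('cluster');
var os = require('os');

if (cluster.isMaster) {
  // Fork one worker per CPU core so concurrent requests are spread
  // across processes instead of queuing on a single event loop.
  os.cpus().forEach(function () { cluster.fork(); });
  cluster.on('exit', function () { cluster.fork(); }); // replace dead workers
} else {
  var express = require('express');
  var ParseServer = require('parse-server').ParseServer;

  var api = new ParseServer({
    databaseURI: process.env.DATABASE_URI, // placeholder values
    appId: process.env.APP_ID,
    masterKey: process.env.MASTER_KEY,
    serverURL: 'http://localhost:1337/parse'
  });

  var app = express();
  app.use('/parse', api);
  app.listen(1337); // workers share the port through the cluster module
}
```

pm2 achieves the same thing without the boilerplate, e.g. `pm2 start index.js -i max`.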
Actually, since an AWS EC2 c4.large has two cores, we run two parse-server processes: one listens on port 1338 and the other on port 1339. The c4.large runs an haproxy that listens on port 1337 and distributes requests between these local ports. We use AWS EC2 auto scaling to scale our platform, but we are running the test against the simplest case. If we run the test against the haproxy, everything works as expected: we can handle 2 concurrent requests without penalty (one request per core). So our question is about concurrent requests against a single process.
That is very odd indeed. Do you have any timing traces for the MongoDB queries, or the number of queries sent to Mongo during those tests? We're running a similar setup and don't see those latencies.
Apart from my process, nothing else is accessing MongoDB. I see 4 connections. Launching 100 requests, 16 concurrent...
I've tested my MongoDB directly and it is really fast. The bottleneck seems to be in parse-server: it can't handle concurrent requests.
What you describe is very odd... we currently process about 300 rps (mostly reads and _Installation writes) and the CPU on each machine is at ~30%. And this is as vanilla as possible, since we don't leverage the cores through pm2/cluster... what's the result on a bare development machine with a local MongoDB?
Same. I have installed a new machine (c4.large). Mongo is empty, with only one user. My main.js (where the cloud code will go) is empty too. More data:
Node version
The way I run parse-server
My config.json
Remember that I'm testing only logins (https://XXXXXXXXXXXX.XXXXXXXXXXXXcom:1337/parse/login). Can you handle 300 logins per second? I can't understand what is happening. parse.com needs the same time for a login (220 ms) too, but their service handles concurrent requests without slowing down (responses always take 200-300 ms). With these numbers, they would need almost 300 / 4 cores to reach 300 logins per second, right? (Thanks for your interest. I'm pretty lost.)
I just spent some time looking into this issue. There is most definitely a performance problem with Parse Server signup/login. Some background:
The problem:
My recommendation: it looks like we can update the package.json dependencies and ...
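For context, here is a rough sketch of the kind of change being proposed: swapping a synchronous, pure-JS hash for the native bcrypt module's asynchronous API, so each login no longer blocks the event loop for the full hashing time. The helper names are illustrative, not the actual parse-server code:

```js
// Hypothetical password helper using the native bcrypt module's async API.
// The hashing work runs on libuv's thread pool, so the event loop stays free.
var bcrypt = require('bcrypt'); // native bindings, compiled on install

function hashPassword(password) {
  return new Promise(function (resolve, reject) {
    bcrypt.hash(password, 10, function (err, hash) { // 10 salt rounds
      err ? reject(err) : resolve(hash);
    });
  });
}

function comparePassword(password, hash) {
  return new Promise(function (resolve, reject) {
    bcrypt.compare(password, hash, function (err, matches) {
      err ? reject(err) : resolve(matches);
    });
  });
}
```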
The native bindings may be a good idea, but could be problematic for users with limited compilation tools on the remote machine.
@flovilmart Good point... Windows users will most certainly run into issues as well. Just came across bcrypt.js, which appears to do a nice job of progressively handling hashing in a non-blocking way.
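As a quick illustration of that non-blocking API (the values here are placeholders, not from the thread): bcryptjs performs the hash asynchronously, yielding back to the event loop while it works.

```js
var bcryptjs = require('bcryptjs');

// Async hash: the pure-JS implementation splits the work across
// event-loop ticks, so other requests can still be served meanwhile.
bcryptjs.hash('user-password', 10, function (err, hash) {
  if (err) throw err;
  bcryptjs.compare('user-password', hash, function (err, ok) {
    if (err) throw err;
    console.log('password matches:', ok); // true
  });
});
```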
@JeremyPlease also on Heroku, App Engine, etc... all thin containers may have problems with those native modules. We could definitely try bcrypt.js and run ab against it quickly. Wanna run the benchmarks?
Running some benchmarks now. Stay tuned... 📺
👍
Well... Here are some benchmarks. Ran with:
Not sure where to go from here. We might just want to leave things as-is and note in the wiki/readme that Parse Server has a limitation around simultaneous login requests.
Maybe mark it as an optional dependency: if compilation fails, we fall back to the standard slow bcrypt-nodejs (the current implementation). Would that be reasonable?
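Something along these lines, as a rough sketch of that fallback idea (not the actual PR; it only shows the module selection, and the two libraries' call signatures would still need a thin wrapper):

```js
// Prefer the native bcrypt module when it compiled successfully,
// otherwise fall back to the pure-JS implementation currently in use.
var hashImpl;
try {
  hashImpl = require('bcrypt'); // listed under optionalDependencies
} catch (e) {
  // Compilation failed (Windows, thin containers, missing build tools...):
  // keep the slower but dependency-free bcrypt-nodejs.
  hashImpl = require('bcrypt-nodejs');
}
module.exports = hashImpl;
```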
@JeremyPlease did you use saltRounds = 10 in the hash function from bcrypt? What value did you use?
@flovilmart Yep, 10 saltRounds for ...
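To make the cost factor concrete, here is a tiny hypothetical timing snippet (not from the thread); each additional salt round roughly doubles the hashing time, which is why the chosen value matters when comparing benchmarks:

```js
var bcryptjs = require('bcryptjs');

// Rough illustration: hashing time grows ~2x per additional salt round.
[8, 10, 12].forEach(function (rounds) {
  var start = Date.now();
  bcryptjs.hashSync('benchmark-password', rounds); // sync on purpose, to time it
  console.log(rounds + ' rounds: ' + (Date.now() - start) + ' ms');
});
```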
I've tried bcrypt and it works really well. To me, this issue can be closed. Thanks for everything.
We'll try to do a release soon, once the PR has landed.
It definitely does not work as a straight replacement.
@joeyslack the PR is open, and it also accounts for setups where build tools are not available on the remote machine.
@flovilmart FYI, I'm unable to log in to my app (Facebook connect, and also the manual login method) with this new patch on master. Would love to see if it works for anyone else...
What is the issue that you see? FBConnect should be unaffected by that change.
I just tried that:
And that passes. Unfortunately, we can't test it on Travis, as the bcrypt native module won't build there.
I don't have a lot of time to debug this now, but if it's working for everyone else, it's probably my VM. Will post back with more details if I find anything.
@joeyslack we're planning to release 2.2.18 tonight. We don't want anyone to be locked out...
@flovilmart Turned out to be an issue with FileLoggerAdapter. I am using it in my app like:
Had to install this dependency separately. The bcrypt changes seem to work fine. 2.2.18!
Issue Description
We are in the middle of migrating from parse.com to our own parse server on AWS EC2.
We are almost ready to go, so we are testing the new platform.
We see weird behaviour: it seems that parse-server doesn't handle concurrent requests well.
We have tested the whole app with JMeter and ab, with the same result.
Parse-server tends to eat all free CPU, even with only a few requests.
We are using this simple script.
Where parse.json content is ...
We have run the tests with an empty database too. Our MongoDB platform is really close to our parse-servers. There aren't any slow queries in our MongoDB (80 ms is slow to us).
We have an ELB with only one instance behind it.
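The exact ab/JMeter scripts aren't reproduced here; the following is only a hypothetical Node sketch of the same pattern, firing a handful of concurrent logins through the Parse JS SDK and printing the response times. The server URL, app ID, and credentials are placeholders:

```js
// concurrent-login-test.js — hypothetical reproduction of the test pattern.
var Parse = require('parse/node');

Parse.initialize('APP_ID');                      // placeholder
Parse.serverURL = 'http://localhost:1337/parse'; // placeholder

var CONCURRENCY = 4;

function timedLogin(i) {
  var start = Date.now();
  return Parse.User.logIn('testuser', 'testpass') // placeholder credentials
    .then(function () {
      console.log('request ' + i + ': ' + (Date.now() - start) + ' ms');
    });
}

// Fire CONCURRENCY logins at once; with a blocking hash, each one queues
// behind the previous, so latencies should grow roughly linearly.
var requests = [];
for (var i = 0; i < CONCURRENCY; i++) {
  requests.push(timedLogin(i));
}
Promise.all(requests).then(function () {
  console.log('done');
});
```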
Results
10 requests. 1 concurrent request
10 requests. 2 concurrent requests
10 requests. 4 concurrent requests
Even with only two concurrent requests, response times double.
2 requests. 2 concurrent requests
Environment Setup
Logs/Trace