Performance: ASP.NET Core Tuning #160
That's a fairly low baseline. What environment was the test performed in?
@DamianEdwards the runner is on our VM tier in CO (currently getting a revamp to Hyper-V) - so we're in flux atm, but results are consistent. The Kestrel server shouldn't be a weak machine; that tier currently has dual E5-2690v3 processors @ 3.07GHz and 64GB of RAM, with 20Gb of bandwidth and ~0.17ms of latency between them. The bottleneck appears to be a pegged CPU on the 2012 R2 physical machine Kestrel's on. I need to see what's capping throughput out so hard there, but the numbers should still be roughly valid for an apples-to-apples impact comparison until I can spare time to dig into it. Any suggestions/common things you guys are seeing that may save some digging time?
Ethernet specs?
@DamianEdwards the VM is on an aggregate trunk: F630s with internal 2x 10Gb (and idle), into dual IOAs each with 4x 10Gb uplinks, so each FX2s chassis has 80Gbps aggregate, of which the VM can access 20Gb. The web server has an X540 NDC, so 2x 10Gb to the network. All systems are active/active LACP. I can go into switch details if you want, but it seems we're limited by CPU here (99-100% pegged for the duration). I've just maxed out the Tx/Rx queue sizes, which somehow regressed in a recent change (I'll have our guys track down the cause), and re-ran, with no impact.
We've seen many cases where seemingly CPU-saturated loads are actually bottlenecked somewhere else and are able to yield many times the RPS. With machines of that spec you should be seeing vastly higher baseline numbers for the plaintext test.
Agreed - and in earlier previews I was getting far higher numbers; this was on the exact same hardware back in November: aspnet/KestrelHttpServer#386 (comment)
There's a severe network issue we're tracking down (and by we, I mean @GABeech by himself while I'm out this week)...we'll re-test when we isolate what's happening with the central network and see how much of a factor it is. |
Lots of perf tuning went into the last pass; this is as good as it gets for now. I'm going to revisit after the view wrapping is removed (if we can do that, with diagnostics events having concrete types... we hope). Each MiniProfiler (if created) is about 400 bytes with just a root timing and all associated information - not too bad.
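As a rough way to sanity-check a per-profiler allocation figure like that, something along these lines could be used - a sketch only, not how the number above was measured, and it assumes MiniProfiler's StartNew/Stop APIs and GC.GetAllocatedBytesForCurrentThread are available in the target runtime:

```csharp
using System;
using StackExchange.Profiling;

class AllocationCheck
{
    static void Main()
    {
        const int iterations = 100_000;

        // Warm-up so one-time allocations (statics, JIT) don't skew the average.
        for (var i = 0; i < 1_000; i++)
        {
            MiniProfiler.StartNew("warmup")?.Stop();
        }

        long before = GC.GetAllocatedBytesForCurrentThread();
        for (var i = 0; i < iterations; i++)
        {
            // Start and immediately stop a profiler with only a root timing,
            // mirroring the "root timing only" case described above.
            var profiler = MiniProfiler.StartNew("request");
            profiler?.Stop();
        }
        long after = GC.GetAllocatedBytesForCurrentThread();

        Console.WriteLine($"~{(after - before) / (double)iterations:N0} bytes allocated per profiler");
    }
}
```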
One of the goals of MiniProfiler has always been to add as little overhead as possible while getting our timings. Luckily, the ASP.NET team has a benchmarks repo (aspnet/benchmarks). I've created a minimal fork that adds MiniProfiler (I need to discuss with that team whether this is something that's even welcome, and if so how we'd want to set it up) - that fork is here: NickCraver/benchmarks.
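For context, enabling the middleware in an ASP.NET Core app looks roughly like this - a minimal sketch assuming the MiniProfiler.AspNetCore AddMiniProfiler/UseMiniProfiler extensions; the actual wiring in the benchmarks fork may differ, and the plaintext handler here is just illustrative:

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Registers MiniProfiler services; options control result storage, routes, etc.
        services.AddMiniProfiler(options =>
        {
            options.RouteBasePath = "/profiler";
        });
    }

    public void Configure(IApplicationBuilder app)
    {
        // The middleware starts a MiniProfiler per request and stops it on the way out -
        // this is the per-request overhead being measured against the plain pipeline.
        app.UseMiniProfiler();

        // A bare-bones plaintext endpoint, standing in for the benchmark's /plaintext path.
        app.Run(async context =>
        {
            context.Response.ContentType = "text/plain";
            await context.Response.WriteAsync("Hello, World!");
        });
    }
}
```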
Here's a benchmark comparison of aspnet/benchmarks with and without the MiniProfiler middleware activated, to get a general idea of the overhead as a current baseline.
Without (151,417.5 RPS over 4 runs)
With MiniProfiler Enabled (115,415.35 RPS over 4 runs)
So it takes us from about 151,417.5 RPS down to 115,415.35 RPS, or about a 23.8% overhead/cost on the /plaintext test. Let's see if we can get that down a bit. This issue is for tracking and suggestions.
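For reference, that ~23.8% figure is simply the relative drop in throughput between the two runs above:

$$\frac{151{,}417.5 - 115{,}415.35}{151{,}417.5} \approx 0.238 \approx 23.8\%$$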