Support inline application scheduling mode #20952
@halter73 is there any dirty way I could check how this would affect the platform benchmarks using reflection or some other hacks?
@adamsitnik Reflection, no; you have to change the code here: aspnetcore/src/Servers/Kestrel/Transport.Sockets/src/Internal/SocketConnection.cs, lines 71 to 72 at commit 02e4d1a.
@davidfowl thank you!
For now you could just copy the SocketTransport, then change those two lines and the name, and register it. Or you could use rhtx in its inline mode if you don't need to use the Socket transport exactly. You could also look at modifying KestrelServer.ServiceContext.Scheduler to be Inline instead of ThreadPool; you should be able to grab the KestrelServer instance out of DI as an IServer. I don't think this should really matter, though, unless you're testing with HTTP/2-3 requests and/or chunked HTTP/1 request bodies.
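In case it helps, here's a rough sketch of that last option (grabbing the server from DI and flipping the scheduler via reflection). It's a benchmarking-only hack and assumes KestrelServer still exposes internal ServiceContext/Scheduler members with those names and that they're reachable via reflection; none of that is a public contract and it may break between versions.

```csharp
using System;
using System.IO.Pipelines;
using System.Reflection;
using Microsoft.AspNetCore.Hosting.Server;
using Microsoft.AspNetCore.Server.Kestrel.Core;
using Microsoft.Extensions.DependencyInjection;

public static class InlineSchedulingHack
{
    // Benchmarking-only: flips Kestrel's application scheduler to Inline via
    // reflection. Assumes internal members named "ServiceContext" and
    // "Scheduler" exist; both are implementation details.
    public static void TryEnable(IServiceProvider services)
    {
        if (services.GetRequiredService<IServer>() is not KestrelServer kestrel)
        {
            return;
        }

        var serviceContext = typeof(KestrelServer)
            .GetProperty("ServiceContext", BindingFlags.Instance | BindingFlags.NonPublic)
            ?.GetValue(kestrel);

        serviceContext?.GetType()
            .GetProperty("Scheduler", BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic)
            ?.SetValue(serviceContext, PipeScheduler.Inline);
    }
}
```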
@tmds I've run all the benchmarks for 2 configurations of aspnet.

Configuration 1 (both schedulers inline):

```csharp
var inputOptions = new PipeOptions(MemoryPool, PipeScheduler.Inline, PipeScheduler.Inline, maxReadBufferSize.Value, maxReadBufferSize.Value / 2, useSynchronizationContext: false);
var outputOptions = new PipeOptions(MemoryPool, PipeScheduler.Inline, PipeScheduler.Inline, maxWriteBufferSize.Value, maxWriteBufferSize.Value / 2, useSynchronizationContext: false);
```

Configuration 2 (ThreadPool for the input reader and output writer):

```csharp
var inputOptions = new PipeOptions(MemoryPool, PipeScheduler.ThreadPool, PipeScheduler.Inline, maxReadBufferSize.Value, maxReadBufferSize.Value / 2, useSynchronizationContext: false);
var outputOptions = new PipeOptions(MemoryPool, PipeScheduler.Inline, PipeScheduler.ThreadPool, maxWriteBufferSize.Value, maxWriteBufferSize.Value / 2, useSynchronizationContext: false);
```
I'm closing #23591 as a dupe of this. Below is the description for context.

**Is your feature request related to a problem? Please describe.**

System.Net.Sockets has added a mode to inline continuations on Unix, which can be enabled with DOTNET_SYSTEM_NET_SOCKETS_INLINE_COMPLETIONS=1. If you set that variable and run the TechEmpower JSON platform benchmark with Kestrel, performance degrades by ~12% even though there's no blocking I/O. Kestrel's own scheduling seems to negate the benefits of inlining Socket completions. If we change Kestrel's Socket transport to inline its own continuations, inlining Socket completions as well yields a ~7% RPS improvement. Here's the change I used to test inlining in Kestrel's Socket transport:

```diff
--- a/src/Servers/Kestrel/Transport.Sockets/src/Internal/SocketConnection.cs
+++ b/src/Servers/Kestrel/Transport.Sockets/src/Internal/SocketConnection.cs
@@ -68,8 +68,8 @@ namespace Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.Internal
             maxReadBufferSize ??= 0;
             maxWriteBufferSize ??= 0;
 
-            var inputOptions = new PipeOptions(MemoryPool, PipeScheduler.ThreadPool, scheduler, maxReadBufferSize.Value, maxReadBufferSize.Value / 2, useSynchronizationContext: false);
-            var outputOptions = new PipeOptions(MemoryPool, scheduler, PipeScheduler.ThreadPool, maxWriteBufferSize.Value, maxWriteBufferSize.Value / 2, useSynchronizationContext: false);
+            var inputOptions = new PipeOptions(MemoryPool, PipeScheduler.Inline, PipeScheduler.Inline, maxReadBufferSize.Value, maxReadBufferSize.Value / 2, useSynchronizationContext: false);
+            var outputOptions = new PipeOptions(MemoryPool, PipeScheduler.Inline, PipeScheduler.Inline, maxWriteBufferSize.Value, maxWriteBufferSize.Value / 2, useSynchronizationContext: false);
 
             var pair = DuplexPipe.CreateConnectionPair(inputOptions, outputOptions);
```

And below are the benchmark results. I gave two results for each scenario to give a vague idea of the variance.
Kestrel used to have a similar concept with KestrelServerOptions.ApplicationSchedulingMode, but that always used a "pubternal" SchedulingMode type and didn't affect transport-level scheduling post-2.0. When we refactored Kestrel's transport abstraction in 3.0, we removed this API.

**Describe the solution you'd like**

We should add a boolean property to SocketTransportOptions that can be set to tell the Socket transport to use PipeScheduler.Inline for both the readers and writers of the Input and Output pipes instead of dispatching to the ThreadPool and IOQueue like we do by default. Given that this can cause big problems, particularly if application code blocks, we should come up with a sufficiently scary name for it. Something along the lines of …
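For illustration only, here's a minimal sketch of what opting in might look like from an application, assuming the flag ends up on SocketTransportOptions; the property name UnsafeInlineScheduling below is a placeholder, not a real API.

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Hosting;

public static class Program
{
    public static void Main(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(webBuilder =>
            {
                // Hypothetical opt-in; the property name is illustrative only.
                // Only safe when request processing never blocks, since pipe
                // continuations would now run inline on the socket I/O threads.
                webBuilder.UseSockets(options => options.UnsafeInlineScheduling = true);

                webBuilder.Configure(app =>
                    app.Run(context => context.Response.WriteAsync("Hello")));
            })
            .Build()
            .Run();
}
```

The benchmark numbers above came from combining Kestrel-side inlining with DOTNET_SYSTEM_NET_SOCKETS_INLINE_COMPLETIONS=1 on the runtime side.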
Thanks for contacting us.
Closed via #24638 |
Using the inline scheduling mode, Pipelines can be configured to not dispatch to ThreadPool(/IoQueue).
This mode is specifically for non-blocking applications.
It's meant to be used in tandem with the inline socket continuation mode of the runtime added in dotnet/runtime#34945.
cc @davidfowl @halter73
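For illustration, here's a minimal sketch (not Kestrel's internal code) of what inline scheduling means at the System.IO.Pipelines level: with both schedulers set to PipeScheduler.Inline, continuations run on the thread that completed the read or flush instead of being dispatched.

```csharp
using System.Buffers;
using System.IO.Pipelines;

// Both the reader and writer schedulers are Inline, so ReadAsync/FlushAsync
// continuations run synchronously on the thread that completed the operation
// instead of being queued to the ThreadPool or an IOQueue.
var options = new PipeOptions(
    pool: MemoryPool<byte>.Shared,
    readerScheduler: PipeScheduler.Inline,
    writerScheduler: PipeScheduler.Inline,
    useSynchronizationContext: false);

var pipe = new Pipe(options);
```

This is only appropriate when the code consuming the pipe never blocks, which is why the description stresses that the mode is for non-blocking applications.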