Resolves aspnet#336
Force-pushed from 8bc60db to b449da0
Also resolves #259
Force-pushed from b449da0 to 1dea9f9
I agree that #336 and #259 are likely the same issue. I had considered trying a fix like this, but I don't think checking […]

This isn't the worst thing as long as we wrap the ODE in an […]

I actually modified the AsyncCanBeSent test to try to force the AV and ran it for a couple of hours today, but I had no luck: […]

I think there's more to this than meets the eye.
```diff
@@ -199,6 +200,9 @@ public void async_init(UvLoopHandle loop, UvAsyncHandle handle, uv_async_cb cb)
 protected Func<UvAsyncHandle, int> _uv_async_send;
 public void async_send(UvAsyncHandle handle)
 {
+    // Can't Assert with .Validate as that checks the threadId,
+    // and this function is used to post to the correct thread.
+    Debug.Assert(!handle.IsInvalid, "Handle is invalid");
```
If this was intended to prevent AVs, it could only have any effect on debug builds. I don't think even Trace.Assert would save us from the AVs we've been seeing.
This might be nice to have, but I doubt this fixes any bugs.
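For context on why the assert can't guard release builds: Debug.Assert is conditional on the DEBUG compilation symbol, so the whole call is stripped outside debug builds. A minimal standalone illustration (not Kestrel code):

```csharp
using System.Diagnostics;

class AssertDemo
{
    static void Main()
    {
        // Debug.Assert is [Conditional("DEBUG")]: the call (and the evaluation
        // of its arguments) is compiled out entirely unless DEBUG is defined,
        // so it cannot prevent anything in a Release build.
        Debug.Assert(true, "Only ever evaluated in DEBUG builds");

        // Trace.Assert is [Conditional("TRACE")], which Release builds usually
        // do define, but it is still a managed check after the fact; it cannot
        // stop native code from touching an already-freed libuv handle.
        Trace.Assert(true, "TRACE is typically defined in both configurations");
    }
}
```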
Helped me track down the bug; only needed for debug.
If you put just this change in, run ./build on Windows, and debug when the Assert pops up, you can clearly see what's going on.
It's when the loop has been shut down, so trying to queue […]
The tests will cause this to occur more often than I imagine it would happen in the wild, as I think it only occurs on shutdown.
Basically this fix is: don't post to a terminated loop. But the socket is also closed, and that is a more local check.
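A minimal sketch of what that more local check could look like; the type and member names here (ConnectionOutputSketch, TryPost, OnSocketClosed) are hypothetical, not Kestrel's actual code:

```csharp
using System;

// Sketch only: gate posts to the libuv loop on connection-local state rather
// than on the loop itself, since the socket being closed is known locally.
class ConnectionOutputSketch
{
    private readonly object _sync = new object();
    private bool _socketClosed;

    public bool TryPost(Action postToLoop)
    {
        lock (_sync)
        {
            if (_socketClosed)
            {
                // Socket already torn down; posting would race with loop shutdown.
                return false;
            }

            postToLoop(); // e.g. the uv_async_send wrapper
            return true;
        }
    }

    public void OnSocketClosed()
    {
        lock (_sync)
        {
            _socketClosed = true;
        }
    }
}
```

The point of the sketch is that the closed flag lives with the connection, so callers can check it without knowing anything about the loop's lifetime.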
This is too annoying, rebasing other PRs back onto it.
@davidfowl That's what "callers know" is normally: the […]
@Tragetaschen not really; the problem is caused by Kestrel being well behaved and the client being local and 0 ms away. Kestrel waits for the "socket shutdown" message to be "on the wire" before tearing down the socket; otherwise the client would never receive it and would remain in a connected state. However, the local client receives that message instantly and initiates its own shutdown, and libuv cleans everything up before the confirmation that the message is on the wire comes back. At that point Kestrel tries to dispose the socket and fails because everything is already gone, including the libuv loop, since it is being shut down.
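To make that ordering concrete, here is a rough sketch of the sequence; the types are hypothetical stand-ins, not the actual Kestrel or libuv wrapper API:

```csharp
using System;

// Hypothetical types for illustration only.
interface IHypotheticalSocket
{
    void Shutdown(Action onShutdownSent); // callback fires once the shutdown is "on the wire"
    void Dispose();                       // tears down the underlying libuv handles
}

static class ShutdownRaceSketch
{
    static void GracefulClose(IHypotheticalSocket socket)
    {
        // The server waits for the shutdown to be "on the wire" before
        // disposing anything, so a remote client reliably sees the connection end.
        socket.Shutdown(onShutdownSent: () =>
        {
            // With a local (0 ms away) client, the peer closes instantly and
            // libuv can finish tearing down the loop before this callback runs.
            // The dispose below then targets handles and a loop that are
            // already gone, surfacing as an ObjectDisposedException or an AV.
            socket.Dispose();
        });
    }
}
```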
@Tragetaschen also looks like your other issue is addressed by this: |
If you take my […], I think the call to […]. I think for now it's best to catch the potential ODEs in […]. We can then get rid of […].

@lodejard @davidfowl @benaadams What do you all think? Here's my first attempt at a PR: #347
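A minimal sketch of the catch-the-ODE approach, assuming the write is posted through some wrapper; TryPostWrite and postToLoop are hypothetical names, not the code in PR #347:

```csharp
using System;
using System.Diagnostics;

static class WriteSketch
{
    // Sketch only: swallow the ObjectDisposedException that can surface when a
    // write is posted after the connection and loop have already been torn down.
    static bool TryPostWrite(Action postToLoop)
    {
        try
        {
            postToLoop(); // e.g. queue the write onto the libuv loop thread
            return true;
        }
        catch (ObjectDisposedException ex)
        {
            // The connection went away between the caller's check and the post;
            // treat it as an abandoned write rather than crashing the server.
            Debug.WriteLine($"Write dropped, connection already disposed: {ex.Message}");
            return false;
        }
    }
}
```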