Version
bikeshedder@blackhole ~/projects/rust/hyper-fail master $ cargo tree | grep tokio
│ │ ├── tokio v0.2.2
│ │ │ └── tokio-macros v0.2.0
│ │ └── tokio-util v0.2.0
│ │ └── tokio v0.2.2 (*)
│ │ └── tokio v0.2.2 (*)
│ │ └── tokio v0.2.2 (*)
│ ├── tokio v0.2.2 (*)
│ └── tokio v0.2.2 (*)
└── tokio v0.2.2 (*)
Platform
bikeshedder@blackhole ~/projects/rust/hyper-fail master $ uname -a
Linux blackhole.hq.terreon.de 5.3.11-100.fc29.x86_64 #1 SMP Tue Nov 12 20:41:25 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Description
I tried writing a hyper-based HTTP service that uses a Mutex inside a service method. When benchmarking the service I noticed deadlocks: the application would hang forever, waiting for a mutex that was never released.
Reproduction
I have prepared a minimal reproduction using the current master branches of hyper and tokio, and nothing else. I used interval.tick() to fake external service calls (e.g. database access) so the executor gets a chance to suspend the service function and accept requests in parallel (a condensed sketch of the handler follows the commands below):
https://github.com/bikeshedder/hyper-fail
cargo run --release
bash benchmark.sh # this should work
bash benchmark.sh # this should fail (if not, repeat a few times)
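For reference, the core of the handler looks roughly like this. This is a condensed sketch of the repro, not a verbatim copy of the repository; the shared counter, the interval duration, and the exact service wiring are my reconstruction:

```rust
use std::convert::Infallible;
use std::net::SocketAddr;
use std::sync::Arc;
use std::time::Duration;

use hyper::service::{make_service_fn, service_fn};
use hyper::{Body, Request, Response, Server};
use tokio::sync::Mutex;

async fn handle(
    counter: Arc<Mutex<u64>>,
    _req: Request<Body>,
) -> Result<Response<Body>, Infallible> {
    // Fake an external service call (e.g. a database query) so the
    // executor can switch to other connections mid-request.
    let mut interval = tokio::time::interval(Duration::from_millis(10));
    interval.tick().await; // the first tick completes immediately
    interval.tick().await; // this one actually suspends the future

    // Under load, this lock() occasionally never completes.
    let mut counter = counter.lock().await;
    *counter += 1;
    Ok(Response::new(Body::from(counter.to_string())))
}

#[tokio::main]
async fn main() {
    let counter = Arc::new(Mutex::new(0u64));
    let make_svc = make_service_fn(move |_conn| {
        let counter = counter.clone();
        async move {
            Ok::<_, Infallible>(service_fn(move |req| handle(counter.clone(), req)))
        }
    });
    let addr = SocketAddr::from(([127, 0, 0, 1], 3000));
    Server::bind(&addr).serve(make_svc).await.unwrap();
}
```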
Output on my machine:
bikeshedder@blackhole ~/projects/rust/hyper-fail master $ ./benchmark.sh
Running 1s test @ http://localhost:3000/
4 threads and 64 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 32.36ms 1.04ms 37.11ms 73.28%
Req/Sec 488.17 51.48 640.00 80.00%
1946 requests in 1.00s, 142.53KB read
Requests/sec: 1938.53
Transfer/sec: 141.98KB
bikeshedder@blackhole ~/projects/rust/hyper-fail master $ ./benchmark.sh
Running 1s test @ http://localhost:3000/
4 threads and 64 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 0.00us 0.00us 0.00us -nan%
Req/Sec 0.00 0.00 0.00 -nan%
0 requests in 1.00s, 0.00B read
Requests/sec: 0.00
Transfer/sec: 0.00B
Analysis
I tried adding output before and after the counter.lock() call and noticed that after running the benchmark the numbers don't add up. This makes sense: wrk does not wait for outstanding requests to finish but simply terminates the connections, and hyper stops polling the service future once a client closes the connection. It seems that dropping a future while it is suspended inside Mutex::lock() can leave the mutex locked forever.
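The following standalone sketch (my reconstruction, not code from the repro) shows the shape of the problem without hyper: a lock() future is dropped while queued behind another lock holder, which mimics hyper dropping the service future when the client disconnects mid-request. With a cancel-safe Mutex the final lock() succeeds and this prints "reacquired: true"; with the behaviour described above it would hang or time out:

```rust
use std::sync::Arc;
use std::time::Duration;

use tokio::sync::Mutex;
use tokio::time::{delay_for, timeout};

#[tokio::main]
async fn main() {
    let mutex = Arc::new(Mutex::new(0u32));

    // Task A acquires the lock and holds it for a while.
    let m = mutex.clone();
    tokio::spawn(async move {
        let _guard = m.lock().await;
        delay_for(Duration::from_millis(100)).await;
    });
    delay_for(Duration::from_millis(10)).await; // let task A acquire the lock

    // Start waiting on lock(), then cancel (drop) the future before it
    // acquires the lock -- this is what happens to the service future
    // when the client terminates the connection.
    let cancelled = timeout(Duration::from_millis(20), mutex.lock()).await;
    assert!(cancelled.is_err(), "lock() should still be pending when cancelled");

    // With a cancel-safe Mutex this succeeds once task A releases the
    // lock; with the buggy behaviour it may never return.
    let guard = timeout(Duration::from_secs(1), mutex.lock()).await;
    println!("reacquired: {}", guard.is_ok());
}
```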