### Description
### What version of Go are you using (`go version`)?

```
$ go version
go version go1.17.13 linux/amd64
```
### Does this issue reproduce with the latest release?

I have no idea. Which one is the latest release?
### What operating system and processor architecture are you using (`go env`)?

<details><summary><code>go env</code> Output</summary>

```
$ go env
GO111MODULE=""
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/eliza/.cache/go-build"
GOENV="/home/eliza/.config/go/env"
GOEXE=""
GOEXPERIMENT=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOINSECURE=""
GOMODCACHE="/home/eliza/go/pkg/mod"
GONOPROXY=""
GONOSUMDB=""
GOOS="linux"
GOPATH="/home/eliza/go"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/nix/store/q4bf5z9r6mcgmngcnwy3bn192gc6r91d-go-1.17.13/share/go"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/nix/store/q4bf5z9r6mcgmngcnwy3bn192gc6r91d-go-1.17.13/share/go/pkg/tool/linux_amd64"
GOVCS=""
GOVERSION="go1.17.13"
GCCGO="gccgo"
AR="ar"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD="/dev/null"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/run/user/1000/go-build3066380988=/tmp/go-build -gno-record-gcc-switches"
```

</details>
### What did you do?

When constructing an `http.Server` with a value provided for `ReadHeaderTimeout`, an open connection will be closed by the server if it has remained idle for that timeout. This occurs if a value is not provided for `IdleTimeout`. For example, the following server will close any open connections if they have remained idle for 100 ms:
```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func hello(w http.ResponseWriter, req *http.Request) {
	io.WriteString(w, "hello world")
}

func main() {
	srv := &http.Server{
		Addr:              "127.0.0.1:42069",
		ReadHeaderTimeout: 100 * time.Millisecond,
		IdleTimeout:       0,
		ReadTimeout:       0,
	}
	srv.Handler = http.HandlerFunc(hello)
	if err := srv.ListenAndServe(); err != nil {
		fmt.Printf("%v", err)
	}
}
```
### What did you expect to see?

The timer for the header read timeout should be started when a request begins (i.e., when the first byte of a new request is received). Technically the timeout should probably only start after the first line of the request has been read (e.g. `GET / HTTP/1.1`), as that line is not part of the headers, but whatever; I'm not going to nitpick over that.
### What did you see instead?

The timeout considers time the connection has spent idle as part of the timeout duration. This means that the timeout, which is supposed to apply only to reading headers, also applies to all the time that has elapsed since the end of the previous request, even if that time was not spent reading headers.

Surprisingly, setting an `IdleTimeout` on the server prevents this behavior.
### Why this occurs

This occurs because the accept loop for HTTP/1.1 connections in the `serve` function, here:

(`src/net/http/server.go`, lines 1903 to 2010 at `bd56cb9`)

begins with a call to `readRequest`.
The `readRequest` function begins by immediately setting a deadline for reading from the connection, based on the values of `Server.ReadHeaderTimeout` and `Server.ReadTimeout`:

(`src/net/http/server.go`, lines 958 to 969 at `bd56cb9`)
It then tries to actually read from the connection:

(`src/net/http/server.go`, lines 977 to 982 at `bd56cb9`)
In the accept loop in `serve`, when a previous request completes, the read deadline is reset and the loop wraps around to the beginning, where it calls `readRequest` again. But resetting the read deadline doesn't actually matter, because `readRequest` will immediately set a new deadline based on the value of `ReadHeaderTimeout`:

(`src/net/http/server.go`, lines 2003 to 2009 at `bd56cb9`)

This means that time spent waiting for the initial read is limited by a deadline that's set immediately upon completion of the previous request.
Note that this does not apply when a value is provided for `Server.IdleTimeout`: in that case, the server first attempts to peek from the connection in order to wait for it to become readable, limited by a deadline based on the idle timeout. If the connection becomes readable before the idle timeout elapses and a new request begins, that request's `ReadHeaderTimeout` will (correctly) apply only to reading the new request, not to time spent waiting for the connection to become readable.