Go version
go1.26.2
Output of go env in your module/workspace:
AR='ar'
CC='clang'
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_ENABLED='0'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
CXX='clang++'
GCCGO='gccgo'
GO111MODULE='on'
GOAMD64='v2'
GOARCH='amd64'
GOAUTH='netrc'
GOBIN=''
GOCACHEPROG=''
GODEBUG=''
GOEXE=''
GOEXPERIMENT=''
GOFIPS140='off'
GOFLAGS=''
GOGCCFLAGS='-fPIC -m64 -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -ffile-prefix-map=/var/folders/wd/rwplb8_54nlczshmq1njpdnc0000gn/T/go-build29341136=/tmp/go-build -gno-record-gcc-switches'
GOINSECURE=''
GONOPROXY=''
GONOSUMDB=''
GOOS='freebsd'
GOPRIVATE=''
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOVCS=''
GOVERSION='go1.26.2'
What did you do?
We run a high-throughput Go HTTP/2 reverse proxy. After upgrading golang.org/x/net from v0.49.0 to v0.53.0, which includes the change at https://go-review.googlesource.com/c/net/+/762040, we observed increases across several heap-allocation-related metrics.
What did you see happen?
Comparing heap profiles from two hosts running identical workloads on the two versions of x/net points to the commit that added strings.Clone calls in hpack.(*headerFieldTable).addEntry for every HPACK header field inserted into the dynamic table.
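To make the mechanism concrete, here is a simplified sketch of the post-change path (toy stand-ins that mirror the x/net names; not the actual source):

```go
package hpacksketch

import "strings"

// Toy stand-ins for the hpack types; this only illustrates where the new
// allocations come from, it is not the real implementation.
type HeaderField struct{ Name, Value string }

type headerFieldTable struct{ ents []HeaderField }

// After the change, every dynamic-table insertion clones both the name and
// the value so the entry no longer aliases the decoder's read buffer. That
// means two extra heap allocations per inserted header field.
func (t *headerFieldTable) addEntry(f HeaderField) {
	f.Name = strings.Clone(f.Name)
	f.Value = strings.Clone(f.Value)
	t.ents = append(t.ents, f) // plus the existing index/eviction bookkeeping
}
```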
Here are some relevant runtime metrics that saw regression compared to the baseline:
| Metric            | Change |
|-------------------|--------|
| HeapObjects       | +14%   |
| HeapInuse         | +2.2%  |
| HeapAlloc         | +2.7%  |
| Mallocs           | +14%   |
| Frees             | +14%   |
| TotalAlloc (rate) | +6%    |
| GCSys             | +18%   |
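These numbers come from periodic runtime.MemStats samples on both hosts. Our production exporter is different, but a minimal version of the sampler looks like this, just to show which counters are being compared:

```go
package main

import (
	"fmt"
	"runtime"
	"time"
)

// Periodically print the MemStats fields referenced in the table above.
// Cumulative counters (Mallocs, Frees, TotalAlloc) are reported as deltas
// per interval so the two hosts can be compared as rates.
func main() {
	var prev runtime.MemStats
	runtime.ReadMemStats(&prev)
	for {
		time.Sleep(time.Minute)
		var cur runtime.MemStats
		runtime.ReadMemStats(&cur)
		fmt.Printf("HeapObjects=%d HeapInuse=%d HeapAlloc=%d Mallocs=%d Frees=%d TotalAlloc=%d GCSys=%d\n",
			cur.HeapObjects, cur.HeapInuse, cur.HeapAlloc,
			cur.Mallocs-prev.Mallocs, cur.Frees-prev.Frees,
			cur.TotalAlloc-prev.TotalAlloc, cur.GCSys)
		prev = cur
	}
}
```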
Since this code path is hit from both the HPACK decoder and the encoder, a reverse proxy incurs roughly 4x as many strings.Clone calls per request: dynamic-table insertions happen when decoding and when encoding headers, on both the inbound and the outbound leg. So for workloads with high dynamic-table churn, which is typical of reverse proxies, the added cost of cloning exceeds the benefit of the escape prevention it was added for.
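A rough way to isolate the per-insertion cost is a benchmark like the following (synthetic header names/values standing in for the high-churn headers we see in production); comparing allocs/op between the two x/net versions shows the difference:

```go
package hpackbench

import (
	"bytes"
	"fmt"
	"testing"

	"golang.org/x/net/http2/hpack"
)

// BenchmarkDynamicTableChurn approximates the proxy hot path: every header
// value is new, so each encode and each decode inserts a dynamic-table entry
// (and, after the change, clones the strings). The Sprintf allocation is a
// constant in both runs.
func BenchmarkDynamicTableChurn(b *testing.B) {
	var buf bytes.Buffer
	enc := hpack.NewEncoder(&buf)
	dec := hpack.NewDecoder(4096, func(hpack.HeaderField) {})

	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		buf.Reset()
		// A unique value defeats dynamic-table hits and forces an insertion
		// on both the encode and the decode side.
		if err := enc.WriteField(hpack.HeaderField{
			Name:  "x-request-id",
			Value: fmt.Sprintf("req-%d", i),
		}); err != nil {
			b.Fatal(err)
		}
		if _, err := dec.Write(buf.Bytes()); err != nil {
			b.Fatal(err)
		}
	}
}
```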
It would help to understand the workload this commit was intended to optimize.
Could we target it more surgically rather than cloning unconditionally?
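For example, one possible shape (purely illustrative, hypothetical signature; the actual fix may well look different) would be to clone only on the decode path, where the strings alias the connection's read buffer, and skip it on the encode path, where the caller already owns its strings:

```go
package hpacksketch

import "strings"

// Toy stand-ins again; this is a sketch of the idea, not a proposed patch.
type HeaderField struct{ Name, Value string }

type headerFieldTable struct{ ents []HeaderField }

// Hypothetical: let the caller say whether the field's strings may alias a
// transient buffer. The decoder would pass clone=true; the encoder would
// pass clone=false and avoid the two extra allocations per insertion.
func (t *headerFieldTable) addEntry(f HeaderField, clone bool) {
	if clone {
		f.Name = strings.Clone(f.Name)
		f.Value = strings.Clone(f.Value)
	}
	t.ents = append(t.ents, f)
}
```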
/cc @neild
What did you expect to see?
Neutral in terms of heap allocation behavior