Description
Which version of Elastic are you using?
[x] elastic.v7 (for Elasticsearch 7.x)
[ ] elastic.v6 (for Elasticsearch 6.x)
[ ] elastic.v5 (for Elasticsearch 5.x)
[ ] elastic.v3 (for Elasticsearch 2.x)
[ ] elastic.v2 (for Elasticsearch 1.x)
I have been doing many .Get requests in a loop (10k per second or so) and noticed that after a while I get a dial tcp 172.17.0.2:9200: connect: cannot assign requested address error. After debugging and trying different things (pinning SetURL to a fixed value, disabling SetSniff), I realized that at some point the number of idle connections just keeps growing (I watched it with netstat and grep :9200), up to 30k or so, at which point the program crashes with the error above.
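For context, the request loop is essentially just repeated Get calls. A stripped-down sketch of it looks roughly like this (the index name and IDs are placeholders, not my real data):

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/olivere/elastic/v7"
)

func main() {
	client, err := elastic.NewClient(elastic.SetURL("http://127.0.0.1:9200"))
	if err != nil {
		log.Fatal(err)
	}

	ctx := context.Background()
	for i := 0; i < 1000000; i++ {
		// One Get per iteration; at ~10k requests per second the idle
		// connections reported by netstat keep growing until the dial error hits.
		_, err := client.Get().Index("myindex").Id(fmt.Sprintf("%d", i)).Do(ctx)
		if err != nil && !elastic.IsNotFound(err) {
			log.Fatal(err)
		}
	}
}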
I tried different things, e.g., adding another defer to PerformRequest, because the body has to be both fully consumed and closed for the connection to be returned to the pool (I suspected that when LimitReader is used, the body is not fully read):
defer res.Body.Close()
defer io.Copy(ioutil.Discard, res.Body)
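(Since defers run LIFO, the io.Copy registered second runs before Close, so the body is drained first.) For reference, this is the drain-then-close pattern I was trying to replicate, shown on a plain net/http request; the ping helper is just for illustration:

import (
	"io"
	"io/ioutil"
	"net/http"
)

// ping illustrates drain-then-close on a raw net/http response: read the body
// to EOF, then close it, so the underlying connection goes back into the
// Transport's idle pool instead of being torn down.
func ping(url string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	io.Copy(ioutil.Discard, resp.Body)
	return resp.Body.Close()
}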
I also read the suggestions from the Docker wiki page, but they do not apply in my case: my program can access ES at both 127.0.0.1:9200 and 172.17.0.2:9200. It does happen that the client starts with 127.0.0.1:9200 and then sniffing switches it to 172.17.0.2:9200, which leaves a bunch of idle connections behind and can make the issue reported here happen faster (this might be fixed by #1507, I haven't tested it), but fundamentally that is not the reason it accumulates 30k idle connections; it only adds a 2x factor for a while.
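For the record, this is what I used while testing: pinning the URL and disabling sniffing so the client keeps talking to 127.0.0.1 instead of switching to the Docker-internal address (the URL here is just my local setup):

client, err := elastic.NewClient(
	elastic.SetURL("http://127.0.0.1:9200"),
	elastic.SetSniff(false),
)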
But that did not help. What did solve the issue in the end was using cleanhttp.DefaultPooledClient as the HTTP client:
client, err := elastic.NewClient(
	elastic.SetHttpClient(cleanhttp.DefaultPooledClient()),
)
I suspect it addresses the issue because it uses a larger MaxIdleConnsPerHost value.
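To illustrate what I mean: http.DefaultClient uses http.DefaultTransport, which keeps only 2 idle connections per host, so my understanding is that under this load most connections are closed after use, new sockets are opened for every request, and the closed ones sit in TIME_WAIT until ephemeral ports run out, which matches the cannot assign requested address error. Something roughly equivalent to a pooled client (the exact values cleanhttp uses may differ; net/http and time imports assumed) would be:

// A transport with a much higher per-host idle connection limit than the
// default of 2, so connections to the single ES host actually get reused.
transport := &http.Transport{
	Proxy:               http.ProxyFromEnvironment,
	MaxIdleConns:        100,
	MaxIdleConnsPerHost: 100,
	IdleConnTimeout:     90 * time.Second,
}

client, err := elastic.NewClient(
	elastic.SetHttpClient(&http.Client{Transport: transport}),
)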
So maybe this should be the default instead of http.DefaultClient
?