
Log the state of the connection pool once per minute #1606


Merged: 1 commit merged into rust-lang:master on Jan 31, 2019

Conversation

@sgrif (Contributor) commented on Jan 26, 2019

On rare occasions we've had outages occur as a result of the connection
pool refusing to return any connections. Shortly before this occurs, we
start to see significant spikes in response times (likely because the
pool has begun serving only 1 or 2 connections and there's a large
amount of contention).

Right now we have limited capability to debug this. The only
explanations I can come up with are a bug in r2d2, something killing
our worker threads in a way that does not run destructors, or something
causing threads to hang indefinitely.

If one of those three options is true, we should see
`connections - idle_connections` trend towards zero over time and
diverge from `in_flight_requests` over time (they aren't expected to be
exactly equal, since some requests will never attempt to get a database
connection, and we can have more requests than available connections
during normal operation).

This isn't going to be enough information to definitively figure out
what's going on here, but it should hopefully give us *some* information
to help figure out where to look next.
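As a rough illustration of the approach described above (not the actual code from this PR's commit), the logging boils down to a background thread that prints the r2d2 pool state and an externally maintained in-flight request counter once per minute. The function name and the way `in_flight_requests` is tracked are assumptions made for the sketch; the log format mirrors the lines shown in the review below.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;
use std::time::Duration;

/// Hypothetical sketch: log the pool state and the in-flight request
/// count once per minute from a background thread.
fn spawn_pool_logger<M: r2d2::ManageConnection>(
    pool: r2d2::Pool<M>,
    in_flight_requests: Arc<AtomicUsize>,
) {
    thread::spawn(move || loop {
        // `Pool::state()` reports total and idle connections; its Debug
        // output is the `State { connections: .., idle_connections: .. }`
        // string seen in the staging logs below.
        println!(
            "connection_pool_status=\"{:?}\" in_flight_requests={}",
            pool.state(),
            in_flight_requests.load(Ordering::SeqCst),
        );
        thread::sleep(Duration::from_secs(60));
    });
}
```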

@sgrif requested a review from @jtgeibel on January 26, 2019, 18:16
@jtgeibel (Member) left a comment


LGTM! Here is what I'm seeing in my logs on staging after enabling the environment variable:

```
$ heroku logs | grep connection
2019-01-31T00:52:23.556302+00:00 app[web.1]: connection_pool_status="State { connections: 8, idle_connections: 8 }" in_flight_requests=0
2019-01-31T00:54:49.617156+00:00 app[web.1]: connection_pool_status="State { connections: 8, idle_connections: 8 }" in_flight_requests=0
2019-01-31T00:55:58.628764+00:00 app[web.1]: connection_pool_status="State { connections: 8, idle_connections: 8 }" in_flight_requests=0
```

@bors r+
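The review above notes that the logging is opt-in via an environment variable. A hypothetical wiring, reusing the `spawn_pool_logger` sketch from earlier (the variable name here is an assumption, not necessarily the one crates.io actually checks):

```rust
// Hypothetical gate: the variable name is an assumption for illustration only.
if std::env::var("LOG_CONNECTION_POOL_STATUS").is_ok() {
    spawn_pool_logger(pool.clone(), in_flight_requests.clone());
}
```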

@sgrif (Contributor, Author) commented on Jan 31, 2019

@bors r=jtgeibel

@bors (Contributor) commented on Jan 31, 2019

📌 Commit 90edc2f has been approved by jtgeibel

@bors (Contributor) commented on Jan 31, 2019

⌛ Testing commit 90edc2f with merge 3b5bed6...

bors added a commit that referenced this pull request Jan 31, 2019
Log the state of the connection pool once per minute

@bors (Contributor) commented on Jan 31, 2019

💔 Test failed - checks-travis

@sgrif (Contributor, Author) commented on Jan 31, 2019 via email

@bors (Contributor) commented on Jan 31, 2019

⌛ Testing commit 90edc2f with merge 6e6eeb6...

bors added a commit that referenced this pull request Jan 31, 2019
Log the state of the connection pool once per minute

@bors (Contributor) commented on Jan 31, 2019

☀️ Test successful - checks-travis
Approved by: jtgeibel
Pushing 6e6eeb6 to master...

@bors merged commit 90edc2f into rust-lang:master on Jan 31, 2019
@sgrif deleted the sg-log-connection-pool-status branch on March 9, 2019, 01:34