Sentry HTTP 429 errors causing AWS Lambda InvokeError (status 502 response) #4582
This is the logic that we are hitting: `sentry-javascript/packages/node/src/transports/base/index.ts`, lines 248 to 252 (at `1bf9883`).
Sentry, though, doesn't capture its own internal errors. Perhaps there is a way we can edit https://github.com/getsentry/sentry-javascript/blob/master/packages/serverless/src/awslambda.ts to address that. Maybe we could monkey patch whatever AWS Lambda is listening for to filter out these errors. We could also try to not throw errors for 429s, since they are pretty common. What do you think?
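The "don't throw for 429s" idea could be sketched roughly like this (a hypothetical illustration, not the SDK's actual transport code; `handleTransportResponse` and the returned shapes are invented for this sketch):

```javascript
// Hypothetical sketch only: treat HTTP 429 as a non-fatal, logged
// outcome instead of rejecting the send promise with an error.
// `handleTransportResponse` is an invented name, not a real SDK export.
function handleTransportResponse(statusCode) {
  if (statusCode === 429) {
    // Rate-limited: drop the event quietly instead of throwing,
    // so the rejection can never surface as a Lambda InvokeError.
    console.warn('Sentry rate-limited this event; dropping it.');
    return { status: 'rate_limit' };
  }
  if (statusCode >= 400) {
    // Other HTTP failures still reject, as described above.
    throw new Error(`Sentry responded with status ${statusCode}`);
  }
  return { status: 'success' };
}
```

Under this sketch, only non-429 failures would ever propagate out of the transport.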
@AbhiPrasad we were facing the same issue. Would it be a legitimate way to add an additional parameter on
@AbhiPrasad I think that unhandleable errors should never come from an external library. On the current Sentry implementation, I'm not seeing a way the client could catch this error, since it happens asynchronously. A correct approach in this case would be to at least have these errors absorbed and just logged. Maybe log always, or only in case of
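The "absorb and log" approach suggested here could look roughly like this (a minimal sketch; `safeSend` is a hypothetical helper, not part of the Sentry SDK):

```javascript
// Minimal sketch: wrap any async SDK call so that its rejection is
// absorbed and logged rather than propagated to the Lambda runtime.
// `safeSend` is a hypothetical helper, not a real Sentry API.
async function safeSend(sendFn) {
  try {
    return await sendFn();
  } catch (err) {
    // Absorb the SDK failure (e.g. an HTTP 429) and only log it.
    console.warn('Ignoring Sentry SDK error:', err.message);
    return undefined;
  }
}
```

With a wrapper like this around the send/flush path, a 429 from the transport would be logged but could no longer reject the invocation.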
Yes @ichina, that probably would work fine. Thanks!
Option released with https://github.com/getsentry/sentry-javascript/releases/tag/6.18.0

```javascript
exports.handler = Sentry.AWSLambda.wrapHandler(yourHandler, {
  // Ignore any errors raised by the Sentry SDK on attempts to send events to Sentry
  ignoreSentryErrors: true,
});
```
Hi! I've seen this is also happening when using the Lambda Layer.
You'll need to be on version 6.18.0 or above.
@AbhiPrasad and if I'm on that version or above, how can I tell the Sentry layer NOT to fail the Lambda on a Sentry failure or a Sentry 429? Thanks!
You have to supply the option into the handler, as was stated above: https://docs.sentry.io/platforms/node/guides/aws-lambda/#ignore-sentry-errors.
@AbhiPrasad I think I may be missing something, because when I use the Lambda Layer integration I don't have a handler where I can configure the options like I do when I use the library.
We've just had a fairly catastrophic failure on our lambdas due to Sentry being integrated without having set the `ignoreSentryErrors` option. It's not easy to catch in testing before going to production and hitting higher loads of traffic, as you're less likely to hit a rate limit.
Is `ignoreSentryErrors` enabled by default?
I have to agree, having an error reporting package that can bring down your application is incredibly scary. I added a lambda wrapper feeling confident it would sit there silently and not interrupt the actual logic. The decision to not make `ignoreSentryErrors` the default surprises me.
I would love to know if a meeting has taken place to decide this. Sincerely, a disappointed customer
Package + Version
- @sentry/browser
- @sentry/node 6.17.6
- raven-js
- raven-node (raven for node)
- @sentry/serverless 6.17.6
- @sentry/tracing 6.17.6

Version: 6.17.6
Description
Sometimes, due to heavy Sentry API usage, we are getting HTTP 429 error responses with the Sentry SDK in our Node.js AWS Lambda functions.
However, the problem is that this error, raised inside the Sentry SDK, sometimes results in a seemingly unhandleable InvokeError (an AWS Lambda HTTP 502 response), which is problematic for us because our actual business logic is working just fine.
Our expectation is that internal Sentry errors should not cause an outage in our own APIs.
We didn't find any way to handle/catch this error, because it looks like it's failing outside of our Sentry calls such as `Sentry.init()`, `Sentry.wrapHandler()`, etc. We are mostly certain the 429 error response is due to an excess of our Transactions quota.
When decreasing the `tracesSampleRate` config in `Sentry.init()` from `0.2` to `0`, we stopped having this error. We were seeing this issue happen consistently in random cases, at seemingly the same rate as the value we had for `tracesSampleRate`. We've also tried setting it to `1`, and could confirm a 100% error rate.

So currently, the workaround we are using is disabling this entire feature by setting it to `0`, and no more unhandled Sentry 429 HTTP errors are thrown. Still, having to disable an entire feature so that an external dependency doesn't break our app doesn't seem like the correct solution.

For completeness, this is our current config: