Request Entity Too Large #604
Is this getsentry.com or are you hosting it yourself?
Getsentry.com
This would mean the packet is very large. I think we should do something on our end around logging, but it's a little complicated. We definitely should log a reasonable version of the message (beyond just what we give you here). Unfortunately, without changes to the client I don't have a great answer.
Hm, ok. What composes the packet? I am working with a number of large files. Happy to test out or make changes as advised.
Closing this out as it was (potentially) resolved elsewhere
nice, thank you
It was? I'm still experiencing 413 responses from getsentry.com when POSTing a 5.3MB payload.
@jmagnusson do you see this in the 5.6 release?
Yup, still.
So this is still an issue, although thanks to your help earlier/elsewhere it is being logged sufficiently locally. I believe a couple of actions should be taken:
Seeing this as well from the error dump of a complex elasticsearch query.
It still hasn't been resolved?
Just got hit by this issue (not related to the python lib though). What's the maximum body size allowed by Sentry?
Most of the time this is due to the configuration of the server. If this is getsentry.com the request limit is extremely high and there's no way you should be hitting it. If you are, then it likely means that the client-side variable trimming isn't working correctly, or has been re-configured to send data that is too large.
This is getsentry.com and the issue is that the metadata I've submitted is simply too large (it exceeds 1 MB), not an error in trimming. @pehrlich mentioned that the Sentry server could log an exception whenever a request with a large body is discarded, but that would require processing the whole message to find out the id, which would defeat the original protection in place, unless, of course, the id can be obtained another way (query parameter?) while still discarding the offending body. @pehrlich's second suggestion is in line with something I wanted to suggest too: let the client preemptively calculate the body size and remove certain information (e.g. all metadata?) so that at least the most relevant pieces still won't go unnoticed.
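A minimal sketch of that second suggestion, independent of raven's internals: serialize the event and, if it exceeds a limit, drop the bulkiest optional sections before sending. The 100 KB threshold and the order in which keys are dropped are assumptions for illustration, not documented Sentry behaviour.

```python
import json

MAX_BODY_BYTES = 100 * 1024  # assumed limit; tune to your server/plan
DROPPABLE_KEYS = ("extra", "breadcrumbs", "request")  # drop bulkiest sections first


def shrink_event(event):
    """Remove optional sections until the serialized event fits the limit."""
    for key in DROPPABLE_KEYS:
        if len(json.dumps(event).encode("utf-8")) <= MAX_BODY_BYTES:
            break
        event.pop(key, None)
    return event
```

That way the core exception would still get through even when the attached metadata is oversized.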
Any news here? We are still getting this 413 error with getsentry.com.
We were having the same problem on self-hosted Sentry. Of course, the problem was the request size limit in Nginx.
We are experiencing this issue as well. In our case, it also seems to be related to elasticsearch logging.
Still getting request entity too large:
Any chance this could get prioritized? I'm missing exceptions from some of our most important endpoints because of it.
I think we will need to do some better pruning for the breadcrumbs, which I assume are triggering this in the case mentioned above.
@mitsuhiko Any hints as to when this might be fixed?
I'm going to take a look at this.
I am also experiencing this issue. The excessive size in my case is certainly coming from the breadcrumbs (that would be SQL queries etc.; just want to get an idea of where to trim here).
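One low-tech way to keep SQL queries (or other long strings) from blowing up the breadcrumb payload is to truncate them at the call site before they are logged. A sketch; the 512-character cap and the `query` variable in the commented usage are arbitrary examples:

```python
import logging

logger = logging.getLogger(__name__)


def shorten(text, max_length=512, placeholder="... [truncated]"):
    """Trim long strings so log messages (and the breadcrumbs built from them) stay small."""
    if len(text) <= max_length:
        return text
    return text[: max_length - len(placeholder)] + placeholder


# example call site; `query` is whatever large string you were logging
# logger.debug("slow query: %s", shorten(query))
```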
I am running into the same issue on a React/Redux app:
Is 183kb considered too large?
@fcoury The limit seems to be 100kb: https://docs.sentry.io/learn/quotas/#attributes-limits (@mitsuhiko mistag 😉). Currently investigating, should have more info later today or tomorrow.
Having a request that is around 111KB dropped by |
Experiencing same problem with
I have |
This issue is almost 3 years old now. Will it ever be fixed? I've always loved Sentry and felt confident that it will notify us of any exceptions that occur in production, but this issue scuffs that feeling a bit.
@jmagnusson it should be very rare that you'd see this, and it might be more commonly happening with on-premise installs since the max size is configurable. If you're seeing it on sentry.io let us know and we can probably just duct-tape the packet size in the meantime.
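If the oversized packets come from huge captured strings or long collections, the client-side trimming mentioned earlier can also be tightened. A sketch; the kwarg names below (`string_max_length`, `list_max_length`) are from memory and should be double-checked against the raven version in use:

```python
from raven import Client

client = Client(
    # "https://<key>@sentry.io/<project>",  # your DSN here
    string_max_length=200,  # cap the length of captured strings
    list_max_length=25,     # cap how many items of each list/dict are kept
)
```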
@dcramer We're using your hosted solution.
@dcramer I'm using sentry.io and I'm getting it too.
@jmagnusson @peterbe are you still experiencing this?
As of 8 days ago I was. See my previous comment.
I meant as in right now. We made changes yesterday which should have reduced the impact.
@dcramer Yes, but because I don't know how to corner this, here's what I did: I injected some print in:

```diff
-     return opener.open(url, data, timeout)
+     try:
+         return opener.open(url, data, timeout)
+     except Exception:
+         print("EXCEPTION!!!!", url, url.get_full_url(), data[:1000])
+         raise
```

Then I hawkishly watch my uwsgi log file and I get this:
@peterbe any chance you could try to capture the data to a file and email that to david at sentry.io? This might be something where it's genuinely creating a gigantic packet. The SDK is supposed to limit the length of everything, but that's not always been true.
Aside: you can sign it with a PGP key if you're concerned: https://sentry.io/security/
@dcramer I don't think it needs to be signed. I've written a little hack now to generate these data files. I'll email you one if I get another exception. At the time of writing I haven't had one happen in a while.
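For anyone wanting to do the same, a rough sketch of such a hack, assuming the payload is raven's default base64-encoded, zlib-compressed JSON (adjust the decoding if your transport sends raw or deflate-only bodies); the function name and `/tmp` location are arbitrary:

```python
import base64
import json
import time
import zlib


def dump_failed_payload(data, directory="/tmp"):
    """Write a failed Sentry payload to disk so it can be inspected (or emailed) later."""
    try:
        decoded = json.loads(zlib.decompress(base64.b64decode(data)))
    except Exception:
        decoded = {"raw": repr(data[:1000])}  # fall back to the raw bytes
    path = "{}/sentry-payload-{}.json".format(directory, int(time.time()))
    with open(path, "w") as f:
        json.dump(decoded, f, indent=2)
    return path
```

Called from the `except` branch of a patch like the one above, it captures exactly what would have been sent.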
The problem in this specific example is that breadcrumbs are not truncating log messages. In this case there's an elasticsearch logger (not sure if yours or the official library), and an unbelievably large debug message which seems to be the full payload being inserted.
Stock Django 1.11 with no fancy logging specifically for elasticsearch_dsl. I appreciate that sometimes Elasticsearch barfs, but there should still be an error sent to Sentry. I do use this little snippet for calling elasticsearch_dsl:

```python
import time

from elasticsearch.exceptions import (
    ConnectionTimeout,
    ConnectionError,
    TransportError,
    NotFoundError,
)

_es_operational_exceptions = (
    ConnectionTimeout,
    ConnectionError,
    TransportError,
)


def es_retry(callable, *args, **kwargs):
    sleep_time = kwargs.pop('_sleep_time', 1)
    attempts = kwargs.pop('_attempts', 10)
    verbose = kwargs.pop('_verbose', False)
    try:
        return callable(*args, **kwargs)
    except _es_operational_exceptions as exception:
        if attempts:
            attempts -= 1
            if verbose:
                print("ES Retrying ({} {}) {}".format(
                    attempts,
                    sleep_time,
                    exception
                ))
            time.sleep(sleep_time)
            # retry call restored here; the pasted snippet appears to have
            # lost this line
            return es_retry(
                callable,
                *args,
                _sleep_time=sleep_time,
                _attempts=attempts,
                _verbose=verbose,
                **kwargs
            )
        else:
            raise
```

I don't see those
@peterbe sorry, I should have been clear that I was intending to fix it, but the behavior (of using debug messages to log e.g. raw packets) is poor in general. ES certainly isn't alone in that.
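Until the SDK truncates breadcrumb messages properly, a practical workaround is to keep the chatty library loggers out of the debug range entirely (or out of breadcrumbs). A sketch using the standard elasticsearch-py logger names; the `ignore_logger` call is hedged since its availability depends on the raven version:

```python
import logging

# elasticsearch-py's loggers; at DEBUG they include request/response bodies
logging.getLogger("elasticsearch").setLevel(logging.WARNING)
logging.getLogger("elasticsearch.trace").setLevel(logging.WARNING)

# if your raven version exposes it, the logger can also be excluded from
# breadcrumbs without changing its level
try:
    from raven import breadcrumbs
    breadcrumbs.ignore_logger("elasticsearch.trace")
except (ImportError, AttributeError):
    pass
```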
Any chance of deploying a new version of
@d-dorazio it will be out by tomorrow.
I am still experiencing this issue even after updating to version 6.10.0.
I am having this error working with Next.js and "@sentry/react" version 5.20.1, with the default React configuration.
For reference, @davidpaley apparently solved his problem here: getsentry/sentry-javascript#2798
Recently started getting the following error, without any explicit change to Raven or Sentry configuration. Google results are unrevealing. Is there something known that can cause this?
Any proper ways of logging more fully the original AttributeError, which caused the Sentry error, besides disabling Sentry?