
Request Entity Too Large #604

Closed
pehrlich opened this issue Apr 29, 2015 · 50 comments

Comments

@pehrlich

Recently started getting the following error, without any explicit change to the Raven or Sentry configuration. Google results are unrevealing. Is there something known that can cause this?

Is there a proper way to log the original AttributeError (which caused the Sentry error) more fully, besides disabling Sentry?

INFO:werkzeug:172.17.42.1 - - [29/Apr/2015 01:02:51] "POST / HTTP/1.1" 500 -
ERROR:sentry.errors:Unable to reach Sentry log server: HTTP Error 413: Request Entity Too Large (url: https://app.getsentry.com/api/39671/store/, body: b'<html>\r\n<head><title>413 Request Entity Too Large</title></head>\r\n<body bgcolor="white">\r\n<center><h1>413 Request Entity Too Large</h1></center>\r\n<hr><center>nginx</center>\r\n</body>\r\n</html>\r\n')
Traceback (most recent call last):
  File "/usr/local/lib/python3.4/site-packages/raven/transport/threaded.py", line 159, in send_sync
    super(ThreadedHTTPTransport, self).send(data, headers)
  File "/usr/local/lib/python3.4/site-packages/raven/transport/http.py", line 49, in send
    ca_certs=self.ca_certs,
  File "/usr/local/lib/python3.4/site-packages/raven/utils/http.py", line 62, in urlopen
    return opener.open(url, data, timeout)
  File "/usr/local/lib/python3.4/urllib/request.py", line 461, in open
    response = meth(req, response)
  File "/usr/local/lib/python3.4/urllib/request.py", line 571, in http_response
    'http', request, response, code, msg, hdrs)
  File "/usr/local/lib/python3.4/urllib/request.py", line 499, in error
    return self._call_chain(*args)
  File "/usr/local/lib/python3.4/urllib/request.py", line 433, in _call_chain
    result = func(*args)
  File "/usr/local/lib/python3.4/urllib/request.py", line 579, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 413: Request Entity Too Large
ERROR:sentry.errors:Failed to submit message: "AttributeError: 'dict' object has no attribute 'encode'"
@dcramer
Member

dcramer commented Apr 29, 2015

Is this getsentry.com or are you hosting it yourself?

@pehrlich
Author

Getsentry.com


@dcramer
Member

dcramer commented Apr 29, 2015

This would mean the packet is very large. I think we should do something on our end around logging, but it's a little complicated. We definitely should log a reasonable version of the message (beyond just what we give you here). Unfortunately without changes to the client I don't have a great answer.

@pehrlich
Author

Hm, ok. What composes the packet? I am working with a number of large files; perhaps one is getting attached to the error report somehow, and we definitely wouldn't want or need that.

Happy to test out or make changes as advised.


@dcramer
Member

dcramer commented Apr 30, 2015

Closing this out as it was (potentially) resolved elsewhere

@dcramer dcramer closed this as completed Apr 30, 2015
@pehrlich
Author

nice, thank you

@jacobsvante
Contributor

It was? I'm still experiencing 413 responses from getsentry.com when POSTing a 5.3 MB payload.
Perhaps you could log something about the 413 in Sentry when this happens, so one knows an error occurred. It took some digging through the syslogs to find the error.

@dcramer
Member

dcramer commented Aug 27, 2015

@jmagnusson do you see this in the 5.6 release?

@jacobsvante
Contributor

Yup, still. Sentry responded with an error: HTTP Error 413: Request Entity Too Large (url: https://app.getsentry.com/api/.../store/)

@pehrlich
Author

So this is still an issue, although thanks to your help earlier/elsewhere, it is being logged sufficiently locally.

I believe a couple of actions should be taken:

  • Upon receiving a too-large entity, the server should log a proxy exception, alerting the user that an event happened which was un-capturable. This would prevent silent doom.
  • An API should be devised to allow automatic or selective truncation of large strings when errors are thrown. E.g., if I have an attribute error on a dictionary with a large string key, in many cases I don't need to know the contents of that key beyond perhaps the first 100 characters to indicate its further contents (see the sketch below).
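
A minimal sketch of what that second suggestion could look like on the caller's side today, assuming raven's documented Client and its captureException(extra=...) call; the truncate_strings helper, the placeholder DSN, and the 100-character cap are illustrative, not part of raven's API:

from raven import Client

MAX_CHARS = 100  # arbitrary cap for this sketch


def truncate_strings(value, max_chars=MAX_CHARS):
    """Recursively shorten long strings in nested dicts/lists/tuples."""
    if isinstance(value, str) and len(value) > max_chars:
        return value[:max_chars] + '... [truncated]'
    if isinstance(value, dict):
        return {k: truncate_strings(v, max_chars) for k, v in value.items()}
    if isinstance(value, (list, tuple)):
        return [truncate_strings(v, max_chars) for v in value]
    return value


client = Client('https://<key>@app.getsentry.com/<project>')  # placeholder DSN

huge_context = {'payload': 'x' * (5 * 1024 * 1024)}  # would blow past any limit
try:
    huge_context.encode  # deliberately raises the AttributeError from the report
except AttributeError:
    # Only the first 100 characters of each string travel to Sentry.
    client.captureException(extra=truncate_strings(huge_context))

The same trimming could be applied to tags or logging extras before they reach the SDK.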

@jhubert

jhubert commented Mar 9, 2016

Seeing this as well from the error dump of a complex elasticsearch query.

@jacobsvante
Contributor

It still hasn't been resolved?

@ruimarinho

Just got hit by this issue (not related to the python lib though). What's the maximum body size allowed by sentry?

@dcramer
Member

dcramer commented Apr 11, 2016

Most of the time this is due to the configuration of the server. If this is getsentry.com the request limit is extremely high and there's no way you should be hitting it. If you are, then it likely means that the client side variable trimming isn't working correctly, or has been re-configured to send data that is too large.
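
For context on the "client side variable trimming" mentioned here: if memory serves, raven exposes this through client options such as string_max_length and list_max_length, so tightening them is one way to rule out (or work around) a trimming problem. The option names and values below are assumptions to verify against your raven release:

from raven import Client

# Assumed raven client options (verify against your raven version): they cap
# individual string lengths and list lengths before the event is serialized.
client = Client(
    'https://<key>@app.getsentry.com/<project>',  # placeholder DSN
    string_max_length=400,
    list_max_length=50,
)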

@ruimarinho

This is getsentry.com, and the issue is that the metadata I've submitted is simply too large (it exceeds 1 MB), not an error in trimming. @pehrlich mentioned that the Sentry server could log an exception whenever a request containing a large body is discarded, but that would require processing the whole message to find out the id, which would defeat the original protection in place, unless, of course, the id can be obtained another way (query parameter?) while still discarding the offending body.

@pehrlich's second suggestion is in line with something I wanted to suggest too: let the client preemptively calculate the body size and remove certain information (e.g. all metadata?) so that at least the most relevant pieces still won't go unnoticed.
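
A rough sketch of that preemptive check, assuming the event is available as a plain dict before it is handed to the transport; the 1 MB ceiling and the drop order are illustrative, not raven behaviour:

import json

MAX_BODY_BYTES = 1024 * 1024  # assumed ceiling for this sketch


def shrink_event(event, max_bytes=MAX_BODY_BYTES):
    """Drop the heaviest optional sections until the serialized event fits."""
    for key in ('extra', 'breadcrumbs', 'request'):  # illustrative priority
        if len(json.dumps(event).encode('utf-8')) <= max_bytes:
            break
        event.pop(key, None)
    return event

The point being that the client chooses what to sacrifice, rather than the server silently rejecting the whole event.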

@evdoks

evdoks commented Jun 24, 2016

Any news here? We are still getting this error 413 with getsentry.com.

@jakubzitny

We were having the same problem on self-hosted Sentry. Of course, the problem was the request size limit in nginx.
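
For the self-hosted case, the fix usually lives in the nginx server block that proxies Sentry; the 20m value below is only an example, and the right ceiling depends on your installation:

server {
    # ... existing Sentry proxy configuration ...

    # Raise the request body limit so large events are not rejected with 413.
    client_max_body_size 20m;
}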

@joar
Contributor

joar commented Oct 5, 2016

We are experiencing this issue with raven==5.36.0, using raven.transport.http.HTTPTransport, on https://app.getsentry.com/api/32671/store/.

In our case, it also seems to be related to elasticsearch logging.

@jacobsvante
Contributor

Still getting request entity too large:

2017-10-06 04:02:45,810|sentry.errors|ERROR|Sentry responded with an error: HTTP Error 413: Request Entity Too Large (url: https://sentry.io/api/<id>/store/) (base.py:682)

Any chance this could get prioritized? I'm missing exceptions from some of our most important endpoints because of it.

@mitsuhiko
Contributor

I think we will need to do some better pruning for the breadcrumbs which I assume are triggering this in the case mentioned above.
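
One pruning approach that works without SDK changes is to cap log messages before they ever reach the breadcrumb recorder, since raven picks up breadcrumbs from the logging module. Whether such a filter runs before the SDK's hook depends on how the hook is installed, so treat this as a sketch; the 512-character cap and the 'elasticsearch' logger name are assumptions:

import logging


class TruncatingFilter(logging.Filter):
    """Cap formatted log messages so they stay small if they become breadcrumbs."""

    def __init__(self, max_chars=512):
        super(TruncatingFilter, self).__init__()
        self.max_chars = max_chars

    def filter(self, record):
        message = record.getMessage()
        if len(message) > self.max_chars:
            record.msg = message[:self.max_chars] + '... [truncated]'
            record.args = ()
        return True


logging.getLogger('elasticsearch').addFilter(TruncatingFilter())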

@jacobsvante
Contributor

@mitsuhiko Any hints as to when this might be fixed?

@mitsuhiko
Contributor

I'm going to take a look at this.

@geuben

geuben commented Nov 21, 2017

I am also experiencing this issue. The excessive size in my case is certainly coming from the breadcrumbs

@mitsuhiko
Contributor

(that would be sql queries etc. Just want to get an idea where to trim here)

@fcoury

fcoury commented Dec 7, 2017

I am running into the same issue on a React/Redux app:

Content-Length:183107

Is 183 KB considered too large?

@philtrep

philtrep commented Dec 7, 2017

@fcoury The limit seems to be 100 KB: https://docs.sentry.io/learn/quotas/#attributes-limits

@mitsuhiko mistag 😉 Currently investigating, should have more info later today or tomorrow.

@notatestuser

Having a request that is around 111KB dropped by sentry.io/api. That limit seems a little low - we simply log all of our redux actions and send them as breadcrumbs.

@whisller

whisller commented Jan 3, 2018

Experiencing same problem with raven==6.1.0 + raven-python-lambda==0.1.4.

Sentry responded with an error: HTTP Error 413: Request Entity Too Large (url: https://sentry.io/api/211040/store/)

@peterbe

peterbe commented Jan 25, 2018

I have raven==0.6.4 and Django 1.17. Saw these errors in my uwsgi log too. No idea what to do about it.
I injected a little logging into lib/python3.5/site-packages/raven/utils/http.py just to get an insight into what that data is. I still don't know what it is, other than that it's some binary blob. Its size hovers around 100 KB.
I'm using sentry.io.
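
If anyone else wants to peek inside one of these captured bodies: the b'x\x9c' prefix in the dump further down this thread is a zlib header, which suggests the transport sends zlib-compressed JSON. Assuming that holds for your raven version, something like this shows which top-level keys are inflating the event:

import json
import zlib


def inspect_payload(path):
    """Print the top-level event keys, biggest first, from a captured body."""
    with open(path, 'rb') as fh:
        event = json.loads(zlib.decompress(fh.read()).decode('utf-8'))
    for key, value in sorted(event.items(),
                             key=lambda kv: len(json.dumps(kv[1])),
                             reverse=True):
        print(key, len(json.dumps(value)))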

@jacobsvante
Contributor

This issue is almost 3 years old now. Will it ever be fixed?

I've always loved Sentry and felt confident in that it will notify us of any exceptions that occur in production, but this issue scuffs that feeling a bit.

@dcramer
Member

dcramer commented Feb 1, 2018

@jmagnusson it should be very rare that you'd see this, and it might be more commonly happening with on-premise installs since the max size is configurable. If you're seeing it on sentry.io, let us know and we can probably just duct-tape the packet size in the meantime.

@jacobsvante
Contributor

@dcramer We're using your hosted solution

@peterbe

peterbe commented Feb 1, 2018

@dcramer I'm using sentry.io and I'm getting it too.

@dcramer
Member

dcramer commented Feb 1, 2018

@jmagnusson @peterbe are you still experiencing this?

@peterbe

peterbe commented Feb 2, 2018

are you still experiencing this?

As of 8 days ago I was. See my previous comment.
I can provide you with more details, but in private, in case I do something silly that reveals too much.

@dcramer
Member

dcramer commented Feb 2, 2018

I meant as in right now. We made changes yesterday which should have reduced the impact.

@peterbe

peterbe commented Feb 2, 2018

@dcramer Yes, but because I don't know how to corner this, here's what I did:
(Note: I'm doing this on a side-project so attention is restricted)

I injected a print statement into site-packages/raven/utils/http.py, towards the end:

-   return opener.open(url, data, timeout)
+   try:
+        return opener.open(url, data, timeout)
+   except Exception:
+        print("EXCEPTION!!!!", url, url.get_full_url(), data[:1000])
+        raise

Then I hawkishly watch my uwsgi log file and I get this:

EXCEPTION!!!! <urllib.request.Request object at 0x7fd9103efb38> https://sentry.io/api/196160/store/ b'x\x9c\xec\xbdiw\xe2H\xb20\xfcW\xd4\xf5~p\xd5)\x97\xd9\xb7\xea;\xb7\x0f\x8b0xe\xf3\x82\xa7\xe6\xf4MP"\xc9\x08\t$
Sentry responded with an error: HTTP Error 413: Request Entity Too Large (url: https://sentry.io/api/196160/store/)
Traceback (most recent call last):
  File "/var/lib/django/songsearch/venv/lib/python3.5/site-packages/raven/transport/threaded.py", line 165, in send_sync
    super(ThreadedHTTPTransport, self).send(url, data, headers)
  File "/var/lib/django/songsearch/venv/lib/python3.5/site-packages/raven/transport/http.py", line 43, in send
    ca_certs=self.ca_certs,
  File "/var/lib/django/songsearch/venv/lib/python3.5/site-packages/raven/utils/http.py", line 67, in urlopen
    return opener.open(url, data, timeout)
  File "/usr/lib/python3.5/urllib/request.py", line 472, in open
    response = meth(req, response)
  File "/usr/lib/python3.5/urllib/request.py", line 582, in http_response
    'http', request, response, code, msg, hdrs)
  File "/usr/lib/python3.5/urllib/request.py", line 510, in error
    return self._call_chain(*args)
  File "/usr/lib/python3.5/urllib/request.py", line 444, in _call_chain
    result = func(*args)
  File "/usr/lib/python3.5/urllib/request.py", line 590, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 413: Request Entity Too Large
['OSError: write error']

@dcramer
Member

dcramer commented Feb 2, 2018

@peterbe any chance you could try to capture the data to a file and email that to david at sentry.io? This might be something where it's genuinely creating a gigantic packet. The SDK is supposed to limit the length of everything, but that's not always been true.

@dcramer
Member

dcramer commented Feb 2, 2018

As an aside, you can sign it with a PGP key if you're concerned: https://sentry.io/security/

@peterbe

peterbe commented Feb 2, 2018

@dcramer I don't think it needs to be signed. I've written a little hack now to generate these data files. I'll email you one if I get another exception. At the time of writing I haven't had one happen in a while.

@dcramer
Member

dcramer commented Feb 2, 2018

The problem in this specific example is that breadcrumbs are not truncating log messages. In this case there's an elasticsearch logger (not sure if it's yours or the official library), and an unbelievably large debug message which seems to be the full payload being inserted.
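
A pragmatic workaround until truncation is fixed, assuming the culprit is elasticsearch-py's own loggers ('elasticsearch' and 'elasticsearch.trace'): raise them above DEBUG so the raw request/response bodies are never logged and therefore never become breadcrumbs.

import logging

# Keep elasticsearch-py's full-payload debug lines out of the breadcrumbs by
# not logging them at all; adjust the logger names if your setup differs.
logging.getLogger('elasticsearch').setLevel(logging.WARNING)
logging.getLogger('elasticsearch.trace').setLevel(logging.WARNING)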

@peterbe

peterbe commented Feb 2, 2018

Stock Django 1.11 with no fancy logging specifically for elasticsearch_dsl. I appreciate that sometimes Elasticsearch barfs but there should still be an error sent to Sentry. I do use this little snippet for calling elasticsearch_dsl:

import time

from elasticsearch.exceptions import (
    ConnectionTimeout,
    ConnectionError,
    TransportError,
    NotFoundError,
)

_es_operational_exceptions = (
    ConnectionTimeout,
    ConnectionError,
    TransportError,
)


def es_retry(callable, *args, **kwargs):
    sleep_time = kwargs.pop('_sleep_time', 1)
    attempts = kwargs.pop('_attempts', 10)
    verbose = kwargs.pop('_verbose', False)
    try:
        return callable(*args, **kwargs)
    except _es_operational_exceptions as exception:
        if attempts:
            attempts -= 1
            if verbose:
                print("ES Retrying ({} {}) {}".format(
                    attempts,
                    sleep_time,
                    exception
                ))
            time.sleep(sleep_time)
            # Retry with the remaining attempts; without this call the
            # function would fall through and return None after sleeping.
            return es_retry(
                callable, *args,
                _sleep_time=sleep_time,
                _attempts=attempts,
                _verbose=verbose,
                **kwargs
            )
        else:
            raise

I don't see those ES Retrying prints in the uwsgi logs.

@dcramer
Member

dcramer commented Feb 2, 2018

@peterbe sorry, I should have been clear that I was intending to fix it, but the behavior (of using debug messages to log e.g. raw packets) is poor in general. ES certainly isn't alone in that.

@danieledapo

Any chance of deploying a new version of raven which includes the fix? This issue has been a real pain for us for a long time and we really want to try this fix.

@ashwoods
Contributor

ashwoods commented Feb 9, 2018

@d-dorazio It will be out by tomorrow.

@ryechus

ryechus commented May 5, 2020

I am still experiencing this issue even after updating to version 6.10.0.

@davidpaley

I am having this error working with Next.js and "@sentry/react" version 5.20.1 with the default configuration.

@Empact

Empact commented Nov 6, 2020

For reference, @davidpaley apparently solved his problem here: getsentry/sentry-javascript#2798
