
fix: cap uncompressed WebSocket frame size to prevent peer-induced OOM#63257

Open
mohammadmseet-hue wants to merge 1 commit into dart-lang:main from mohammadmseet-hue:fix-ws-uncompressed-frame-size-cap

Conversation

@mohammadmseet-hue

Summary

_WebSocketProtocolTransformer accepted arbitrary 64-bit frame lengths (RFC 6455 allows payloads up to 2⁶³ − 1 bytes per frame). The receive loop allocated payload bytes into _payload until either the declared length was reached or the local heap was exhausted. There was no cap.

A malicious peer can advertise a 200 MB BINARY frame, then stream the payload in 64 KB chunks. The Dart WebSocket client or server accumulates each chunk into _payload until the process is killed.
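To make the attack concrete, here is a minimal sketch of how such a frame header is built. The function name is ours; the wire layout (FIN/opcode byte, then a 127 marker followed by a 64-bit big-endian length, per RFC 6455 §5.2) is standard:

```python
import struct

def binary_frame_header(declared_len: int) -> bytes:
    """Build an unmasked server-to-client header for a BINARY frame, FIN=1."""
    fin_and_opcode = 0x80 | 0x02              # FIN=1, opcode 0x2 (BINARY)
    if declared_len < 126:
        return struct.pack("!BB", fin_and_opcode, declared_len)
    if declared_len < (1 << 16):
        # 126 marker: length follows as a 16-bit big-endian integer
        return struct.pack("!BBH", fin_and_opcode, 126, declared_len)
    # 127 marker: length follows as a 64-bit big-endian integer
    return struct.pack("!BBQ", fin_and_opcode, 127, declared_len)

header = binary_frame_header(200 * 1024 * 1024)  # declared len = 209715200
```

This 10-byte header alone commits an uncapped receiver to a 200 MB allocation; the peer can then trickle the payload in 64 KB chunks for as long as the receiver's heap holds out.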

Live PoC against dart:stable

Dart client running in a 256 MiB cgroup:

$ python3 ws_uncompressed_oom_server.py &
[server] handshake done; sending BINARY FIN=1 with declared len=209715200
[server] sent 0 KB so far
[server] sent 12800 KB so far
[server] sent 25600 KB so far
...
[server] sent 153600 KB so far
[server] connection closed at 158336 KB: Connection reset by peer

The Dart container was killed by the kernel OOM-killer at ~155 MB received, before it could log a single response. No authentication required; the client simply connected to a hostile WS server.

Why this is a separate bug from #63255

#63255 caps the inflated size coming out of the permessage-deflate filter, but only when compression is negotiated. This PR caps the on-wire frame size at the parser entry, which protects:

  • The uncompressed receive path (no permessage-deflate negotiated)
  • The compressed receive path BEFORE inflation begins (defense in depth)

Both fixes use the same 16 MiB ceiling so the two limits are consistent.

Fix

Add a _maxFrameLength = 16 MiB constant and reject any data frame whose declared length exceeds it inside _lengthDone(). The check runs after parsing the 64-bit length header and before any payload allocation, so an oversized frame costs nothing. Control frames are unaffected; they are already capped at 125 bytes upstream.

if (!_isControlFrame() && _len > _maxFrameLength) {
  throw WebSocketException(
    "Frame payload length $_len exceeds maximum $_maxFrameLength. "
    "Refusing to allocate to avoid heap exhaustion.",
  );
}

+27 / -0 lines.
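The behavior of the check can be mirrored in a few lines of Python (the 16 MiB constant, the control-frame exemption, and the error message follow the PR; the function name is illustrative):

```python
MAX_FRAME_LENGTH = 16 * 1024 * 1024   # 16 MiB = 16777216, matching #63255

def check_frame_length(declared_len: int, is_control: bool) -> None:
    # Control frames are already capped at 125 bytes by RFC 6455,
    # so only data frames need the new ceiling.
    if not is_control and declared_len > MAX_FRAME_LENGTH:
        raise ValueError(
            f"Frame payload length {declared_len} exceeds maximum "
            f"{MAX_FRAME_LENGTH}. Refusing to allocate."
        )

check_frame_length(16777216, is_control=False)        # at the cap: accepted
try:
    check_frame_length(209715200, is_control=False)   # the PoC's 200 MB frame
    rejected = False
except ValueError:
    rejected = True
```

Note the check is `>` rather than `>=`, so a frame exactly at the cap is still accepted.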

Why 16 MiB

  • Generous compared with typical WS message sizes (browser runtimes cap at a similar order of magnitude: Chrome's default is 4 MiB, Firefox's is 16 MiB).
  • Matches the existing decompression cap from #63255 ("fix: cap inflated permessage-deflate frame size to prevent DoS"), so both limits are consistent.
  • Peers that need larger messages can fragment them across multiple frames; the per-frame cap is independent of the cumulative message size (which is bounded only by application logic).
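The fragmentation escape hatch above can be sketched as follows. Only the FIN/opcode bookkeeping is shown; masking and socket I/O are omitted, and the function name is ours:

```python
MAX_FRAME_LENGTH = 16 * 1024 * 1024   # the per-frame cap from this PR

def fragment(payload: bytes, chunk: int = MAX_FRAME_LENGTH):
    """Yield (fin, opcode, data) triples for one BINARY message."""
    OPCODE_BINARY, OPCODE_CONTINUATION = 0x2, 0x0
    pieces = [payload[i:i + chunk] for i in range(0, len(payload), chunk)] or [b""]
    for i, data in enumerate(pieces):
        # First fragment carries the BINARY opcode, the rest are CONTINUATION;
        # only the last fragment sets FIN.
        opcode = OPCODE_BINARY if i == 0 else OPCODE_CONTINUATION
        fin = i == len(pieces) - 1
        yield fin, opcode, data

frames = list(fragment(bytes(40 * 1024 * 1024)))  # 40 MiB -> 3 frames
```

Each fragment's declared length stays at or under the cap, so the patched parser accepts the full message while the total message size remains bounded only by application logic.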

Test plan

  • PoC ws_uncompressed_oom_server.py reproduces OOM kill on unpatched dart:stable.
  • After the fix: the parser throws WebSocketException("Frame payload length 209715200 exceeds maximum 16777216. ...") immediately after parsing the 64-bit length header, before allocating a single payload byte; the connection's existing error path then closes the socket cleanly.

A regression test in tests/standalone/io/web_socket_protocol_test.dart can follow in a separate commit if the reviewer prefers.

@copybara-service

Thank you for your contribution! This project uses Gerrit for code reviews. Your pull request has automatically been converted into a code review at:

https://dart-review.googlesource.com/c/sdk/+/498460

Please wait for a developer to review your code review at the above link; you can speed up the review if you sign into Gerrit and manually add a reviewer that has recently worked on the relevant code. See CONTRIBUTING.md to learn how to upload changes to Gerrit directly.

Additional commits pushed to this PR will update both the PR and the corresponding Gerrit CL. After the review is complete on the CL, your reviewer will merge the CL (automatically closing this PR).

@copybara-service

Gerrit CL has build or test failures, please review them in Gerrit and fix them before requesting another review.
