fix(http): prevent CPU spin on SSL connection pool idle sockets #25449
Conversation
When SSL connections are released to the keep-alive pool, they remain registered for READABLE events. This can cause 100% CPU usage when:
1. SSL has buffered data in internal OpenSSL buffers
2. The server sends unexpected data on the pooled connection

This fix:
- Pauses readable events on sockets when they are released to the pool
- Resumes readable events when a socket is reused from the pool
- Closes pooled sockets that receive unexpected data instead of ignoring it

Fixes oven-sh#25430

Co-Authored-By: Claude <[email protected]>
Walkthrough
Pause pooled SSL socket streams when returning them to the keep-alive pool and resume them before reuse; adjust pooled-socket onData handling. Add regression tests that spawn Bun processes to verify CPU remains low after HTTPS fetches.
Actionable comments posted: 1
📒 Files selected for processing (2)
- src/http/HTTPContext.zig (3 hunks)
- test/regression/issue/025430.test.ts (1 hunks)
🔇 Additional comments (6)
src/http/HTTPContext.zig (3)
331-334: Improved error messaging. The updated comment and log message more clearly explain why unexpected data on a pooled socket is problematic and warrants termination.
433-435: Correct resumption of socket events before reuse. The symmetric resume operation properly re-enables readable events when the socket is taken from the pool. The placement after validation checks (closed/shutdown/error) is correct.
192-195: Excellent fix for the CPU spin issue. The pause/resume pattern correctly addresses the root cause. By pausing readable events on pooled sockets, you prevent the event loop from continuously invoking the handler when SSL has buffered data or when unexpected data arrives.
Verify that the pause/resume operations handle edge cases correctly:
- Confirm pauseStream and resumeStream return types and whether discarding their return values is safe in this context
- Check if other usages of these methods handle potential errors from pause/resume operations
- Verify socket state consistency when pause/resume operations are called in rapid succession
test/regression/issue/025430.test.ts (3)
19-22: Test scenario appropriately triggers the bug. The parallel HTTPS fetches followed by a settling period correctly reproduce the conditions that trigger the CPU spin issue. The 100ms wait allows connections to enter the pool where they would previously cause busy-polling.
Also applies to: 39-40
42-59: CPU measurement implementation is correct. The CPU percentage calculation properly measures user + system time over the elapsed period. The 20% threshold is reasonable for detecting the ~100% CPU bug, though it might be overly strict for noisy test environments.

Consider making the threshold configurable via an environment variable for CI environments that might have higher baseline CPU noise:

```ts
const CPU_THRESHOLD = parseFloat(process.env.BUN_TEST_CPU_THRESHOLD ?? "20");
if (cpuPercent >= CPU_THRESHOLD) {
  console.error(`CPU usage too high: ${cpuPercent.toFixed(2)}%`);
  process.exit(1);
}
```

⛔ Skipped due to learnings
7-83: Test structure and guidelines compliance are correct, but network dependency and CPU threshold warrant manual validation. The test properly uses bunExe(), bunEnv, await using, and checks exit codes correctly per coding guidelines. However, two reliability factors deserve consideration:

- External network dependency: The test depends on https://example.com being accessible. Network issues or DNS failures could cause false failures in CI.
- CPU threshold sensitivity: The 20% threshold is intentionally conservative compared to ~100% before the fix, but may still be sensitive to highly loaded machines or CI resource constraints. Verify this threshold is appropriate in your testing environment.
The fixed timing (100ms settle, 500ms measure) is necessary for CPU measurement and not inherently problematic.
| test("Multiple sequential requests should not accumulate CPU usage", async () => { | ||
| await using proc = Bun.spawn({ | ||
| cmd: [ | ||
| bunExe(), | ||
| "-e", | ||
| ` | ||
| const http = require("node:http"); | ||
|
|
||
| const server = http.createServer(async (req, res) => { | ||
| await fetch("https://example.com"); | ||
| res.writeHead(200); | ||
| res.end("ok"); | ||
| }); | ||
|
|
||
| server.listen(0, async () => { | ||
| const port = server.address().port; | ||
|
|
||
| // Make multiple sequential requests | ||
| for (let i = 0; i < 3; i++) { | ||
| const response = await fetch("http://localhost:" + port); | ||
| await response.text(); | ||
| } | ||
|
|
||
| // Wait for connections to settle | ||
| await Bun.sleep(100); | ||
|
|
||
| // Measure CPU usage | ||
| const startUsage = process.cpuUsage(); | ||
| await Bun.sleep(500); | ||
| const endUsage = process.cpuUsage(startUsage); | ||
|
|
||
| const totalCpuTime = endUsage.user + endUsage.system; | ||
| const elapsedMicros = 500 * 1000; | ||
| const cpuPercent = (totalCpuTime / elapsedMicros) * 100; | ||
|
|
||
| server.close(); | ||
|
|
||
| if (cpuPercent >= 20) { | ||
| console.error("CPU usage too high after sequential requests: " + cpuPercent.toFixed(2) + "%"); | ||
| process.exit(1); | ||
| } | ||
|
|
||
| console.log("CPU usage after sequential requests: " + cpuPercent.toFixed(2) + "% (OK)"); | ||
| process.exit(0); | ||
| }); | ||
| `, | ||
| ], | ||
| env: bunEnv, | ||
| stdout: "pipe", | ||
| stderr: "pipe", | ||
| }); | ||
|
|
||
| const [stdout, stderr, exitCode] = await Promise.all([ | ||
| new Response(proc.stdout).text(), | ||
| new Response(proc.stderr).text(), | ||
| proc.exited, | ||
| ]); | ||
|
|
||
| if (exitCode !== 0) { | ||
| console.log("stdout:", stdout); | ||
| console.log("stderr:", stderr); | ||
| } | ||
|
|
||
| expect(exitCode).toBe(0); | ||
| }); |
This test is flaky and violates Bun testing guidelines.
The test measures CPU usage during an arbitrary 500ms sleep period, which inherently makes it unreliable across different systems and loads. Per the coding guidelines: "Never wait for time to pass in tests; always wait for the condition to be met instead of using an arbitrary amount of time." This test uses arbitrary time windows (100ms wait + 500ms measurement) instead of waiting for an actual condition.
Consider refactoring to:
- Test concrete behavior (e.g., connection cleanup, memory release) rather than CPU metrics
- Wait for actual conditions (e.g., resource state changes) instead of arbitrary sleep periods
- If CPU measurement is essential, use a much larger measurement window and conservative threshold to reduce flakiness, or skip this test in CI environments where system load is unpredictable
Additionally, log stdout before the expect(exitCode) check for better debugging output on failure.
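As a rough illustration of the condition-based waiting suggested above, something along these lines could replace the fixed sleeps; the `pooledConnectionsAreIdle` probe is hypothetical and would need to be backed by a real observable (e.g., socket or pool state):

```ts
// Sketch: poll for an observable condition instead of sleeping a fixed amount.
// `isSettled` is a placeholder predicate, not a real Bun API.
async function waitFor(isSettled: () => boolean, intervalMs = 10): Promise<void> {
  while (!isSettled()) {
    // Bun's test runner provides the overall timeout, so no explicit deadline here.
    await Bun.sleep(intervalMs);
  }
}

// Usage inside the fixture, instead of `await Bun.sleep(100)`:
// await waitFor(() => pooledConnectionsAreIdle());
```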
I like this idea. Will we still receive EOF?
Actionable comments posted: 1
♻️ Duplicate comments (2)
test/regression/issue/025430.test.ts (2)
39-59: CPU-usage assertions with fixed sleeps make this regression test flaky and violate test guidelines. Both tests gate success on CPU usage measured over Bun.sleep(100) + Bun.sleep(500) windows with a hard < 20% threshold. This is inherently sensitive to machine load, virtualization, and host differences, and it directly contradicts the Bun testing rule: "Do not write flaky tests. Never wait for time to pass in tests; always wait for the condition to be met instead of using an arbitrary amount of time."

Please refactor these to assert a deterministic condition instead of short time-based CPU sampling (e.g., assert specific socket/connection behavior, or a more structural property of the HTTP/SSL pool) or otherwise redesign the regression so it doesn't depend on host CPU noise. If a CPU-based check is absolutely unavoidable, it should at least be guarded (e.g., skipped in CI) and use a far more conservative window/threshold, but that should be a last resort. Based on learnings, this pattern is not acceptable for normal Bun tests.

Also applies to: 108-125
71-83: Consider asserting on stdout before exit code for clearer failures. Right now you only assert expect(exitCode).toBe(0) and print stdout/stderr conditionally on failure. Per the Bun test guidelines, it's often more helpful to assert something about stdout first (e.g., expect(stdout).toContain("(OK)")) and then assert the exit code, so failures immediately show a more descriptive message and ensure the fixture actually executed the expected path.

Not strictly required for correctness, but it would make this regression easier to debug when it fails. A minimal sketch follows.

Also applies to: 137-149
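For example, reordering the assertions like this (assuming the fixture prints "(OK)" on success, as the inline scripts above do):

```ts
// Assert on output first: a mismatch prints the fixture's actual stdout,
// which is far more useful than a bare non-zero exit code.
expect(stdout).toContain("(OK)");
// Then confirm the process exited cleanly.
expect(exitCode).toBe(0);
```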
📒 Files selected for processing (1)
- test/regression/issue/025430.test.ts (1 hunks)
```ts
await Promise.all([
  fetch("https://example.com"),
  fetch("https://example.com"),
]);
```
🧹 Nitpick | 🔵 Trivial
External HTTPS dependency (https://example.com) harms test reliability
Both inline fixtures perform real HTTPS fetches to https://example.com. That couples this regression test to external network availability, DNS, and third-party behavior, which can easily cause intermittent CI failures unrelated to Bun’s behavior.
Where possible, prefer a self-contained fixture (e.g., a local TLS server or Bun.serve-based HTTPS endpoint with a test certificate) so the test remains hermetic and only exercises Bun’s SSL connection pooling rather than the public internet.
Also applies to: 94-97
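A minimal sketch of what a hermetic fixture could look like, assuming a self-signed test certificate exists at the hypothetical paths `cert.pem`/`key.pem`:

```ts
// Local HTTPS endpoint so the regression test never leaves the machine.
using server = Bun.serve({
  port: 0, // random port, per the test guidelines
  tls: {
    cert: Bun.file("cert.pem"), // hypothetical fixture paths
    key: Bun.file("key.pem"),
  },
  fetch() {
    return new Response("ok");
  },
});

// Exercise Bun's SSL connection pooling against the local server.
// Bun's fetch accepts a tls.rejectUnauthorized option for self-signed certs.
const res = await fetch(`https://localhost:${server.port}`, {
  tls: { rejectUnauthorized: false },
});
console.log(await res.text()); // "ok"
```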
Good question! Looking at us_socket_pause:

```c
void us_socket_pause(int ssl, struct us_socket_t *s) {
    // we are readable and writable so we can just pause readable side
    us_poll_change(&s->p, s->context->loop, LIBUS_SOCKET_WRITABLE);
    s->flags.is_paused = 1;
}
```

No, EOF will not be received while paused - the socket only listens for LIBUS_SOCKET_WRITABLE events. This means if the server closes the connection while the socket is pooled, we won't detect it until the socket is reused.
However, when a socket is taken back out of the pool, we already validate it:

```zig
if (http_socket.isClosed()) {
    markSocketAsDead(http_socket);
    continue;
}

if (http_socket.isShutdown() or http_socket.getError() != 0) {
    terminateSocket(http_socket);
    continue;
}
```

So stale connections will be detected and cleaned up when we try to reuse them. The only downside is we don't proactively clean them from the pool when they're closed remotely. Options to improve this:
What's your preference? I think option 1 is acceptable since we already have validation on reuse, and the 5-minute timeout provides a safety net.
Summary
Fixes #25430 - 100% CPU usage after HTTPS fetch requests in a node:http server.

Root Cause

When SSL connections are released to the keep-alive connection pool, they remain registered for READABLE events. This causes CPU spin when:
1. SSL has buffered data in internal OpenSSL buffers
2. The server sends unexpected data on the pooled connection

The event loop continuously triggers the onData handler, which previously just ignored the data and returned, causing a busy-polling loop.

Changes
- Pause readable events when a socket is released to the pool (releaseSocket)
- Resume readable events when a socket is reused from the pool (existingSocket)
- Close pooled sockets that receive unexpected data instead of ignoring it
test/regression/issue/025430.test.tsbun test test/regression/issue/025430.test.tsbun test test/js/bun/http/serve.test.ts -t "keepalive"