fix(js): watchdog + propagates context to all JS library network calls #7299

Merged
dwisiswant0 merged 4 commits into dev from 6894-fix-js-pool-starvation on Apr 1, 2026

Conversation

@Mzack9999 (Member) commented Mar 24, 2026

Fixes #6894

Replaces #6896 - created a new branch since maintainer push wasn't enabled on the original PR.

Includes the pool slot starvation fix from @AuditeMarlow (watchdog goroutine + AddWithContext) plus context propagation to all JS library network calls so zombies actually get killed when the deadline fires.

Changes:

  • Watchdog goroutine releases pool slots on deadline expiry even if the worker is still stuck
  • AddWithContext replaces Add so slot acquisition respects the deadline too
  • Propagated the execution context to all JS protocol libraries (redis, smb, vnc, telnet, rdp, ldap, mysql, mssql, pop3, postgres, oracle, kerberos, smtp) replacing context.TODO() in network calls
  • Added NucleiJS.Context() helper to retrieve the deadline context from the goja runtime
  • Updated memogen template to exclude context.Context params from cache hash by type
  • Added unit tests for pool starvation and watchdog recovery

Summary by CodeRabbit

  • New Features

    • Execution paths and network/database probes now honor caller contexts and deadlines end-to-end.
    • Watchdog-based deadline handling added to promptly recover pooled resources when deadlines expire.
    • Runtime context retrieval available so embedded scripts can surface their execution context.
    • Memoization now omits the runtime context value from cache keys to reduce unnecessary cache churn.
  • Tests

    • Added tests covering deadline handling, watchdog recovery, and pool starvation/recovery.

AuditeMarlow and others added 2 commits March 24, 2026 23:17
Zombie goroutines from timed-out JS executions held pool slots
indefinitely: Add() blocked with context.Background(), and defer Done()
only ran when the goroutine eventually completed. Under load, both
pools (80 pooled + 20 non-pooled slots) filled with zombies, silently
dropping all subsequent matches.

Three changes fix slot lifecycle management:

1. Propagate the 20s deadline context into ExecuteProgram (compiler.go)
   so both execution paths can respect the deadline.

2. Replace Add() with AddWithContext(ctx) in both pool.go and
   non-pool.go so goroutines waiting for a slot fail fast when the
   deadline expires instead of blocking indefinitely.

3. Add a watchdog goroutine that releases the slot when the deadline
   expires, even if the zombie is still running. An atomic.Bool
   ensures exactly one Done() call between the watchdog and the
   normal defer path.
neo-by-projectdiscovery-dev (Bot) commented Mar 24, 2026

Neo - PR Security Review

No security issues found

Highlights

  • Implements watchdog goroutine pattern to release pool slots on deadline expiry, preventing resource exhaustion
  • Propagates execution context to all JS protocol libraries (redis, ldap, mysql, mssql, postgres, oracle, kerberos, smtp, telnet, rdp, vnc, smb, pop3) to enable proper cancellation
  • Uses atomic.Bool for slot release synchronization to prevent double-free race conditions
  • Updates memogen template to exclude context.Context from cache hash while preserving executionId-based cache isolation
Hardening Notes
  • Consider adding checked type assertions for ctx.Value("executionId").(string) with explicit error handling in protocol libraries (currently panics if context is malformed, though this should not occur during normal operation)
  • The NucleiJS.Context() fallback to context.TODO() at nucleijs.go:58-65 is defensive but could be logged at debug level to catch unexpected nil context scenarios during development
  • In pool.go:167-172 and non-pool.go:40-45, consider adding a defer recover() handler around the watchdog goroutine to log unexpected panics during slot release, improving observability


coderabbitai (Bot, Contributor) commented Mar 24, 2026

Walkthrough

Propagates execution contexts through JS compiler and many protocol libraries, makes pool slot acquisition deadline-aware with watchdog-based slot release, updates memoization template to ignore Context parameters and ensure trailing newline, adds pool starvation and NucleiJS Context tests, and introduces pgwrap OpenDB and context-aware dialing.

Changes

Cohort / File(s) Summary
Memoization Template
cmd/memogen/function.tpl
Exclude parameters typed exactly as &{context Context} from memoization key construction; ensure template ends with a newline.
JS Compiler Core & Pooling
pkg/js/compiler/compiler.go, pkg/js/compiler/non-pool.go, pkg/js/compiler/pool.go
Inject deadline-bearing ctx into opts, use AddWithContext(opts.Context) for slot acquisition, add watchdog goroutines plus done channel and atomic.Bool to guarantee a single Done() release and to free slots on context deadline; inject "ctx" into JS runtime context.
Pool Tests
pkg/js/compiler/pool_starvation_test.go, pkg/js/compiler/pool_test.go
New tests covering pool starvation, deadline-respecting acquisition, and watchdog-driven slot recovery and recovery scenarios.
NucleiJS Context API & Tests
pkg/js/utils/nucleijs.go, pkg/js/utils/nucleijs_test.go
Add NucleiJS.Context() to extract runtime ctx or fallback to context.Background(); tests validate fallback and runtime-backed behavior.
pgwrap / Postgres helper
pkg/js/utils/pgwrap/pgwrap.go, pkg/js/libs/postgres/...
Add OpenDB(ctx,executionId,dsn); make pg dialer and DB open/ping/query use provided contexts; propagate ctx through Postgres memoized helpers and internals.
Protocol libraries — broad context propagation
pkg/js/libs/.../memo.*.go, pkg/js/libs/.../*.go where ... ∈ {mssql, mysql, oracle, pop3, redis, rdp, smb, telnet, vnc, rsync, kerberos, ldap, smtp}
Many memoized helpers gained a ctx context.Context parameter; internal helpers and network calls switched from context.TODO()/Background() to caller-provided contexts (often ctx or c.nj.Context()), plus related import adjustments.
SMB specifics
pkg/js/libs/smb/...
Propagate ctx through memoized SMB helpers, SMBv2 metadata collection, SMBGhost detection, and use ctx for dialing/negotiation.
MySQL / MSSQL / Oracle specifics
pkg/js/libs/mysql/*, pkg/js/libs/mssql/*, pkg/js/libs/oracle/*
Memoized helpers accept ctx; connect/ping/query and dial operations use ctx; Oracle dialer stores ctx and connector dialer refreshed per call.
Redis specifics
pkg/js/libs/redis/*
Memoized redis helpers accept ctx; client Ping/Info/Eval and RunLuaScript switched to context-aware calls.
Small dial context fixes
pkg/js/libs/kerberos/sendtokdc.go, pkg/js/libs/ldap/ldap.go, pkg/js/libs/smtp/smtp.go
Replace context.TODO() with kclient.nj.Context() / c.nj.Context() so dialing uses runtime-provided context.

Sequence Diagram(s)

sequenceDiagram
    participant Scheduler as Caller / Scheduler
    participant Compiler as JS Compiler
    participant Ctx as Deadline Context
    participant Pool as Pool (ephemeraljsc / pooljsc)
    participant Runtime as JS Runtime
    participant Watchdog as Watchdog Goroutine
    participant Network as Network Operation

    Scheduler->>Compiler: request execution (with timeout)
    Compiler->>Ctx: derive deadline-bearing ctx
    Compiler->>Pool: AddWithContext(ctx)
    alt acquisition succeeds
        Pool->>Compiler: slot acquired
        Compiler->>Runtime: inject "executionId" and "ctx"
        Compiler->>Network: perform ctx-aware network calls
        Note right of Network: network operations observe ctx.Done()
        Network-->>Compiler: return / error
        Compiler->>Pool: normal release (deferred Done via atomic CAS)
    else acquisition fails
        Pool-->>Compiler: return deadline error (fail-fast)
    end
    par Watchdog lifecycle
        Watchdog->>Ctx: monitor ctx.Done()
        Ctx->>Watchdog: deadline fires
        Watchdog->>Pool: Done() if slot still held (atomic CAS)
    and Cleanup
        Compiler->>Watchdog: signal done to stop
        Compiler->>Pool: deferred Done() (atomic CAS prevents double)
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes

Poem

🐰
Slots once held by zombies late at night,
Now watchdogs wake to free them with a bite.
Contexts woven into every dial and call,
Matches find their way back home from the sprawl.
🥕

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning — docstring coverage is 60.76%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.

✅ Passed checks (4 passed)

  • Description Check ✅ — check skipped; CodeRabbit’s high-level summary is enabled.
  • Linked Issues check ✅ — all coding requirements from issue #6894 are fully implemented: context propagation to pool slot acquisition (AddWithContext), watchdog goroutines for deadline-triggered slot release, execution context threading through 16 JS library protocol files, and memoization template updates to exclude context parameters.
  • Out of Scope Changes check ✅ — all changes directly address the root causes identified in #6894: pool slot starvation via watchdog and AddWithContext, zombie goroutine mitigation through context propagation in network calls, and supporting infrastructure (NucleiJS.Context helper, memogen template updates, comprehensive tests).
  • Title check ✅ — the PR title accurately summarizes the main changes: implementing a watchdog mechanism for pool slot management and propagating execution context to JS library network calls.





coderabbitai (Bot) left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
pkg/js/libs/postgres/postgres.go (1)

122-147: ⚠️ Potential issue | 🟠 Major

Use QueryContext to pass context to the SQL query.

Line 139 still calls db.Query(query) instead of db.QueryContext(ctx, query). The context parameter in the function signature is not used for the actual query execution, which prevents cancellation and deadline propagation from being applied to the SQL operation.

💡 Proposed fix
-	rows, err := db.Query(query)
+	rows, err := db.QueryContext(ctx, query)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/js/libs/postgres/postgres.go` around lines 122 - 147, In executeQuery,
the provided ctx is never used for the SQL operation—replace the call to
db.Query(query) with db.QueryContext(ctx, query) so the query honors
cancellation/deadlines; locate the db variable opened via sql.Open
(PGWrapDriver) and change the query invocation to use QueryContext with the same
ctx and query string, then ensure the returned rows (rows) are properly closed
after use (defer rows.Close()) and propagate any error handling as before.
pkg/js/libs/smb/smb.go (1)

119-155: ⚠️ Potential issue | 🟠 Major

listShares loses context cancellation after the TCP dial.

Line 129 respects ctx for the initial Fastdialer connection, but the raw net.Conn is handed to SMB operations (lines 143 and 151) without any deadline enforcement. If the SMB session setup or share enumeration stalls, the goroutine can survive past the execution deadline.

The connection's deadline must be set from ctx before SMB operations begin:

Proposed fix
 	conn, err := dialer.Fastdialer.Dial(ctx, "tcp", fmt.Sprintf("%s:%d", host, port))
 	if err != nil {
 		return nil, err
 	}
+	if deadline, ok := ctx.Deadline(); ok {
+		_ = conn.SetDeadline(deadline)
+	}
 	defer func() {
 		_ = conn.Close()
 	}()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/js/libs/smb/smb.go` around lines 119 - 155, listShares currently uses
dialer.Fastdialer.Dial with ctx but then hands the raw net.Conn to SMB calls
without applying the context deadline; fix by enforcing ctx on the conn after
Dial: get the deadline via deadline, ok := ctx.Deadline() and if ok call
conn.SetDeadline(deadline) (and optionally clear/reset after SMB ops with
conn.SetDeadline(time.Time{})); if there is no deadline, start a goroutine that
closes conn on ctx.Done() to ensure cancellation; apply this before calling
d.Dial(conn) and before calling s.ListSharenames() (symbols: listShares,
dialer.Fastdialer.Dial, conn, smb2.Dialer (d.Dial), s.ListSharenames) so SMB
setup/enumeration cannot outlive the context.
pkg/js/libs/telnet/telnet.go (1)

183-193: ⚠️ Potential issue | 🟠 Major

Info and GetTelnetNTLMInfo ignore the remaining execution deadline after connect.

Dial(ctx, ...) only covers connection setup. Info immediately hands the socket to telnetmini.DetectEncryption(conn, 7*time.Second), which does not accept a context parameter and sets a fixed deadline internally. GetTelnetNTLMInfo resets the deadline to a fresh 10-second window. If the parent execution has less time remaining, both paths can keep the worker alive after ctx.Done(). Set the socket deadline to min(ctx.Deadline(), now+opTimeout) before the helper/manual I/O, or extend DetectEncryption to accept a context parameter.

Possible fix
+ deadline := time.Now().Add(7 * time.Second)
+ if ctxDeadline, ok := ctx.Deadline(); ok && ctxDeadline.Before(deadline) {
+ 	deadline = ctxDeadline
+ }
+ if err := conn.SetDeadline(deadline); err != nil {
+ 	return TelnetInfoResponse{}, err
+ }
  encryptionInfo, err := telnetmini.DetectEncryption(conn, 7*time.Second)
- _ = conn.SetDeadline(time.Now().Add(10 * time.Second))
+ deadline := time.Now().Add(10 * time.Second)
+ if ctxDeadline, ok := ctx.Deadline(); ok && ctxDeadline.Before(deadline) {
+ 	deadline = ctxDeadline
+ }
+ if err := conn.SetDeadline(deadline); err != nil {
+ 	return nil, err
+ }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/js/libs/telnet/telnet.go` around lines 183 - 193, The Info and
GetTelnetNTLMInfo flows ignore the parent ctx deadline because
telnetmini.DetectEncryption takes a fixed timeout instead of a context; fix by
setting the connection deadline explicitly before calling DetectEncryption or
doing manual I/O: compute the deadline as the minimum of ctx.Deadline() (if
present) and time.Now().Add(opTimeout) (use 7s in Info and 10s in
GetTelnetNTLMInfo or unify to a constant), then call
conn.SetDeadline(minDeadline) so the socket operations respect the overall ctx
timeout; ensure you restore/clear deadlines if further I/O needs a different
deadline or close the conn on ctx cancellation.
🧹 Nitpick comments (3)
pkg/js/libs/mssql/mssql.go (1)

60-60: Minor: ctx parameter accepted but unused in connect.

The ctx parameter is added for memoization key exclusion but isn't used within the function. The SQL operations rely on the connection string's connection+timeout=30 parameter instead. This is acceptable given that the critical dial path in isMssql now uses the context.

Consider using db.ExecContext(ctx, "select 1") instead of db.Exec("select 1") on line 85 to respect the execution deadline during connection validation:

♻️ Optional improvement
-	_, err = db.Exec("select 1")
+	_, err = db.ExecContext(ctx, "select 1")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/js/libs/mssql/mssql.go` at line 60, The connect function currently
accepts ctx but doesn't use it when validating the DB connection; update the
validation call in connect (the db.Exec("select 1") invocation) to use
db.ExecContext(ctx, "select 1") so the execution respects the provided
context/deadline; this change targets the connect function and the specific Exec
-> ExecContext replacement for connection validation.
pkg/js/compiler/pool_starvation_test.go (1)

31-48: Make these starvation tests synchronize on state changes instead of sleeping.

Right now the assertions depend on time.Sleep(100 * time.Millisecond) being enough for every zombie to grab a slot and time.Sleep(200 * time.Millisecond) being enough for every watchdog to free it. That is brittle on busy CI runners, and the 10-second sleeper goroutines keep running after the test returns. A ready/released channel (or require.Eventually) plus t.Cleanup to unblock/join the workers would make these tests deterministic.

Also applies to: 75-121

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/js/compiler/pool_starvation_test.go` around lines 31 - 48, Replace the
fragile time.Sleep synchronization in the starvation tests (the zombie
goroutines that call pool.Add()/pool.Done() inside the
executeWithoutPooling/executeWithPoolingProgram simulation) with explicit
synchronization: have each zombie signal when it has acquired the slot via a
"ready" channel or WaitGroup and block on a release channel that the test
controls; use require.Eventually or channel receives to wait deterministically
for all zombies to be ready instead of Sleep(100*time.Millisecond), and after
assertions send on the release channel and use t.Cleanup to close/unblock and
join remaining goroutines (or wait on zombieWg) so the 10s sleepers don't
outlive the test; apply the same pattern to the watchdog-release checks (replace
Sleep(200*time.Millisecond) with waiting on a "released" notification).
pkg/js/compiler/pool_test.go (1)

71-81: Wait on the actual slot-release event here instead of fixed sleeps.

time.Sleep(20 * time.Millisecond) and time.Sleep(200 * time.Millisecond) are just guesses about scheduler latency, so these tests can fail on slow runners even when the pool logic is correct. Have the watchdog path signal a channel after pool.Done() (or assert with require.Eventually) and wait on that instead.

Also applies to: 130-141

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/js/compiler/pool_test.go` around lines 71 - 81, The test uses fixed
sleeps to wait for the watchdog to release a slot which is flaky; change the
test to wait for an explicit release event instead of sleeping by having the
watchdog path signal a channel (or use require.Eventually) when it calls
pool.Done(), then replace both time.Sleep(20*time.Millisecond) and the long
context timeout guessing with a blocking wait on that channel (or an Eventually
assertion) before attempting pool.AddWithContext/freshCtx and calling
pool.Done(); reference the pool.AddWithContext and pool.Done calls and the
watchdog release signal in your change so the test deterministically waits for
the actual slot-release event.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@pkg/js/libs/rdp/rdp.go`:
- Around line 212-215: In checkRDPEncryption, after Dial returns inside the
protocol loops, set the connection deadline using dialCtx.Deadline() (e.g.,
conn.SetDeadline(dialCtx.Deadline())) before calling testRDPProtocol or
testRDPCipher so post-dial Read/Write cannot block past the context; also remove
the per-iteration defer cancel() and defer conn.Close() and instead call
cancel() and conn.Close() explicitly at the end of each loop iteration to ensure
timely cleanup; update both loops that invoke dialer.Fastdialer.Dial and then
call testRDPProtocol/testRDPCipher to follow this pattern.


ℹ️ Review info — Configuration: Organization UI · Review profile: CHILL · Plan: Pro · Run ID: 9b614c7e-59cd-4218-bf0d-8235c992d6c9

📥 Commits

Reviewing files that changed from the base of the PR and between c8a9853 and 455e2a9.

📒 Files selected for processing (38)
  • cmd/memogen/function.tpl
  • pkg/js/compiler/compiler.go
  • pkg/js/compiler/non-pool.go
  • pkg/js/compiler/pool.go
  • pkg/js/compiler/pool_starvation_test.go
  • pkg/js/compiler/pool_test.go
  • pkg/js/libs/kerberos/sendtokdc.go
  • pkg/js/libs/ldap/ldap.go
  • pkg/js/libs/mssql/memo.mssql.go
  • pkg/js/libs/mssql/mssql.go
  • pkg/js/libs/mysql/memo.mysql.go
  • pkg/js/libs/mysql/memo.mysql_private.go
  • pkg/js/libs/mysql/mysql.go
  • pkg/js/libs/mysql/mysql_private.go
  • pkg/js/libs/oracle/memo.oracle.go
  • pkg/js/libs/oracle/oracle.go
  • pkg/js/libs/oracle/oracledialer.go
  • pkg/js/libs/pop3/memo.pop3.go
  • pkg/js/libs/pop3/pop3.go
  • pkg/js/libs/postgres/memo.postgres.go
  • pkg/js/libs/postgres/postgres.go
  • pkg/js/libs/rdp/memo.rdp.go
  • pkg/js/libs/rdp/rdp.go
  • pkg/js/libs/redis/memo.redis.go
  • pkg/js/libs/redis/redis.go
  • pkg/js/libs/smb/memo.smb.go
  • pkg/js/libs/smb/memo.smb_private.go
  • pkg/js/libs/smb/memo.smbghost.go
  • pkg/js/libs/smb/smb.go
  • pkg/js/libs/smb/smb_private.go
  • pkg/js/libs/smb/smbghost.go
  • pkg/js/libs/smtp/smtp.go
  • pkg/js/libs/telnet/memo.telnet.go
  • pkg/js/libs/telnet/telnet.go
  • pkg/js/libs/vnc/memo.vnc.go
  • pkg/js/libs/vnc/vnc.go
  • pkg/js/utils/nucleijs.go
  • pkg/js/utils/pgwrap/pgwrap.go

Comment thread pkg/js/libs/rdp/rdp.go
…7302)

* refactor(js): use `context.Background` as default instead

Signed-off-by: Dwi Siswanto <git@dw1.io>

* refactor(js): derived the ctx to remaining tractable deadline leaks

Signed-off-by: Dwi Siswanto <git@dw1.io>

* test(js): add `NucleiJS.Context` tests

Signed-off-by: Dwi Siswanto <git@dw1.io>

* fix(cmd): context param exclusion in memoization hash

The memoization template condition for excluding
context parameters from hash keys was incorrect.
The memoize package represents context.Context
types as "&{context Context}" (AST string
representation), not "context.Context".

Signed-off-by: Dwi Siswanto <git@dw1.io>

* chore(js): memogen'ed

Signed-off-by: Dwi Siswanto <git@dw1.io>

---------

Signed-off-by: Dwi Siswanto <git@dw1.io>
coderabbitai (Bot) left a comment


🧹 Nitpick comments (1)
pkg/js/libs/rsync/rsync.go (1)

111-127: Pass context to listModules and listFilesInModule helper functions.

The IsRsync method correctly propagates context through the entire call chain (IsRsync(ctx) → memoizedisRsync(ctx, ...) → isRsync(ctx, ...) → connectWithFastDialer(ctx, ...)), but ListModules and ListFilesInModule extract the execution context and then discard it before calling their underlying helpers. The listModules and listFilesInModule functions should accept context parameters and pass them to rsynclib.ListModules and rsynclib.SocketClient respectively (currently passing nil for the first parameter in SocketClient). This ensures execution deadlines and timeouts are properly respected across all network calls.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/js/libs/rsync/rsync.go` around lines 111 - 127, ListModules and
ListFilesInModule drop the context before calling their helpers; update the
helpers to accept and propagate ctx so timeouts/deadlines are honored. Change
calls from listModules(executionId, ...) and listFilesInModule(executionId, ...)
to include ctx (e.g., listModules(ctx, executionId, host, port, username,
password) and listFilesInModule(ctx, executionId, host, port, username,
password, module)), update those helper signatures accordingly, and inside those
helpers pass ctx into rsynclib.ListModules and into rsynclib.SocketClient
(replace the nil ctx currently passed to SocketClient with the real ctx). Ensure
other callers like IsRsync/memoizedisRsync/isRsync/connectWithFastDialer remain
unchanged and that executionId extraction still occurs where needed.

ℹ️ Review info — Configuration: Organization UI · Review profile: CHILL · Plan: Pro · Run ID: 89db7ce1-122d-49ae-8f12-40d8ade43650

📥 Commits

Reviewing files that changed from the base of the PR and between 455e2a9 and 25322ef.

📒 Files selected for processing (12)
  • cmd/memogen/function.tpl
  • pkg/js/libs/mssql/mssql.go
  • pkg/js/libs/mysql/mysql.go
  • pkg/js/libs/oracle/oracle.go
  • pkg/js/libs/oracle/oracledialer.go
  • pkg/js/libs/postgres/memo.postgres.go
  • pkg/js/libs/postgres/postgres.go
  • pkg/js/libs/rsync/memo.rsync.go
  • pkg/js/libs/rsync/rsync.go
  • pkg/js/utils/nucleijs.go
  • pkg/js/utils/nucleijs_test.go
  • pkg/js/utils/pgwrap/pgwrap.go
🚧 Files skipped from review as they are similar to previous changes (5)
  • cmd/memogen/function.tpl
  • pkg/js/utils/pgwrap/pgwrap.go
  • pkg/js/libs/postgres/postgres.go
  • pkg/js/libs/oracle/oracledialer.go
  • pkg/js/libs/postgres/memo.postgres.go

by bounding socket I/O

Add `setConnDeadlineFromContext` helper to set
deadlines on conns derived from context timeouts.
Move conn cleanup out of loop-scoped defers to
ensure immediate cleanup per probe attempt.

Signed-off-by: Dwi Siswanto <git@dw1.io>
neo-by-projectdiscovery-dev (Bot) commented Apr 1, 2026

Neo - PR Security Review

No security issues found

Hardening Notes
  • The changes fix a common Go anti-pattern (defer in loops) that could cause connection and context exhaustion
  • Connection deadline propagation ensures network operations respect context timeouts, improving reliability


coderabbitai (Bot) left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
pkg/js/libs/rdp/rdp.go (1)

54-63: ⚠️ Potential issue | 🟠 Major

Use context.WithTimeout consistently in isRDP and checkRDPAuth to enforce a unified deadline budget.

Both functions dial with the incoming ctx but apply a separate hardcoded 5s timeout post-dial via rdp.DetectRDP and rdp.DetectRDPAuth. This decoupling allows:

  • The dial itself to consume time from ctx without capping
  • Post-dial operations to run an additional 5 seconds independently

Mirror the checkRDPEncryption pattern instead: derive a child context with context.WithTimeout(ctx, 5*time.Second) before dialing, pass it to Dial, and use setConnDeadlineFromContext to propagate the deadline to the connection. This ensures the total operation stays within a single 5-second budget.

Also replace fmt.Sprintf("%s:%d", host, port) with net.JoinHostPort(host, strconv.Itoa(port)) for IPv6 compatibility.

Literal IPv6 addresses require brackets in host:port notation. fmt.Sprintf produces invalid addresses like ::1:3389, while net.JoinHostPort correctly formats them as [::1]:3389. The checkRDPEncryption function already uses the correct approach.

Also applies to: 112-121

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/js/libs/rdp/rdp.go` around lines 54 - 63, Wrap the entire dial+detect
sequence in a child context with a 5s timeout (ctx, cancel :=
context.WithTimeout(ctx, 5*time.Second); defer cancel()) and pass that child
context to dialer.Fastdialer.Dial instead of the original ctx; after Dial, call
setConnDeadlineFromContext(conn, ctx) to propagate the deadline to the
connection, then call rdp.DetectRDP (and rdp.DetectRDPAuth in the other block)
without a separate hardcoded timeout parameter so the deadline is unified
(mirror checkRDPEncryption's pattern). Also replace fmt.Sprintf("%s:%d", host,
port) with net.JoinHostPort(host, strconv.Itoa(port)) when building the address
for Dial to ensure IPv6 correctness. Apply the same changes for the other
occurrence that calls rdp.DetectRDPAuth.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@pkg/js/libs/rdp/rdp.go`:
- Line 55: The dial target is built with fmt.Sprintf("%s:%d", host, port) which
fails for IPv6; update the calls that pass the host:port string to
dialer.Fastdialer.Dial to use net.JoinHostPort(host, strconv.Itoa(port)) instead
(e.g., replace the conn, err := dialer.Fastdialer.Dial(ctx, "tcp",
fmt.Sprintf("%s:%d", host, port)) call and the other similar occurrence with
net.JoinHostPort to ensure bracketed IPv6 addresses are handled correctly).


ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 7c92425d-8e1a-42d5-960e-9a67b6901a5a

📥 Commits

Reviewing files that changed from the base of the PR and between 25322ef and 2140738.

📒 Files selected for processing (1)
  • pkg/js/libs/rdp/rdp.go

Comment thread pkg/js/libs/rdp/rdp.go

timeout := 5 * time.Second
conn, err := dialer.Fastdialer.Dial(context.TODO(), "tcp", fmt.Sprintf("%s:%d", host, port))
conn, err := dialer.Fastdialer.Dial(ctx, "tcp", fmt.Sprintf("%s:%d", host, port))
⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# Check if the file exists and examine its content
cd /tmp/repository && find . -name "rdp.go" -type f

Repository: projectdiscovery/nuclei

Length of output: 132


🏁 Script executed:

# List the repository structure
ls -la pkg/js/libs/rdp/ 2>/dev/null || echo "Directory not found, checking broader structure"

Repository: projectdiscovery/nuclei

Length of output: 299


🏁 Script executed:

# Search for the rdp.go file
find . -path "*pkg/js/libs/rdp*" -type f 2>/dev/null | head -20

Repository: projectdiscovery/nuclei

Length of output: 121


🏁 Script executed:

# Examine the rdp.go file around lines 55 and 113
cat -n pkg/js/libs/rdp/rdp.go | sed -n '50,60p'

Repository: projectdiscovery/nuclei

Length of output: 436


🏁 Script executed:

# Check line 113
cat -n pkg/js/libs/rdp/rdp.go | sed -n '108,118p'

Repository: projectdiscovery/nuclei

Length of output: 497


🏁 Script executed:

# Search for net.JoinHostPort usage in the file
rg "JoinHostPort|CheckRDPEncryption" pkg/js/libs/rdp/rdp.go -n

Repository: projectdiscovery/nuclei

Length of output: 752


🏁 Script executed:

# Check imports to see what's already imported
head -30 pkg/js/libs/rdp/rdp.go

Repository: projectdiscovery/nuclei

Length of output: 715


Use net.JoinHostPort for these dial targets.

fmt.Sprintf("%s:%d", host, port) breaks literal IPv6 targets because Go expects bracketed IPv6 host:port strings like [::1]:3389. net.JoinHostPort handles this correctly by checking the host format and adding brackets when needed. The file already uses this pattern in CheckRDPEncryption at lines 214 and 255, and the necessary imports (net, strconv) are already present.

🔧 Minimal fix
- fmt.Sprintf("%s:%d", host, port)
+ net.JoinHostPort(host, strconv.Itoa(port))

Also applies to: 113-113

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/js/libs/rdp/rdp.go` at line 55, The dial target is built with
fmt.Sprintf("%s:%d", host, port) which fails for IPv6; update the calls that
pass the host:port string to dialer.Fastdialer.Dial to use
net.JoinHostPort(host, strconv.Itoa(port)) instead (e.g., replace the conn, err
:= dialer.Fastdialer.Dial(ctx, "tcp", fmt.Sprintf("%s:%d", host, port)) call and
the other similar occurrence with net.JoinHostPort to ensure bracketed IPv6
addresses are handled correctly).

@dwisiswant0 dwisiswant0 changed the title fix(js): prevent pool slot starvation and zombie goroutines under load fix(js): watchdog + propagates context to all JS library network calls Apr 1, 2026
@dwisiswant0 dwisiswant0 merged commit 3537030 into dev Apr 1, 2026
21 checks passed
@dwisiswant0 dwisiswant0 deleted the 6894-fix-js-pool-starvation branch April 1, 2026 07:53


Development

Successfully merging this pull request may close these issues.

[BUG] JS protocol templates silently drop matches under load

3 participants