fix(core): add LSP diagnostics caching and document refresh fallback #3034
vadimLuzyanin wants to merge 2 commits into QwenLM:main
Conversation
Add a publishDiagnostics notification handler in LspServerManager that caches diagnostics and supports pending diagnostic promises. When the textDocument/diagnostic pull fails in NativeLspService, fall back to force-refreshing the document (didClose + didOpen) and reading from the cached diagnostics.

Also fix review findings:
- Clear cache/pending maps on server restart in resetHandle()
- Clean up pending diagnostics entries on timeout
- Re-track URI in openedDocuments on refresh failure
- Filter workspaceDiagnostics fallback by workspace root URI

Fixes QwenLM#3029

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
Force-pushed from 4a09091 to a6315cb
- Clear stale cache entry before refresh (not after) to prevent wiping fresh publishDiagnostics data that arrives during the delay
- Re-add URI to openedDocuments after successful didOpen to keep local tracking in sync
- Use continue instead of return to aggregate diagnostics from all servers in multi-server setups
- Filter workspaceDiagnostics fallback by all workspace root URIs from WorkspaceContext for multi-root workspace support

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
Force-pushed from 7998f6f to e6bd43d
```ts
        `LSP textDocument/diagnostic failed for ${name}:`,
        error,
      );
```

[Suggestion] `handle.cachedDiagnostics.delete(uri)` runs unconditionally before the `if (openedForServer?.has(uri))` check. If the URI is not tracked in `openedDocuments`, the refresh block is skipped entirely but the cache entry has already been destroyed, so valid cached diagnostics are lost.
```ts
const openedForServer = this.openedDocuments.get(name);
if (openedForServer?.has(uri)) {
  handle.cachedDiagnostics.delete(uri);
  openedForServer.delete(uri);
```
— qwen3.6-plus via Qwen Code /review
```ts
// Force-refresh the document: send didClose + didOpen to trigger fresh analysis
handle.cachedDiagnostics.delete(uri);
const openedForServer = this.openedDocuments.get(name);
```

[Suggestion] Removing the URI from `openedForServer` before the async refresh creates a race window. During the `delay()`, a concurrent `ensureDocumentOpen` call would see the URI as "not opened" and send a duplicate `didOpen`, which is undefined behavior per the LSP spec.
Consider keeping the URI in `openedForServer` and using a separate "refreshing" Set to track URIs currently being refreshed.
— qwen3.6-plus via Qwen Code /review
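A minimal sketch of the suggested "refreshing" Set, using illustrative stand-ins (`openedDocuments`, `refreshing`, `ensureDocumentOpen`, `beginRefresh`) rather than the actual codebase identifiers:

```typescript
// Hypothetical sketch: keep the URI in openedDocuments during a refresh and
// track in-flight refreshes in a separate Set, so a concurrent caller never
// sends a duplicate didOpen. Names are illustrative, not the real code.
const openedDocuments = new Map<string, Set<string>>();
const refreshing = new Set<string>();

// Returns true if a didOpen notification would be sent.
function ensureDocumentOpen(server: string, uri: string): boolean {
  let opened = openedDocuments.get(server);
  if (!opened) {
    opened = new Set<string>();
    openedDocuments.set(server, opened);
  }
  // Still tracked while refreshing, so no duplicate didOpen is sent.
  if (opened.has(uri)) return false;
  opened.add(uri);
  return true;
}

function beginRefresh(uri: string): void {
  refreshing.add(uri); // URI stays in openedDocuments the whole time
}

function endRefresh(uri: string): void {
  refreshing.delete(uri);
}

// A concurrent caller during a refresh sees the document as already open:
ensureDocumentOpen("ts-server", "file:///a.ts"); // true: would send didOpen
beginRefresh("file:///a.ts");
const duplicate = ensureDocumentOpen("ts-server", "file:///a.ts"); // false
endRefresh("file:///a.ts");
```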
```ts
handle.pendingDiagnostics!.set(uri, { resolve });
}),
new Promise<void>((resolve) => {
  setTimeout(() => {
```

[Suggestion] The `pendingDiagnostics` Map stores a single `{ resolve }` per URI. If two callers invoke `diagnostics()` for the same URI concurrently, the second `.set()` overwrites the first caller's resolver. When the notification arrives, only the second caller resolves; the first hangs until the 5s timeout.
Consider storing an array of resolvers per URI:
```ts
const pending = handle.pendingDiagnostics.get(uri);
if (!pending) {
  handle.pendingDiagnostics.set(uri, { resolves: [] });
}
handle.pendingDiagnostics.get(uri)!.resolves.push(resolve);
```
And resolve all in the notification handler.
— qwen3.6-plus via Qwen Code /review
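A self-contained sketch of the resolver-array approach, with hypothetical names (`pendingDiagnostics`, `waitForDiagnostics`, `onPublishDiagnostics`) standing in for the real handle state and notification handler:

```typescript
// Hypothetical sketch: each URI maps to an array of resolvers, so every
// concurrent waiter resolves when publishDiagnostics arrives, not just the
// last one to register. Identifiers are illustrative.
type Resolver = () => void;
const pendingDiagnostics = new Map<string, Resolver[]>();

function waitForDiagnostics(uri: string): Promise<void> {
  return new Promise<void>((resolve) => {
    const pending = pendingDiagnostics.get(uri);
    if (pending) {
      pending.push(resolve);
    } else {
      pendingDiagnostics.set(uri, [resolve]);
    }
  });
}

// Called from the publishDiagnostics notification handler.
function onPublishDiagnostics(uri: string): void {
  const resolvers = pendingDiagnostics.get(uri) ?? [];
  pendingDiagnostics.delete(uri);
  for (const resolve of resolvers) resolve(); // resolve every waiter
}

// Two concurrent callers for the same URI should both resolve:
async function demo(): Promise<number> {
  let resolved = 0;
  const a = waitForDiagnostics("file:///a.ts").then(() => { resolved++; });
  const b = waitForDiagnostics("file:///a.ts").then(() => { resolved++; });
  onPublishDiagnostics("file:///a.ts");
  await Promise.all([a, b]);
  return resolved;
}
```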
```ts
    },
  },
});
await this.delay(DEFAULT_LSP_DOCUMENT_OPEN_DELAY_MS * 5);
```

[Nice to have] `DEFAULT_LSP_DOCUMENT_OPEN_DELAY_MS * 5` is a magic number with no comment explaining why 5x is needed. If the base constant changes, this delay changes proportionally, which may not be intended.
Consider defining a named constant like `LSP_DIAGNOSTIC_REFRESH_DELAY_MS = 1000`.
— qwen3.6-plus via Qwen Code /review
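A small sketch of the decoupled constant; the values below are illustrative assumptions, not the project's actual numbers:

```typescript
// Hypothetical sketch: give the refresh delay its own named constant so
// changing the document-open delay does not silently rescale it.
// Both values are assumed for illustration.
const DEFAULT_LSP_DOCUMENT_OPEN_DELAY_MS = 200;
// Servers typically need extra time to re-analyze after didClose + didOpen.
const LSP_DIAGNOSTIC_REFRESH_DELAY_MS = 1000;

function delay(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

// Usage at the refresh site would then read:
// await delay(LSP_DIAGNOSTIC_REFRESH_DELAY_MS);
```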
```ts
handle.error = undefined;
handle.warmedUp = false;
handle.stopRequested = false;
handle.cachedDiagnostics.clear();
```

[Nice to have] `handle.pendingDiagnostics.clear()` silently drops all pending resolvers. Any in-flight `diagnostics()` callers waiting on `Promise.race` will hang for the full 5s timeout instead of failing fast.
Consider rejecting all pending resolvers with a "server restarted" error before clearing the map.
— qwen3.6-plus via Qwen Code /review
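A sketch of the fail-fast idea, with illustrative stand-ins (`pendingDiagnostics`, `waitForDiagnostics`, `resetHandle`) for the real handle state:

```typescript
// Hypothetical sketch: store both resolve and reject per waiter, and on
// server restart reject everything before clearing the map, so callers
// racing a 5s timeout fail immediately. Names are illustrative.
interface PendingEntry {
  resolve: () => void;
  reject: (err: Error) => void;
}
const pendingDiagnostics = new Map<string, PendingEntry>();

function waitForDiagnostics(uri: string): Promise<void> {
  return new Promise<void>((resolve, reject) => {
    pendingDiagnostics.set(uri, { resolve, reject });
  });
}

function resetHandle(): void {
  // Fail fast instead of letting waiters hang until the timeout.
  for (const { reject } of pendingDiagnostics.values()) {
    reject(new Error("LSP server restarted"));
  }
  pendingDiagnostics.clear();
}

// A waiter caught mid-restart gets the error right away:
const restarted = waitForDiagnostics("file:///a.ts").catch(
  (err: Error) => err.message,
);
resetHandle();
```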
Reaching here for missing diagnostics from gopls (the Go language server). I wonder if we can fix this by sending textDocument/didChange or textDocument/didSave after the edit tool runs? That looks simpler and should not invalidate as much of the server's cache as closing and re-opening the file. (Not tested yet, just my guess)
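The commenter's alternative can be sketched as building a full-text `textDocument/didChange` notification after each edit. Per the LSP specification, a content change without a `range` replaces the whole document; `sendNotification` below is a hypothetical transport stand-in:

```typescript
// Sketch of the didChange alternative (assumed shape per LSP 3.x):
// a contentChanges entry with no range means full-document sync.
interface DidChangeParams {
  textDocument: { uri: string; version: number };
  contentChanges: Array<{ text: string }>;
}

function buildDidChange(
  uri: string,
  version: number, // must increase monotonically per document
  text: string,
): DidChangeParams {
  return {
    textDocument: { uri, version },
    contentChanges: [{ text }], // no range: replaces the whole document
  };
}

const params = buildDidChange("file:///a.ts", 2, "const x: number = 1;\n");
// Hypothetical send over the server connection:
// sendNotification("textDocument/didChange", params);
```

This would keep the server's incremental analysis state warm, unlike didClose + didOpen, at the cost of tracking document versions locally.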
TLDR
Fixes missing LSP diagnostics by caching `textDocument/publishDiagnostics` notifications and using them as a fallback when `textDocument/diagnostic` pull requests fail. Also adds a document refresh (didClose + didOpen) to trigger fresh analysis from the LSP server.

Screenshots / Video Demo
N/A — internal change, verified end-to-end: the fixed build reports all TypeScript diagnostics while the original build returns none. Full flow confirmed in a single session: diagnose → fix → re-diagnose.
Dive Deeper
The LSP `textDocument/diagnostic` pull request fails on some LSP servers (e.g., `typescript-language-server` in certain configurations), leaving users with no diagnostics. This change adds a push-based fallback: `LspServerManager` now listens to `publishDiagnostics` notifications from the server and caches them. When the pull fails, `NativeLspService` force-refreshes the document (didClose + didOpen) and reads from the cache, with a 5s timeout via `Promise.race`. The same cache is also used as a fallback for `workspaceDiagnostics` when `workspace/diagnostic` fails, filtered by workspace root URI.

Reviewer Test Plan
Create a TypeScript file with intentional type errors in a workspace with `.lsp.json` configured for `typescript-language-server`. Run with `--experimental-lsp` and ask for diagnostics; all errors should be reported. Add another error to the file and ask again; the new error should be included. Then ask the CLI to fix the errors and re-check diagnostics; zero errors should remain.

Testing Matrix
Linked issues / bugs
Fixes #3029