Closed
This issue catalogs friction experienced with the "old" LSP "marker tests" in internal/lsp/testdata (run by internal/lsp/tests).
These tests have long been tricky to work with, and recently have caused a significant amount of friction while making changes to error messages in go/parser and go/types.
Notable problems:
- test output includes noisy LSP logs for the entire test session. It would be better if these logs were scoped to the failing package, or excluded entirely.
- test output includes red-herring "errors" that are unrelated to the actual test failure
- tests are all run in the same session / workspace, so changes in far-away files can affect e.g. completion results
- auto-generated test names (which include the annotation position) are unstable and confusing
- failure messages can be hard to read, because they do a poor job of highlighting differences between expected and actual output.
- tests often match error messages too precisely, resulting in churn when error messages change across Go versions
- test annotations are not documented; it is not clear how to add new annotations
- tests run in multiple contexts (as tests for the internal/lsp/source, internal/lsp/cmd, and gopls packages); neither this nor the differences between the contexts is clearly documented, and the need for all three contexts is unclear
- tests use summary*.txt.golden files (varying by Go version) as checksums to ensure that the expected number of tests ran. These are (by construction!) change detectors.