Is it possible to have stable IDs across full run/individual test runs? #837
Comments
We could generate an ID by hashing the name? Then we could potentially add another flag to run tests by ID instead of name?
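Something like this, perhaps (a minimal sketch; using package:crypto for the hash is just one option, chosen here because it's stable across VM runs):

```dart
import 'dart:convert';

import 'package:crypto/crypto.dart';

// Derives a stable ID from the full test name (group names plus the
// test's own name), so the same test hashes to the same ID whether it
// runs as part of a full suite or on its own.
String stableTestId(String fullName) =>
    sha1.convert(utf8.encode(fullName)).toString();
```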
It seems like it would also be useful to have a command that would just give you the list of available tests in a file, which would also have stable IDs associated with each test?
If we use a hash strategy, we'd need to handle cases where tests have the same name, but I think that's doable. Another option to consider is to discover test cases using the outline of the file from the analysis server, rather than from the JSON output of the test runner. One of the element kinds is […]. Caveats: I don't know how this handles things like […]
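For the duplicate-name case, one hedged sketch (building on the stableTestId sketch above) is to fold a per-name occurrence index into the hash, with the caveat that the index depends on declaration order:

```dart
// Disambiguates tests that share a full name by appending an
// occurrence index (in declaration order) before hashing. Caveat:
// inserting a new same-named test earlier in the file shifts the
// indices of the later duplicates.
class StableIdGenerator {
  final _seen = <String, int>{};

  String idFor(String fullName) {
    final index = _seen.update(fullName, (n) => n + 1, ifAbsent: () => 0);
    return stableTestId('$fullName#$index');
  }
}
```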
This would improve some cases, but the name is more likely to change than the position in the file, so I think it's less useful than having the IDs match between a full run and a single-test run.
That would definitely be useful, because we could then populate the list of tests when the project opens. Even better if it can do all suites - when running with the debugger we can only do one file at a time (I don't know if that's a […])
We could add the numeric ID into it, but then we're back to the original issue: it would need to be a stable numeric ID so that running a single test still gives the same one.
I did briefly discuss this recently with Devon, but it sounded like there wasn't a clear way to map from that data to the test run/results?
I was wondering about that too... I figured it should be fine, since presumably all code is executed except for the body of the test (it'd need to be, to check whether the name matches what was passed?), so if a loop is outside of the […]
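For example (a sketch of the loop case; whether non-matching tests still show up in the output is the part I'm unsure about):

```dart
import 'package:test/test.dart';

void main() {
  // The loop itself runs while tests are being declared; only the
  // test bodies are subject to the name filter. So all three test()
  // registrations should still happen even when only one is executed.
  for (var i = 0; i < 3; i++) {
    test('handles input variant $i', () {
      // ...
    });
  }
}
```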
Not sure I agree with that assertion ;). Once a test is written it should pretty rarely be renamed, but new tests might be added anywhere in the file (to an existing group, or simply added to the top of the file instead of the bottom).
Yeah, I'm not actually sure what I was thinking when I said that! Something else I was thinking of just before - when users click a test in my runner, I jump to that location. But VS Code doesn't let me "unselect" the tree node, so as you move around the code in the editor, you have a "random" test node selected. I could sync the editor with the selected test, but I'd need the tests' end positions (currently we only get start positions). It's starting to seem like if we could sync up the Outline data from the analysis server with these tests reasonably well, we'd have a better experience (for example, another wonky thing right now: if you change your tests and then click one in the tree, it can jump you to totally the wrong place). Feels a bit like I'm inventing lots of new work though =)
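If the Outline did give us ranges, the editor-to-tree sync could be as simple as this sketch (the OutlineElement shape here is hypothetical; I'm only assuming the analysis server provides an offset and length per element):

```dart
// Hypothetical stand-in for an analysis server Outline element; the
// assumption is that each element carries an offset and length.
class OutlineElement {
  final String name;
  final int offset;
  final int length;

  OutlineElement(this.name, this.offset, this.length);

  // True if the editor cursor falls inside this element's range, which
  // is what selecting the matching test node would key off.
  bool contains(int cursorOffset) =>
      cursorOffset >= offset && cursorOffset < offset + length;
}
```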
I'm working on integrating a tree for test results into VS Code. This uses the JSON API (via `flutter test`, though eventually it'll handle non-Flutter tests). I've added the ability for the user to run individual tests from the tree, but I've hit a problem matching up the nodes in the tree with the results that come back, which means that if the user chooses to run a single test, I end up having to throw away all of the other nodes.

Let's say I run a suite of 10 tests and get back the IDs 1-10. The user then asks to run test 5. Since I'm only running by name, it's possible I'll get multiple test results back, but the IDs won't match the previous run's, so I can't just update the relevant nodes in the tree (I tried this, but it ended up overwriting node 1 with the new results, leaving a duplicate at node 5).
Is there a way we could get back matching IDs when we run individual tests (for example, returning the full test list but only executing the selected test)? I understand the list isn't guaranteed to match the previous one, because the user could've changed the code, but at least in the case of tweaking and re-running a single test, the tree would remain stable.
(Or, if you have any other ideas for handling this, I'm all ears!).
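In the meantime, one workaround sketch is to key tree nodes by full test name rather than by the run-scoped numeric ID, translating each run's IDs through a per-run map. The event shapes below follow my reading of the JSON reporter's testStart/testDone events, and the TreeNode type is a hypothetical stand-in:

```dart
import 'dart:convert';

// Maps the run-scoped numeric IDs from a single JSON-reporter run back
// onto tree nodes keyed by full test name, so re-running one test
// updates the existing node instead of clobbering node 1.
class RunResultMapper {
  // Tree nodes keyed by the stable full test name.
  final Map<String, TreeNode> nodesByName;

  // Per-run translation: this run's numeric ID -> test name.
  final _nameForRunId = <int, String>{};

  RunResultMapper(this.nodesByName);

  void handleEvent(String line) {
    final event = jsonDecode(line) as Map<String, dynamic>;
    switch (event['type']) {
      case 'testStart':
        // Assumed shape: {"type":"testStart","test":{"id":5,"name":...}}
        final test = event['test'] as Map<String, dynamic>;
        _nameForRunId[test['id'] as int] = test['name'] as String;
        break;
      case 'testDone':
        // Assumed shape: {"type":"testDone","testID":5,"result":"success"}
        final name = _nameForRunId[event['testID'] as int];
        nodesByName[name]?.result = event['result'] as String?;
        break;
    }
  }
}

// Hypothetical minimal tree node.
class TreeNode {
  String? result;
}
```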