
Is it possible to have stable IDs across full run/individual test runs? #837


Open

DanTup opened this issue May 29, 2018 · 6 comments

@DanTup
Contributor

DanTup commented May 29, 2018

I'm working on integrating a tree for test results into VS Code:

[Screenshot: tree of test results in the VS Code sidebar]

This uses the JSON API (via flutter test, though eventually it'll handle non-flutter tests). I've added the ability for the user to run individual tests from the tree, but I've hit a problem matching up nodes in the tree with the results that come back, which means that if the user chooses to run a single test, I end up having to throw away all of the other nodes.

Let's say I run a suite of 10 tests and get back the IDs 1-10. The user then says to run test 5. Since I'm only running by name, it's possible I'll get multiple test results back, but the IDs won't match those from the previous run, so I can't just update the relevant nodes in the tree (I tried this, but it ended up overwriting node 1 with the new results, leaving a duplicate at node 5).

Is there a way that we could get back matching IDs when we run individual tests (for example, returning the full test list but only executing the selected test)? I understand the list isn't guaranteed to match the previous one because the user could've changed code, but at least in the case of tweaking and re-running a single test the tree would remain stable.

(Or, if you have any other ideas for handling this, I'm all ears!).

@jakemac53
Contributor

We could generate an id by hashing the name? Then we could potentially add another flag to run tests by id instead of name?
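For the sake of discussion, a minimal sketch of what deriving an ID from the name could look like (the function name and the choice of FNV-1a are illustrative assumptions, not anything the runner does today):

```dart
// Sketch only: derive a stable ID from a test's full name (its group
// names joined with the test name, the same string used for --name
// matching). FNV-1a is just an example of a deterministic hash.
int stableTestId(List<String> groupNames, String testName) {
  final fullName = [...groupNames, testName].join(' ');
  var hash = 0x811c9dc5; // FNV-1a 32-bit offset basis
  for (final codeUnit in fullName.codeUnits) {
    hash ^= codeUnit;
    hash = (hash * 0x01000193) & 0xFFFFFFFF; // FNV prime, keep 32 bits
  }
  return hash;
}

void main() {
  // The same full name always produces the same ID, regardless of
  // whether the whole suite or only a single test was run.
  print(stableTestId(['math'], 'adds two numbers'));
}
```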

@jakemac53
Contributor

It seems like it would also be useful to have a command that would just give you the list of available tests in a file, which would also have stable ids associated with each test?

@natebosch
Member

If we use a hash strategy we'd need to handle cases where tests have the same name, but I think that's doable.

Another option to consider is to discover test cases using the outline of the file from the analysis server rather than based on the json output from the test runner. One of the element kinds is UNIT_TEST_TEST, which I think can be used to build back up the test name for a given test. I'm pretty sure this is how IntelliJ handles being able to run individual test cases.

Caveats - I don't know how this handles things like test calls in a loop, and I don't think we can support this in the LSP shim on the analysis server without a custom extension.
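On the duplicate-name point: one possible (purely illustrative) approach is to number repeated names in declaration order before hashing, so two tests with identical names still get distinct, repeatable IDs as long as their relative order doesn't change. A rough sketch, with hypothetical names:

```dart
// Sketch only: track how many times each full name has been seen and
// append an occurrence index to repeats, so duplicates stay distinct.
final Map<String, int> _seen = {};

String disambiguatedName(String fullName) {
  final count = _seen.update(fullName, (n) => n + 1, ifAbsent: () => 0);
  return count == 0 ? fullName : '$fullName#$count';
}

void main() {
  print(disambiguatedName('parser works')); // parser works
  print(disambiguatedName('parser works')); // parser works#1
}
```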

@DanTup
Contributor Author

DanTup commented May 30, 2018

We could generate an id by hashing the name?

This would improve some cases, but the name is more likely to change than the position in the file, so I think it's less useful than having the IDs match across a full run versus single test run.

It seems like it would also be useful to have a command that would just give you the list of available tests in a file, which would also have stable ids associated with each test?

That would definitely be useful, because then we could populate the list of tests when the project is opened. Even better if it could do all suites, since when running with the debugger we can only do one file at a time (I don't know if that's a flutter test or pub test limitation). However, it'd still be nice if, when the user runs a single test, we got an atomic/consistent updated list of the tests; otherwise we're still potentially trying to mash two trees together (or we'd just end up running twice: once to get the list and then again to run the test).

If we use a hash strategy we'd need to handle cases where tests have the same name, but I think that's doable.

We could add the numeric ID into it, but then we're back to the original issue: it'd need to be a stable numeric ID so that running a single test still gives the same one.

Another option to consider is to discover test cases using the outline of the file from the analysis server

I did briefly discuss this recently with Devon, but it sounded like there wasn't a clear way to map from that data to the test run/results?

Caveats - I don't know how this handles things like test calls in a loop

I was wondering about that too... I figured it should be fine, since presumably all code is executed except for the body of the test (it'd need to be to check if the name matches what was passed?), so if a loop is outside of the test call, I think it'd be fine (but if you can nest test inside test then who knows).
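For reference, the loop case under discussion looks something like this standard package:test snippet; the loop runs when the file is loaded to declare the tests, and name filtering only skips the bodies of non-matching tests, so each iteration still registers its own uniquely named test:

```dart
import 'package:test/test.dart';

void main() {
  // The loop itself always executes at declaration time; a --name filter
  // only determines which of the registered test bodies actually run.
  for (var i = 0; i < 3; i++) {
    test('handles input $i', () {
      expect(i, lessThan(3));
    });
  }
}
```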

@jakemac53
Contributor

This would improve some cases, but the name is more likely to change than the position in the file, so I think it's less useful than having the IDs match across a full run versus single test run.

Not sure I agree with that assertion ;). Once a test is written it should pretty rarely be renamed, but new tests might be added anywhere in the file (to an existing group, or simply added to the top of the file instead of the bottom).

@DanTup
Contributor Author

DanTup commented May 30, 2018

Yeah, I'm not actually sure what I was thinking when I said that!

Something else I was thinking of just before: when users click a test in my runner, I jump to that location. But VS Code doesn't let me "unselect" the tree node, so as you move around the code in the editor, you have a "random" test node selected.

I could sync the editor with the selected test, but I'd need their end positions (currently we only get start positions). It's starting to seem like if we could sync up the Outline data from the analysis server with these tests reasonably well, we'd have a better experience (for example, another wonky thing right now is that if you change your tests and then click one in the tree, it can jump you to totally the wrong place).

Feels a bit like I'm inventing lots of new work though =)
