Run individual tests in separate processes #421
This would be huge, not only for polyfill dev, but anything that modifies global state.
Sometimes it's worth it. For example in https://github.com/sindresorhus/get-stdin I had to split my tests into two files as they somehow conflicted with each other.
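To make the global-state concern concrete, here is a minimal sketch (not code from the thread) of the kind of conflict that forces a file split today: two tests in one file touching the same global while AVA runs them concurrently in a single process.

```js
import test from 'ava';

const delay = ms => new Promise(resolve => setTimeout(resolve, ms));

// Both tests touch the same global. Because AVA runs a file's tests
// concurrently in one process, the async gaps let the patches interleave
// and one test can observe the other's stub. A process per test would
// make this interference impossible.
test('stubs Date.now to a fixed value', async t => {
	const original = Date.now;
	Date.now = () => 0;
	await delay(10);
	t.is(Date.now(), 0);
	Date.now = original;
});

test('expects the real Date.now', async t => {
	await delay(5);
	// May fail if it runs while the other test's stub is still installed.
	t.true(Date.now() > 0);
});
```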
That won't be a goal either. We'll just run it normally there. Running files in parallel will also be gone when run in the browser.
Not sure why that would need to be true. Just open a tab per test file (setting this flag).
Oh, didn't realize that was possible. Make sure to mention that in #24 ;)
@sindresorhus: You changed your profile picture! As for the actual topic at hand, I'm not really sure either way. Also, would before === beforeEach then? That's something else we'd have to think about.
@ariporad Test hooks would still work as we would execute the file as normal, but only run a specific test (with hooks) instead of all of them.
Yep, the only issue would be: before(t => setupDatabase())
@sindresorhus: I know, but then the before hook would have to be run for every test.
Ah, that's true.
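To make the hook concern concrete, a minimal sketch assuming a hypothetical `setupDatabase()` helper: today this file-level `before` runs once per file, but with a process per test it would re-run for every forked test.

```js
import test from 'ava';

// Hypothetical, expensive one-time setup (e.g. start an in-memory database).
const setupDatabase = async () => {};

test.before(async () => {
	await setupDatabase();
});

// With the current model, setupDatabase() runs once for this file.
// With a fork per test, each test below would boot its own process and
// re-run the before hook, paying the setup cost twice.
test('reads a record', t => {
	t.pass();
});

test('writes a record', t => {
	t.pass();
});
```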
Maybe add …
I'd rather look into solutions that don't require adding two new methods with very similar semantics.
@jamestalmage: Yeah, didn't you hear? @sindresorhus has decreed that there shall be only one possible way to do anything! 😄
@ariporad Not at all. I just want us to explore other options before adding more API surface. Adding new methods and options is easy. Keeping things minimal while powerful is very hard.
@sindresorhus: that was like 98% joke, sorry.
@ariporad I know, but when rereading my previous comment I sounded a bit dictatorial, so just wanted to clarify ;)
That picture is awesome
Have no idea how I missed this thread, just stumbled upon it. I think it's a bad idea. Not only will it dramatically slow down tests, but it will also introduce a lot of mess in the AVA core. A quick example that will require "custom trickery": serial tests. They won't work after this change. In my opinion, isolation per file is more than enough. A new Node process + Babel witchcraft for each test is a huge overhead.
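For illustration, a hedged sketch of the serial-test point: `test.serial` chains often lean on shared in-process state, which separate processes per test would not preserve.

```js
import test from 'ava';

// Serial tests frequently build on state left behind by the previous test.
// With one process per test, `cache` would be recreated (and empty) in each
// fork, so the second assertion could no longer pass without custom trickery.
const cache = new Map();

test.serial('populates the cache', t => {
	cache.set('answer', 42);
	t.is(cache.size, 1);
});

test.serial('reads what the previous test stored', t => {
	t.is(cache.get('answer'), 42);
});
```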
I certainly don't think it should be the default behavior. But for certain situations (polyfills, etc.) I think the benefits are so huge that they outweigh the downside.
I see both upsides and downsides. This issue is just to explore it. It's very likely we'll never do this.
For A
I'm pretty sure it is an insane idea at the moment, so we should not even try to do it now :)
I don't see why it should be considered insane. There are real downsides to performance, so I don't think it can ever be the default, but there are also real benefits. Perhaps the solution might be a `fork` modifier:

```js
function installPolyfill() {
}

function setup() {
	// other setup??
}

test.before.fork(setup); // run as before in this process, and every forked one.
test.before(installPolyfill); // only run in this process, not in forked ones.

test.fork('pre/post install check 1', t => {
	// isolated process
	// setup has already run, but installPolyfill has not
	t.notOk(check1());
	installPolyfill();
	t.ok(check1());
});

test.fork('pre/post install check 2', t => {
	// isolated process
	// setup has already run, but installPolyfill has not
	t.notOk(check2());
	installPolyfill();
	t.ok(check2());
});

test('foo', t => {
	// shared process with 'bar'
	// setup and installPolyfill have both run
});

test('bar', t => {
	// shared process with 'foo'
	// setup and installPolyfill have both run
});
```

This gets around performance issues by only creating additional forks for the few tests where complete isolation is necessary (I think pre/post polyfill checks are a good example); the majority of your tests can stay fast. I think the API also helps make sense of the "single before" question. I still think we have higher priorities, but I would like to see this implemented eventually.
Sure, but I don't see this being something we prioritize for 1.0.0. It's a nice-to-have, but not essential at all.
👍 Of course at that point you could also make separate test files for the one or two forks you need.
True. Though if that number is 6 or 7 this has a lot more appeal. Especially if there is some amount of code sharing (helper functions, etc.).
@jamestalmage I like your proposal. I think it should be implemented as an extension to ava.
This just seems far too niche and therefore destined to be a low-priority issue that we'll never land. I'm closing this issue. If you're reading this and are thinking "but this would be perfect for my use case", please chime in.
@novemberborn chiming in. :) So, our use case for this (wonderful idea, btw) is testing various hooking of Node.js standard streams (stdout/stderr). We do that kind of stuff a lot because we run arbitrary user code in a sort of sandbox, and we need to hook the streams in order to process them and present them to the user. Needless to say, such hooking is really problematic if you try doing it more than once in the same process. So a fork per test would be wonderful. I admit it's not a mainstream use case, but if any of you guys could think of a way of doing this without actually building it into Ava, then I would be very thankful. Right now we have to resort to "docker container per test case" level hacks, and it's not fun. :(
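For context, a rough sketch (not code from the thread) of the kind of stdout hooking involved, and why installing it more than once in the same process gets messy: a second hook wraps the first, and restoring the "original" write becomes order-dependent.

```js
// Capture everything written to stdout by monkey-patching write().
function hookStdout(onChunk) {
	const originalWrite = process.stdout.write.bind(process.stdout);
	process.stdout.write = (chunk, ...rest) => {
		onChunk(String(chunk));
		return originalWrite(chunk, ...rest);
	};
	// Restoring is only safe if no one else has hooked write() in the meantime.
	return () => {
		process.stdout.write = originalWrite;
	};
}

const captured = [];
const unhook = hookStdout(chunk => captured.push(chunk));
console.log('hello');
unhook();
console.log('captured:', captured); // ['hello\n']
```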
@lavie thanks for your comment. I'd put each test in its own test file. Or, alternatively, write a fixture that is run in a child process. Then you can have all the tests in one file but still control the streams inside the fixture. (You'd probably want to use …)
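A minimal sketch of that fixture approach, assuming a hypothetical `fixture.js` script that installs the stream hooks, runs the code under test, and prints the captured output; each test forks the fixture, so every run gets a fresh process.

```js
// test.js — every test spawns its own copy of the fixture, so the stream
// hooking never has to happen twice in the same process.
import test from 'ava';
import {execFile} from 'child_process';
import {promisify} from 'util';

const run = promisify(execFile);

test('first hooking scenario', async t => {
	// fixture.js is a hypothetical script; 'check-1' selects the scenario.
	const {stdout} = await run(process.execPath, ['fixture.js', 'check-1']);
	t.true(stdout.includes('hooked output'));
});

test('second scenario gets its own process', async t => {
	const {stdout} = await run(process.execPath, ['fixture.js', 'check-2']);
	t.true(stdout.includes('hooked output'));
});
```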
Chiming in, albeit with caveats. There are some other (social) processes that are triggering my situation. I was told "We're using ava for tests, not anything else". Why? It's better, it runs everything in parallel, so speed. Hmm… but every test I see is marked with serial(). "Well, those tests are bad; good tests can be run in parallel." Why are they all marked with serial? "Because we're using sinon, not anything else". And: 🤷♀
Right now we fork per test file, but not per individual test method. We have discussed the latter a few times in other threads, and finally decided to open an issue to discuss.
Pros:
Cons: