Interrupting test execution on first exception (through assertion extension API) #1485
Hi @jugglinmike, thanks for your suggestion. I'll respond more in depth later, but for now, for context, could you share how the integration tests happen within an AVA test?
Sure. Here's an example test for a "Todo list" application:

```js
test.afterEach(t => {
  t.context._passed = true;
});

test.afterEach.always(function(t) {
  if (t.context._passed) {
    return;
  }

  return t.context.driver.saveScreenshot(t.title);
});

test('filtering', async (t) => {
  var driver = t.context.driver;

  await driver.create('first');
  await driver.create('second');
  await driver.create('twenty-third'); // intentional test bug
  await driver.complete(1);
  t.deepEqual(await driver.readItems(), ['first', 'second', 'third']);

  await driver.filter('Active');
  t.deepEqual(await driver.readItems(), ['first', 'third']);

  await driver.filter('Completed');
  t.deepEqual(await driver.readItems(), ['second']);
});
```

(I've omitted details about creating the `driver`.) The first assertion is violated due to an intentional test bug. Under Ava's current behavior, the remaining interactions still run, so the screenshot taken in `afterEach.always` no longer reflects the state at the moment of failure. By changing the three invocations of `t.deepEqual` to a throwing alternative such as Node's `assert.deepEqual`, the test is interrupted at the first failure.
Interesting. We actually have #261 which proposes that we report all subsequent assertion failures within a test, which is quite the opposite of what you're looking for. Would it help if the reporter clearly highlighted the first failure?
Another approach might be to decorate the `t` object so that its assertion methods throw.
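A minimal sketch of what such a decorator might look like, assuming only `t.is` is needed. The duplicated comparison mirrors `t.is`'s `Object.is` semantics; none of this is an AVA API:

```js
const test = require('ava');

// Wrap the assertion methods you use so that a failure also throws,
// interrupting the rest of the test body. The failure is still recorded
// with AVA first, so reporting is unchanged.
const throwing = t => ({
  is(actual, expected, message) {
    t.is(actual, expected, message);
    if (!Object.is(actual, expected)) {
      throw new Error(message || 'assertion failed, stopping the test');
    }
  }
  // …wrap the other assertion methods the same way
});

test('stops at the first failure', t => {
  const tt = throwing(t);
  tt.is(1, 2); // recorded by AVA, then thrown, so the next line never runs
  tt.is(3, 3);
});
```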
My goal is to give reviewers a clear picture of the entire problem. While this would improve the report, it wouldn't capture the application's state at the moment of failure.

I initially prototyped something like this, but I'm not comfortable maintaining that kind of wrapper in my test suite.

I might be able to help more if I understood your reluctance. For the time being, I'm considering switching to an assertion library that throws.
Having the option will tempt people to enable it, even though they have no use for it. Indeed we want to run all assertions and provide a log that is even more complete than we have now. I think I understand your use case and I'd like to support it, but I think this is better done in "user space". The question then is how AVA can provide the necessary hooks that you or another library author can build on top of.
No, this might work actually. E.g. you could either screenshot in the event handler or we could have another mechanism for an early exit. You wouldn't even have to wrap each assertion method, you'd just need to wrap the test implementations to intercept the failure events.
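Sketched out, the idea might look something like this. Note that `t.on` and the `assertion-failure` event are hypothetical; AVA exposes no such hook today:

```js
const test = require('ava');

// Hypothetical: wrap a test implementation so that an assertion-failure
// event interrupts the body. The throw from the handler would need the
// sanctioned API discussed below in order to propagate correctly.
const failFast = implementation => t => {
  t.on('assertion-failure', error => { // hypothetical API
    throw error;
  });
  return implementation(t);
};

test('filtering', failFast(async t => {
  // …driver interactions and assertions as in the example above…
}));
```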
Due to that specific deficiency in Ava's logging implementation, I expect many users will be tripped up in the same way. But that's clearly conjecture on my part. I'm pointing it out just in case it influences the design.
This satisfies my use case. Compared to wrapping assertion methods, it also seems easier to maintain.
@jugglinmike yea, we'll have to see how things work out when we report multiple failures from the same test. Presumably we'd highlight the first and de-emphasize the others. The hope is that including them gives a fuller picture of why the test is failing.
Yea, we could add a sanctioned API so the event handler can perform the throw.
No worries. Let's leave this issue open; if you or anybody else has the time, we can flesh out implementation details and make it happen.
Sounds good to me!
I'm not sure if this is the right issue, but #1560 points here, so I'd like to add another use-case for caught assertions: I just wrote some business-logic assertions and unit tests for them, but the only way I can find to say "this input should fail its assertion" is to mark the test as failing. That seems against the "temporary" spirit of `test.failing`.

Trivial example:

```js
const urlShouldBe = function(t, observed, expectedHost, expectedPath) {
  t.is(observed, `${expectedHost}/${expectedPath}`);
};

test('urlShouldBe should fail when domain is wrong', (t) => {
  try {
    urlShouldBe(t, 'google.com/test', 'example.com', 'test');
    t.fail('Should have failed this test!');
  } catch {
    // OK
  }
});
```

(Having a way to add functions to the test instance would be nice, too.)
@edbrannin so you're trying to test your custom assertions (which are wrappers around AVA's built-in assertions). For your use case I'd stub the `t` object that's passed to your assertion. Alternatively your current approach isn't all bad, at least until we land #261. You don't even need the `try`/`catch`, since our assertions don't throw.
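A sketch of the stubbing approach, reusing `urlShouldBe` from the previous comment; the stub's shape is an assumption, not an AVA API:

```js
const test = require('ava');

const urlShouldBe = function(t, observed, expectedHost, expectedPath) {
  t.is(observed, `${expectedHost}/${expectedPath}`);
};

test('urlShouldBe fails when the domain is wrong', t => {
  // Record each call made against the fake `t` instead of asserting.
  const calls = [];
  const stub = {
    is: (actual, expected) => calls.push({actual, expected})
  };

  urlShouldBe(stub, 'google.com/test', 'example.com', 'test');

  // The custom assertion compared the observed URL against the wrong
  // host, so the recorded pair must differ.
  t.is(calls.length, 1);
  t.not(calls[0].actual, calls[0].expected);
});
```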
To recap, assertion failures cannot be observed by the test implementation. For tests that repeatedly have to wait on a slow operation, this means the test run takes longer than strictly necessary.

My proposal is to have a way to configure AVA so it throws an otherwise irrelevant exception when an assertion fails, thus preventing the remainder of the test implementation from executing. I don't want to default to this or make it seem like something you should enable to make tests "run faster". We're also interested in reporting multiple assertion failures.

#1692 proposes a new feature whereby tests can observe the results of their assertions. @jugglinmike's WebDriver problem could be solved by writing an AVA-specific adapter that uses this feature. I'd like to see how that works before implementing the proposal I suggested in this thread. (Potentially, rather than hooking into an event, we'd let you configure how assertion failures are handled.)
This also makes tests more difficult to debug using logs. In my case, I've got all events being logged (such as jobs being executed), and they continue to run even after an assertion fails. So in order to figure out which logs actually led up to the failure, I have to insert a `throw` right after the failing assertion.
Please consider my pull request #2078.
I've been using AVA for my latest test project and this is something that has taken me by surprise. Interrupting execution on the first assertion failure is the common behavior of most test frameworks out there, and AVA is the very first test framework I've encountered that does not do that. In addition to the issues mentioned above, I would like to add: lack of proper documentation of this behavior. I could only find a related sentence in https://github.com/avajs/ava/blob/master/docs/03-assertions.md#assertions stating:
which, imo, is not enough to clearly understand that execution won't be interrupted upon an assertion failure and, on the other hand, could confuse people coming from different test frameworks.

In my specific case, this behavior is making my test suite slower: I'm asserting after some explicit setup steps, and once one of those assertions fails, the rest of the test, including slow UI interactions and the remaining assertions, is destined to fail, so what's the point of executing it? Arguably, you could say my tests are designed incorrectly, and you may be right, but I feel I would have designed them differently if I had known of this behavior from the beginning.

That said, I have some bandwidth to try to fix this, so I'm open to discussing any approach you may already have.
Hi @arcesino,

Per #1485 (comment) we're working on a `t.try()` assertion that lets tests observe the outcome of their assertions. Besides that, clarifications to the documentation are always welcome.
@novemberborn I'm curious how you think that

```js
test("test runner carries on despite assertions", async t => {
  const numerator = 4;
  const denominator = 0;
  t.not(denominator, 0);
  t.log("Should not get here (but does)");
  const result = numerator / denominator;
  t.false(isNaN(result));
});
```

could be rewritten somehow like this?

```js
test("turgid simulation of expected assertion behavior", async t => {
  const numerator = 4;
  const denominator = 0;
  const res1 = await t.try((tt, d) => {
    tt.not(d, 0);
  }, denominator);
  res1.commit();
  if (res1.passed) {
    t.log("Otherwise we would carry on");
    const result = numerator / denominator;
    t.false(isNaN(result));
  }
});
```

I have to agree with the earlier comments that this makes debugging difficult, and I have failed to find a rationale for this highly unexpected design. At the very least, it is misleading to use the term "assertions" for AVA's built-in functions if they have no impact on control flow. The existence of a "fail fast" option only further complicates matters, as it becomes impossible to determine the behavior of a test by looking at it (or, in the case of TypeScript, to write a useful signature leveraging CFA). If anything, I would expect the situation to be reversed, where assertions normally throw but could be used in a special non-throwing mode when you want execution to continue.
@gavinpc-mindgrub yes, for "expensive" test steps that you wish wouldn't execute, you could wrap the previous steps in a `t.try()` and only proceed if they passed. Of course this only makes sense if you have "expensive" steps within a single test. Node Tap does something similar, although it allows top-level assertions and nested tests. Could you elaborate on how this makes debugging difficult for you?
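A sketch of that pattern; `step` is a hypothetical helper built on `t.try()`, not part of AVA:

```js
const test = require('ava');

// Run one step inside t.try(), commit the attempt (so its failures are
// reported), and tell the caller whether it is worth continuing.
const step = async (t, implementation) => {
  const result = await t.try(implementation);
  result.commit();
  return result.passed;
};

test('stops before the expensive part', async t => {
  if (!await step(t, tt => tt.not(0, 0))) {
    return; // the committed failure has already failed the test
  }

  t.log('expensive steps would go here');
});
```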
Hi @novemberborn, thanks for the reply. First I want to apologize for the harsh tone that I used in my earlier post. It has been a long, sleepless week for me, and although I did not intend to be critical without being constructive, I see in retrospect that I was. Thank you, and the rest of the team, for your work on this useful project.

To answer your question, I would say that not knowing about this behavior cost me some considerable time (as @karimsa also noted).
I was questioning much more basic aspects of reality, and how I could possibly be seeing what I was seeing, before realizing that the most recent log from a certain point was not the one associated with "the" failed assertion. Now that I know it, of course, I can adjust by interpreting the output according to this understanding and, more often, by temporarily throwing as necessary to make this correlation more certain.

Moreover, I have reconsidered some aspects of my test-writing practice. In particular, I was using assertions in many places to, e.g., check the validity of a response before looking at further aspects of it (as those further checks would be pointless against a missing object). Although our team has historically used assertions for such checks, I can see how a plain throw is a better fit there.

In short, although I haven't found a way to use the current behavior to advantage, it's something we can live with, probably even without resorting to the "fail fast" option. That said, we also use TypeScript heavily and would benefit from a corresponding set of assertions that did throw, if only to take advantage of type narrowing. I would welcome changes such as are being discussed in #2450 and others. We have some wrappers of our own for the most common cases, and now I understand that they didn't in fact mean what they were saying. Thanks again!
Hey @gavinpc-mindgrub, no worries, and thanks.

Undoubtedly we could improve our reporter and make it clearer when assertions failed, and when logs were emitted, relative to each other. That should help with debugging.

What's interesting about running tests is that they tend to pass! So while you need to make sure your test doesn't pass when it should have failed, you don't necessarily have to be defensive either. A null-pointer crash will still fail your test.

While I don't think we should add a top-level configuration to change this behavior, I am experimenting with a way of creating more specialized test functions, which could be set up to fail immediately when an assertion fails. Keep an eye on #2435 for the low-level infrastructure work.
I've filed #2455 to have our assertions return booleans, which would go some way towards helping folks prevent "expensive" operations if a test has already failed.
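Under that proposal, guarding an expensive step could look roughly like this. This is a sketch against the *proposed* boolean return values, reusing the Todo `driver` from earlier, not released behavior:

```js
const test = require('ava');

test('filtering', async t => {
  const driver = t.context.driver;
  await driver.create('first');

  // Proposed: t.deepEqual() returns false when the assertion fails.
  if (!t.deepEqual(await driver.readItems(), ['first'])) {
    return; // skip the slow UI interactions below
  }

  await driver.filter('Active');
  t.deepEqual(await driver.readItems(), ['first']);
});
```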
Closing in favor of #3201.
As of version 0.21.0, Ava's built-in assertions do not influence the execution of the test body. Specifically, when an assertion is violated, the test continues to execute.

This behavior was previously reported as a bug (via gh-220). It was considered "fixed" (via gh-259) with a patch that modified Ava's output.

This is a feature request for a behavior change: forcibly interrupt test execution when an assertion fails, via a runtime exception.

My use case is taking screenshots to assist debugging of integration test failures. I have been able to react to failing tests programmatically through a combination of the `afterEach` and `afterEach.always` methods. When a test fails, I would like to capture an image of the rendered application, as this can be very useful in identifying the cause of the failure (especially when the tests run remotely on a continuous integration server). Because the test body continues to execute following the failure, by the time the `afterEach.always` method is invoked, the rendered output may no longer reflect the state of the application at the moment of failure.

For unit tests, this might be addressed by making test bodies shorter and more direct. Reducing tests to contain only one meaningful interaction would avoid the effect described above. Because integration tests have high "set up" costs, and because they are typically structured to model complete usage scenarios, this is not an appropriate solution for my use case.

Ava supports the usage of a general-purpose assertion library (e.g. Node.js's built-in `assert` module), and I've found that because these libraries operate via JavaScript exceptions, they produce the intended results. In the short term, I am considering switching to one of these libraries. However, Ava's built-in assertions have a number of advantages over generic alternatives. In addition, restricting the use of Ava's API in my test suite will be difficult moving forward: even with documentation in place, contributors may not recognize that certain aspects of the API are considered "off limits", especially since their usage does not directly affect test correctness.
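For illustration, a sketch of that approach using Node's `assert` module; the `driver` is the same hypothetical helper from the test shown earlier in this thread:

```js
const test = require('ava');
const assert = require('assert');

test('filtering', async t => {
  const driver = t.context.driver;
  await driver.create('first');

  // assert throws on failure, so the test body stops here when violated.
  assert.deepStrictEqual(await driver.readItems(), ['first']);

  t.log('not reached when the assertion above fails');
});
```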
I haven't been able to think of a use case that would be broken by the change I am requesting. But if there is such a use case, then this behavior could be made "opt-in" via a command-line flag.

Thanks for your time, and thanks for the great framework!