Why do we skip tests? #105
Comments
There's a long discussion about this here: exercism/exercism#856 It's largely an unresolved discussion, though. For people who are new to programming, skipping tests helps not to overwhelm them, and it simulates the TDD process. For experienced programmers there's no point in having the skips in the first place, but they're also trivial to remove.
Thanks for the link and your explanation, Katrina! I'm beginning to suspect that test skipping may indeed make sense for beginners. For some reason, though, the Python track has test skipping only for advanced problems, where it doesn't make as much sense. As a matter of fact, a user has written to me asking why he had to delete all the skip decorators. To save users this hassle, I'd like to delete the skip decorators - if that's OK with you, @kytrinyx?
Yeah, that sounds good to me. Thanks for discussing!
The following exercises contain tests marked with the decorator

    @unittest.skipUnless('NO_SKIP' in os.environ, "Not implemented yet")

In order to run all tests in these exercises, the user is expected either to comment out/delete the decorators or to set the NO_SKIP environment variable. While both methods are trivial on *NIX systems with the necessary knowledge of environment variables or a tool like awk, they are more troublesome on a computer running Windows.
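For context, here is a minimal sketch of how such a skip decorator behaves. The ExampleTest class and its test names are invented for illustration; only the decorator itself is taken from the exercises:

```python
import os
import unittest

class ExampleTest(unittest.TestCase):
    """Two tests: the second is skipped unless NO_SKIP is set."""

    def test_addition(self):
        self.assertEqual(1 + 1, 2)

    # The decorator used in the exercises: the condition is checked
    # against the environment when the class is defined.
    @unittest.skipUnless('NO_SKIP' in os.environ, "Not implemented yet")
    def test_subtraction(self):
        self.assertEqual(2 - 1, 1)

# Run the suite programmatically so the result can be inspected.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ExampleTest)
result = unittest.TestResult()
suite.run(result)

# Skipped tests still count as "run", but land in result.skipped
# rather than failing; without NO_SKIP a user sees only one real test.
print(result.testsRun, len(result.skipped))
```

This is why beginners see the tests pass one at a time as they work: each skipped test stays green-but-inert until the decorator is removed or NO_SKIP is set.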
I must admit that I've never quite understood why exactly we skip some tests anyway.
Can somebody explain this? What would be lost if we simply delete all these decorators?
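To make the "trivial on *NIX" claim concrete, the decorator-removal step mentioned above could look like the sketch below. The file name and its contents are illustrative, and the sed invocation uses GNU sed's in-place syntax:

```shell
# Create a toy test file containing a skip decorator (contents illustrative)
cat > example_test.py <<'EOF'
import os
import unittest

class ExampleTest(unittest.TestCase):
    @unittest.skipUnless('NO_SKIP' in os.environ, "Not implemented yet")
    def test_one(self):
        self.assertEqual(1, 1)
EOF

# Delete every skip-decorator line in place (GNU sed; on BSD/macOS sed,
# an explicit backup suffix is required, e.g. sed -i '' ...)
sed -i '/skipUnless/d' example_test.py
```

Alternatively, leaving the decorators in place and setting NO_SKIP=1 before running the tests avoids editing the files at all; on Windows the cmd.exe equivalent is "set NO_SKIP=1", which is exactly the extra knowledge the issue body says Windows users tend to lack.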