Clock has too many tests to run at once when starting #393


Closed
paradime opened this issue Nov 10, 2016 · 3 comments

Comments

@paradime

Hey, I just completed the clock exercise and had an issue with the test suite. Because there are so many tests, it's hard to use TDD when all of the tests are failing at the same time. In the ruby challenges, we typically skip unimplemented tests and remove the skip after we get the last test to pass. This allows us to use the previous tests as a regression suite while focusing on the new piece of functionality.
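For what it's worth, the skip-and-unskip workflow described above can be approximated in Python's standard `unittest` with the `@unittest.skip` decorator. This is a hypothetical sketch, not the track's actual test file; the `clock` stand-in and test names are illustrative:

```python
import unittest


def clock(hour, minute):
    # Minimal stand-in for the exercise solution, just so
    # this sketch runs on its own.
    return "{:02d}:{:02d}".format(hour, minute)


class ClockTest(unittest.TestCase):
    def test_on_the_hour(self):
        # Already passing: kept active as a regression check.
        self.assertEqual(clock(8, 0), "08:00")

    @unittest.skip("remove this skip once the previous test passes")
    def test_past_the_hour(self):
        self.assertEqual(clock(11, 9), "11:09")
```

Removing the decorator one test at a time mirrors the ruby track's flow: earlier tests keep running as a regression suite while you focus on the newly unskipped one.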

@paradime
Author

Closing because this is a duplicate of #189 and #171

@behrtam
Contributor

behrtam commented Nov 10, 2016

Just out of curiosity: you would prefer to edit the test suite every time you make a test green, instead of running py.test -x --ff bob_test.py, which would require installing pytest beforehand (http://exercism.io/languages/python/tests)? To me, neither is optimal.

@paradime
Author

Yeah, I skipped over the instructions because I thought they'd be similar to the ruby/javascript/elixir tracks, where it might look something like this:

def test_that_needs_to_pass_first():
    expect(my_thing).is.equal_to(expectation1)

def xtest_that_needs_to_run():
    expect(my_thing).is.equal_to(expectation2)

so by default, the only test that runs is the first one. Then, when you make that pass, you 'unskip' (the x represents a skip) or uncomment the next test.
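As it happens, the same prefix trick works with pytest's default collection rules: a function whose name doesn't match the `test_*` pattern is simply never collected, so renaming a test to `xtest_...` effectively skips it. A sketch under that assumption, with the expectation-style calls replaced by plain `assert`:

```python
def test_that_runs():
    # Collected by pytest because the name matches test_*.
    assert 1 + 1 == 2


def xtest_not_yet_active():
    # Not collected: the leading 'x' breaks the test_* pattern.
    # Rename it back to test_... when you're ready to work on it.
    assert 2 + 2 == 5  # would fail, but pytest never sees it
```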

There's also the concern that there might simply be too many tests: a 50-case test suite for what is effectively an object with a function on it seems like a lot.

I personally have a vim workflow where I'm looking at the test, my code, and a terminal, and I remove 'pending' from my tests as I get them to pass. This way I can focus on getting one test to pass, and then check whether making it pass broke any other functionality.

It might also be possible to section off parts of a test suite so you can focus on making one piece of functionality pass. For example:

# Tests that have to do with the clock object
def test1():
  ...

def test2():
  ...

...
# Tests that have to do with the .add function. These don't run unless you uncomment this line

def test_add1():
  ...

def test_add2():
  ...

...
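In pytest, that kind of sectioning can be done without commenting anything out: grouping tests into classes lets you run one section at a time with the `-k` selector (e.g. `py.test -k TestAdd clock_test.py`). A minimal sketch, with a hypothetical stand-in `Clock` so it runs on its own:

```python
# Minimal stand-in Clock so the sketch is self-contained;
# the real exercise's class has a similar shape.
class Clock:
    def __init__(self, hour, minute):
        self.hour, self.minute = hour, minute

    def __str__(self):
        return "{:02d}:{:02d}".format(self.hour, self.minute)

    def add(self, minutes):
        total = self.hour * 60 + self.minute + minutes
        return Clock((total // 60) % 24, total % 60)


class TestClock:
    # Run just this section with: py.test -k TestClock clock_test.py
    def test_str(self):
        assert str(Clock(8, 0)) == "08:00"


class TestAdd:
    # Run just this section with: py.test -k TestAdd clock_test.py
    def test_add_rolls_over_hour(self):
        assert str(Clock(10, 30).add(61)) == "11:31"
```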

Again, I'm not too familiar with TDD in a python workflow, I'm just bringing my experiences from other languages where the workflow was very smooth.
