bob: Run tests from JSON #487
Conversation
Please don't do that! As the tests are now, I do have a single file where I can see the type AND the value of a sample, while for the JSON stuff I had to look into the test suite for the Haskell type (which might be totally different from the JSON type), and for the actual value(s) I had to look into the JSON. For more complex types than the basic ones, the student would also need to understand how that JSON record/map/array/whatever is converted into a Haskell type…
Don't worry! I'm not rushing any unpopular change. 😄
Makes sense. Thanks! I usually assume that it is inconvenient to look at the test suite code, so we provide type signatures for all exercises, but that solves just part of the problem.
I agree that, even with the … Of course, the differences between the JSON and … What if we could eliminate the need to look at the test suite?! If we could display the data from the failed test, would that solve your problem?
I just updated the PR to minimize the need to look at the test suite, which may be inconvenient as @NobbZ pointed out. Now, if a test case fails, it will print the evaluated expression, the expected value, and the value it got.
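For illustration, here is a minimal sketch (not the PR's actual code) of how an HUnit assertion could carry that information in its failure message; the helper name and the exact formatting are my own:

```haskell
import Test.HUnit (Assertion, assertBool)

-- 'expr' is a textual rendering of the evaluated call,
-- e.g. "responseFor \"Tom-ay-to, tom-aaaah-to.\""
assertCase :: (Eq a, Show a) => String -> a -> a -> Assertion
assertCase expr expected got =
  assertBool message (got == expected)
  where
    message = unlines
      [ "evaluated: " ++ expr
      , "expected:  " ++ show expected
      , "but got:   " ++ show got
      ]
```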
Interesting. Of the existing tracks, only perl5 and perl6 currently do this:
I'll think about this one when I have the time.
Are the advantages/disadvantages listed in relation to the current situation, or are they rather in relation to using a generator? (Actually, there may be some of both, but they can be hard to tell apart.) The primary challenge to overcome when using a generator is combining the generated cases with the rest of the test file (if we put them in the same file); otherwise I guess the advantages/disadvantages are similar to … vs the baseline.
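For context, one generic way that combination problem is sometimes handled (a sketch of the technique, not something the track already has) is to keep the hand-written parts in a template and splice the generated cases in at a marker line:

```haskell
import Data.List (isInfixOf)

-- Replace a marker line in a hand-written template with the generated cases,
-- leaving every other line of the template untouched.
spliceCases :: String -> String -> String
spliceCases generated template = unlines (concatMap expand (lines template))
  where
    expand line
      | "-- GENERATED CASES" `isInfixOf` line = lines generated
      | otherwise                             = [line]
```

A generator would then write `spliceCases generatedTests templateContents` back out as the test file.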
I thought about the advantages in relation to the current situation but, because the logic is mostly the same, moving in this direction would also favor the use of generators. Now, the major obstacle to both approaches is probably exercism/problem-specifications#336. I have some ideas about a JSON schema to solve that, but I want to experiment a little more before proposing anything that is hard to change and would also require a lot of work in other tracks.
About generators, I never tried to write one, but I think they are harder than this solution, so I'm postponing experiments with it for now, until I'm more comfortable with …
This last version uses a newly proposed format for the … This new version has only 12 exercise-specific lines, and now the … I consider that the experiment was successful in the sense that we proved it is possible to generate the test suite at run-time with a minimal amount of exercise-specific code. That said, the preference here seems to be for using offline test generators, which result in more conventional test suites that users can easily inspect. Unless someone else here thinks that running the tests directly from the JSON file may be a good idea, I will close this issue.
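As a rough illustration of what a dozen exercise-specific lines might cover, here is a sketch; the shape of `Case` and the `responseFor :: String -> String` signature are assumptions I made for the example, not necessarily what this PR defines:

```haskell
import Test.HUnit (Test (TestCase, TestLabel), assertEqual)
import Bob (responseFor)

-- Assumed shape of a parsed test case (field names are hypothetical).
data Case = Case { description :: String, input :: String, expected :: String }

-- The exercise-specific part: how bob turns a case's input into the value
-- that must match "expected"; reading and decoding the JSON would be shared.
toTest :: Case -> Test
toTest c = TestLabel (description c) . TestCase $
  assertEqual (show (input c)) (expected c) (responseFor (input c))
```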
Every time an exercise changes in `x-common`, we usually have to do one of the following: …

The former is mostly unavoidable, because `canonical-data.json` doesn't contain enough information to allow deriving Haskell types from it, and this situation will probably never change, as it would force other tracks to deal with a highly complex JSON schema. The latter can be automated, and this is usually done using generators. I propose an alternative!

What if the test suite could read `canonical-data.json` directly?

Disadvantages:
- New dependencies on `aeson` and `bytestring` (Edit: + `HUnit`).
- We would need a copy of `canonical-data.json` in `xhaskell`.

Advantages:
- `canonical-data.json` validation and testing for free.

Proof of concept

The PR applies these ideas to `bob`, which is an easy case. More elaborate cases can be dealt with, but some would benefit from a standardization of the JSON schema.

So... what do you think of it?
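Purely as an illustration of the idea (not the code in this PR), here is a small self-contained sketch of a test runner that decodes `canonical-data.json` with aeson and turns each case into an HUnit test. The `"cases"`/`"description"`/`"input"`/`"expected"` field names, the `Suite` wrapper, and the `responseFor :: String -> String` signature are assumptions made for the example; the real JSON layout and the PR's code may differ.

```haskell
{-# LANGUAGE OverloadedStrings #-}

module Main where

import Data.Aeson (FromJSON (parseJSON), eitherDecode, withObject, (.:))
import qualified Data.ByteString.Lazy as BL
import Test.HUnit (Test (TestCase, TestLabel, TestList), assertEqual, runTestTT)
import Bob (responseFor)

-- Assumed shape of a bob case; the real canonical-data.json layout may differ.
data Case = Case
  { caseDescription :: String
  , caseInput       :: String
  , caseExpected    :: String
  }

instance FromJSON Case where
  parseJSON = withObject "case" $ \o ->
    Case <$> o .: "description"
         <*> o .: "input"
         <*> o .: "expected"

newtype Suite = Suite [Case]

instance FromJSON Suite where
  parseJSON = withObject "suite" $ \o -> Suite <$> o .: "cases"

-- Turn one case into a labelled HUnit test; only this part is bob-specific.
toTest :: Case -> Test
toTest c = TestLabel (caseDescription c) . TestCase $
  assertEqual (caseInput c) (caseExpected c) (responseFor (caseInput c))

main :: IO ()
main = do
  raw <- BL.readFile "canonical-data.json"   -- the copy kept in the repo
  case eitherDecode raw of
    Left err         -> error ("invalid canonical-data.json: " ++ err)
    Right (Suite cs) -> runTestTT (TestList (map toTest cs)) >>= print
```

In a layout like this, only `toTest` would be bob-specific; the decoding and the runner could be shared across exercises, which is what keeps the exercise-specific code small at the cost of the extra `aeson`/`bytestring`/`HUnit` dependencies listed above.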