Decide on track curriculum #1099
I'm not sure what the original intent behind core exercises was, or how other tracks are using them. If someone can provide a good argument for designating a subset of our <100 exercises as
I'd also like to add "set
Not a bad idea, particularly for the first exercise in the track containing a new Python-specific concept (magic methods, context managers, etc.). It would be best to do this after the reorder. In fact, (intentional or not) the order in which these tasks were presented is probably the order in which they should be completed.
I think that in Nextercism, there will be core exercises that unlock additional exercises. The aim of this is for the
@N-Parsons yes, you are absolutely right.
Added to the list of tasks. Such organization can also help us focus on the main exercises.
Well, if it's used in Nextercism, then yes, we will want to determine core exercises. I had thought it was just a legacy attribute.
I have been asked for input on this topic as it was me who complained about the parallel-letter-frequency exercise. I am not a contributor (to this track), only a student. There are three non-trivial views I would like to express. Apologies in advance if they do not seem relevant to you.

**Who Cares if a Level 5 Exercise Occurs Between Two Level 1 Exercises?**

Well, who cares if this intimidates beginners enough that they drop out? Who cares if this encourages otherwise diligent students to find ways to skip an exercise? (Is there a 'proper' way to skip an exercise?) Who cares if this encourages until-now honest students to submit a blank (not an incomplete, but a not-even-attempted) solution so they can plagiarise someone else's? But most of all, who cares if students do this and base their solution on a broken solution, under the delusion that it must be good because it passes all the tests? Hopefully, everyone on the project cares.

**How to Detect if an Exercise is Out of Sequence?**

I assume that exercises are supposed to get more difficult gradually. An easy exercise at the end of config.json is a waste of time. Even if contributors followed the guidelines diligently, adjusted the 'canonical' difficulty level to something appropriate for their track, and configlet gave no warnings to suggest anything amiss with the order of entries in config.json, you would still have a mechanism that is only as good as the assigned level of difficulty. Changing the level of difficulty does not make an exercise easier or more difficult. It's the GIGO principle.

The Python track has lots of students, so some statistical analysis of submissions might be meaningful. A simple analysis, ordered by config.json, of how many students attempt each exercise each week should show a gradual decline due to natural wastage. A large drop between two consecutive exercises might be worth a closer look. Likewise, an analysis of the average number of iterations per student might suggest exercises that many students don't take in their stride. Likewise, how many first (and only) submissions do not pass the tests. If such analyses don't suggest some exercises are giving students more trouble than others, then there is no good reason to change anything. It's the "don't fix it until you know it is broken (and can test the fix)" principle.

**Testing the Untestable?**

The parallel-letter-frequency test does not test for parallel execution. I submitted a 'test passing' solution that makes no attempt to execute anything in parallel, and more than one of the solutions I examined did likewise. I am not criticising the contributor; I am asking why this test case is in the Python track at all. What purpose does it serve? I am not familiar with Python's concurrent processing modules, but I do know something of the issues involved. Of the other solutions I examined, I judged more than half to be broken. They are not good examples, except perhaps of how not to do things, but how is the Level 1 student to know?

The pyunit test framework does not, as far as I know, have any support for testing concurrent execution. I thought there might be a framework that does, but I could not find one. The tests do not run for long enough to catch those broken solutions that update a single counter without any protection against 'lost updates'. The tests do not measure execution times with and without parallel execution, so they do not catch those broken solutions whose protection against 'lost updates' effectively serialises the computation of multiple execution threads. The tests do not even check that the solution imports an appropriate Python module.
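For what it's worth, the lost-update problem described above disappears entirely if the solution never shares mutable state between workers. A minimal sketch (the function names here are illustrative, not the track's official stub) counts each text chunk independently and merges the per-chunk results afterwards:

```python
# Hedged sketch of a parallel letter-frequency approach with no shared
# mutable state: each chunk gets its own Counter, so there is nothing
# to lock and no update can be lost. Names are illustrative only.
from collections import Counter
from concurrent.futures import ThreadPoolExecutor


def _count_letters(text):
    # Count alphabetic characters in one chunk, case-insensitively.
    return Counter(c for c in text.lower() if c.isalpha())


def calculate(texts, workers=4):
    # Map each chunk to its own independent Counter in a worker thread,
    # then merge the results sequentially in the calling thread.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        counters = pool.map(_count_letters, texts)
    total = Counter()
    for counter in counters:
        total += counter
    return total
```

This does not answer the harder question of how a test suite could *verify* parallelism, but it shows why a well-structured solution has no 'lost updates' to protect against in the first place.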
Thanks for raising this @AtelesPaniscus, I've created #1106 for discussion of
@AtelesPaniscus some good points there. I'll likely add more to my response later after giving it more thought, but for now I'd just like to throw out that there is in fact a proper method to skip exercises: `exercism skip <track> <exercise>`. However, it goes without saying that if you're using this command often, then you might be at your challenge point in the track.
@m-a-ge Good luck with the todo list this weekend. I agree that

For me, they were about learning (so that I will never forget) that

So, my question is not which exercises are about comprehensions and generators, but whether these exercises come before or after the string manipulation exercises. Looking around the solutions provided by other students, one might be forgiven for thinking these exercises are about avoiding string manipulation using a "two out of three ain't bad" combo of

It seems these exercises are about (premature) optimisation. I am not impressed. My hints would be something along the lines of:

Overall hint: There is more to optimisation than big O notation. Enjoy.
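The comprehensions-versus-string-manipulation tension raised above can be made concrete with a toy comparison (both functions are hypothetical, not taken from any exercise): the two implementations produce identical results, and the ordering question is really about which idiom a student should meet first.

```python
# Two equivalent ways to count vowels; neither is "the" right answer,
# which is exactly why curriculum ordering matters. Hypothetical example.

def vowel_count_loop(word):
    # Plain string manipulation: easy for a beginner to step through.
    total = 0
    for ch in word.lower():
        if ch in "aeiou":
            total += 1
    return total


def vowel_count_genexp(word):
    # Generator expression: same result in one line, no intermediate list,
    # but it presumes the student already knows comprehension syntax.
    return sum(1 for ch in word.lower() if ch in "aeiou")
```

A track that serves the generator version's concepts before the loop version's concepts is, in effect, answering the "before or after" question whether it means to or not.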
@AtelesPaniscus thanks a lot for your input, and sorry for the late reply.
We have to decide on the track curriculum and make changes in `config.json` accordingly (for example, reorder exercises by their difficulty, since they are served in order, as reported in #891 (comment)). Moreover, it would be awesome to have a `hints.md` for each exercise to explain certain concepts helpful for that exercise.

TODO:

- reorder exercises by difficulty in `config.json` (set `unlocked_by` accordingly)
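To illustrate the `unlocked_by` mechanism mentioned in the TODO, here is a hypothetical fragment of a track `config.json` (the slugs, difficulties, and field subset shown are illustrative, not the Python track's actual entries):

```json
{
  "exercises": [
    {
      "slug": "hello-world",
      "difficulty": 1,
      "core": true
    },
    {
      "slug": "parallel-letter-frequency",
      "difficulty": 5,
      "core": false,
      "unlocked_by": "hello-world"
    }
  ]
}
```

Under this scheme, completing a core exercise unlocks the non-core exercises that name it in `unlocked_by`, which is what makes the core/non-core designation and the ordering decisions above matter.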
Hopefully I'll have some time to work on this over the next few weekends.