Mazy Mice #2312
Conversation
This commit introduces a new exercise named 'Mazy Mice'. The exercise is designed to implement a maze generator that produces perfect mazes, i.e., mazes that have exactly one correct path and no isolated sections. Users must also adhere to the restrictions on the maze's size and layout described in the problem description. The tasks also include a description of how the mazes should be visually represented. The commit also includes canonical data for testing the described functionality.
Hello. Thanks for opening a PR on Exercism. We are currently in a phase of our journey where we have paused community contributions to allow us to take a breather and redesign our community model. You can learn more in this blog post. As such, all issues and PRs in this repository are being automatically closed. That doesn't mean we're not interested in your ideas, or that if you're stuck on something we don't want to help. The best place to discuss things is with our community on the Exercism Community Forum. You can use this link to copy this into a new topic there. Note: If this PR has been pre-approved, please link back to this PR on the forum thread and a maintainer or staff member will reopen it.
Adjusted the format of the 'Mazy Mice' exercise description to enhance legibility. Added separate lines before the maze example sections to clearly differentiate them from the surrounding text. These changes improve the user's reading experience, making it easier for them to understand the task.
Please add a forum link to this discussion.
Thanks for creating the PR! I am not entirely sure how useful the canonical data is right now as the current format does not really allow for easy test generation. This is not something this PR can do a ton about, as randomness is really hard. It might be worth looking at how we deal with randomness in other canonical data:
I think the idea of a maze generator is really cool, I'd just like us to think a bit on how to best structure the canonical data to allow it to be as helpful as it can be for implementing tracks. CC @exercism/reviewers
Expanded the comment section in the canonical-data.json file for the "mazy-mice" exercise to provide more detailed guidelines on the required checks for a generated maze. These include confirming correct maze dimensions, valid character use, a single entrance and exit, that the maze is perfect, and that it is random.
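For illustration, the kinds of checks described might look roughly like the following hypothetical Python sketch. The helper names and the assumption that the maze is passed around as a list of strings are invented here, and the exact grid dimensions depend on the drawing rules in the exercise description:

```python
# Hypothetical helpers mirroring the checks listed in the canonical data's
# comment section. The maze is assumed to be a list of strings.
VALID_CHARS = set("┌─│└┘├┤┬┴┼⇨ ")

def check_dimensions(maze, expected_lines, expected_width):
    # Expected line count and width are parameters because they follow
    # from the rendering rules in the exercise description.
    return (len(maze) == expected_lines
            and all(len(line) == expected_width for line in maze))

def check_characters(maze):
    # Only box-drawing characters, the arrow, and spaces are allowed.
    return all(ch in VALID_CHARS for line in maze for ch in line)

def check_entrance_and_exit(maze):
    # Exactly one arrow on the left edge (entrance) and one on the
    # right edge (exit).
    return (sum(line.startswith("⇨") for line in maze) == 1
            and sum(line.endswith("⇨") for line in maze) == 1)
```

Perfection and randomness are the hard parts: checking them needs a maze parser and solver, which is exactly what makes this canonical data difficult to turn into generated tests.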
Indentation in the maze diagrams within the mazy-mice exercise was inconsistent. The changes ensure tabs are uniformly used for all lines to improve readability and consistency in presentation.
Heading levels in the "mazy-mice/description.md" were adjusted for better organization. '# Hints' was changed to '## Hints', and '## Maze generation' and '## Box drawing characters' were changed to '###'. This allows for an improved logical hierarchy and readability of the document.
Moved the box-drawing reference link from the code block area to the bottom of the document, under the "Maze generation" section. This change was proposed to enhance the readability of the document and provide the link at a more appropriate location.
Expanded test coverage for the 'generateMaze' functionality in the Mazy Mice exercise. This includes tests for maze dimensions, character validity, maze entrance and exit, and more. Additionally, replaced 'createMaze' with 'generateMaze' in existing test cases to ensure consistency. These changes aim to provide a more comprehensive evaluation of the generateMaze function's correctness and robustness.
I have updated the canonical data with additional test cases and provided a Java implementation, which can be found at exercism/java#2355. However, I am encountering an issue with my markdown formatting and the CI/CD process is failing as a result. I am unsure of what steps to take to correct the formatting error.
@rabestro You can format the file using:
The expected values are still string values. Did you look at the examples I listed? If so, what did you think? CC @exercism/reviewers for thoughts on how to best structure this canonical data.
Changed the casing of a markdown hyperlink reference to maintain consistency. Also adjusted the formatting of the 'Box drawing characters' table for better legibility. These changes aim to improve the readability and adherence of the documentation to markdown standards.
fixed, thank you!
In the 'exercises/mazy-mice/canonical-data.json' file, the results format for each test case has been updated. Instead of providing an expected result as a string, a boolean or an object structure is used. The change is performed to improve the test result validation process. Boolean values help to clearly identify the pass/fail status, while the object returns the expected maze dimensions, allowing for more detailed automated test result evaluations.
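For context, entries in the canonical data follow the repository's uuid/description/property/input/expected shape; under the new format, a dimensions case might look roughly like this (the values and wording are invented for illustration):

```json
{
  "uuid": "00000000-0000-0000-0000-000000000000",
  "description": "maze has the requested dimensions",
  "property": "generateMaze",
  "input": { "rows": 5, "columns": 10 },
  "expected": { "rows": 5, "columns": 10 }
}
```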
Did you mean to close it?
I changed to numbers and boolean values.
Renamed exercise from "maze" to "mazy-mice" in metadata. Added new validation rule enforcing rows and columns to be within 5 to 100, along with a test case for rows less than the minimum allowed.
Introduce Mickey and Minerva, the main characters of the Mazy Mice exercise. The introduction sets the stage for the maze-solving challenge, emphasizing a single correct path to guide users.
Moved content from description to instructions for clarity and streamlined examples section. Removed redundant box-drawing character table while keeping key details for maze generation intact.
The link reference "[box-drawing]" was not used anywhere in the document, so it has been removed to clean up the file. This change improves clarity and eliminates unnecessary clutter.
Co-authored-by: Erik Schierboom <[email protected]>
│ └─┐ ┌─┐ │ │ │ ├── ├───┐ │ │ ──┼── │
│ │ │ │ │ │ │ │ │ │ │ │
└── │ │ ├───┴───┤ ┌─┘ ┌─┘ │ ├── │ ──┤
⇨ │ │ │ │ │ │ │ │ │
There are still some wide arrows in the documentation.
Yes, if I understood correctly, we can keep UTF-8 symbols in the description for information (illustration) purposes. The illustration rendered correctly for this exercise (AWK and Java tracks). I've deleted the table with symbols, as mentioned by @ErikSchierboom.
I guess you mean it's rendered correctly on the website? That's not the only place where people will see it. It's rendered badly here on GitHub and it will be rendered badly in all the various editors people use locally. It's kind of a minor thing, but I don't see the benefit of using a fancy arrow symbol in ascii art instead of just >, -> or =>.
In the past I've used this one ➜ in some of my PRs and it seems to be rendered fine in editors and the website.
Some general feelings of mine about this exercise: I don't like exercises with randomness. They are hard to test properly. I think this can be seen here pretty well, where the test data is not very useful. Basically the only information in every test case is the description of what property should be checked. Like Erik said, you can't use that to generate the actual test cases. Every track will have to write the test cases from scratch.

I also have a feeling that it's pretty easy to generate a maze that technically complies with the properties but would be very boring to solve.

I have a very opinionated idea: we could force users to implement a specific algorithm to generate the maze. Then we know what kind of "random" decision the algorithm will have to make at every step. We can supply the list of decisions of the algorithm as input and the precise maze that must be generated as data to test against.

cons:
pros:
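To make the idea concrete, here is a minimal sketch of a backtracking generator whose only randomness comes through an injected choose function. The algorithm choice, names, and decision encoding are all invented for illustration; the canonical data would have to pin these down precisely:

```python
import random

# Hypothetical sketch: a depth-first backtracker where `choose(n)` picks an
# index in range(n). Tests can script the choices; real use injects an RNG.
def generate_walls(rows, cols, choose):
    # passages[cell] holds the neighbouring cells the passage is open to.
    passages = {(r, c): set() for r in range(rows) for c in range(cols)}
    visited = {(0, 0)}
    stack = [(0, 0)]
    while stack:
        r, c = stack[-1]
        candidates = [(r + dr, c + dc)
                      for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                      if (r + dr, c + dc) in passages
                      and (r + dr, c + dc) not in visited]
        if not candidates:
            stack.pop()  # dead end: backtrack
            continue
        nxt = candidates[choose(len(candidates))]
        passages[(r, c)].add(nxt)
        passages[nxt].add((r, c))
        visited.add(nxt)
        stack.append(nxt)
    return passages

def scripted(decisions):
    # Deterministic choice function for tests: replay a recorded list.
    it = iter(decisions)
    return lambda n: next(it) % n

# Production use: genuinely random decisions.
maze = generate_walls(5, 8, lambda n: random.randrange(n))
```

With the decisions pinned down like this, the canonical data could carry the decision list as input and the exact rendered maze as the expected value.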
@senekor Would the maze generation algorithm be easy to explain in steps?
You mean easy to explain to students? I guess it depends on the algorithm we choose. There's a list here. I think it doesn't matter much if we choose an easy or a difficult one; in either case the difficulty is at least standardized. This will also make it easier for tracks to select an appropriate difficulty setting.
I'm a bit torn. I quite like the randomness aspect as that fits my mental model of generating mazes. That said, the implementation of its tests is a lot harder. Maybe it would be useful if someone tried implementing this exercise in their language of choice.
[I don't know how useful this will be, I have not read the PR and I'm only reacting to the last comment, but] we have a concept exercise in the Elm track that's about generating a random maze with random treasure in it.
I think it would be pretty easy to make most tests deterministic and then have one or more random ones at the end. Those wouldn't be primarily for checking correctness, but just to drive home the feeling of generating mazes randomly. That would even teach the additional valuable lesson that you can split programs with randomness in a deterministic part that can still be well tested. Basically the "functional core, imperative shell" concept.
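A rough sketch of what that split could look like, reusing generate_walls and scripted from the sketch above (the expected passage in the first test is specific to that sketch's neighbour ordering):

```python
import random

def test_scripted_choices_yield_exact_maze():
    # Deterministic: three scripted decisions fully determine a 2x2 maze.
    maze = generate_walls(2, 2, scripted([0, 0, 0]))
    assert maze[(0, 0)] == {(1, 0)}

def test_random_maze_is_perfect():
    # Random: assert only the perfect-maze property. A perfect maze is a
    # spanning tree: every cell reachable, with exactly cells - 1 passages.
    maze = generate_walls(5, 5, lambda n: random.randrange(n))
    seen, stack = {(0, 0)}, [(0, 0)]
    while stack:
        for nxt in maze[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    assert len(seen) == len(maze)
    assert sum(len(v) for v in maze.values()) // 2 == len(maze) - 1
```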
I'd like to get some more people's thoughts on this. @exercism/reviewers would you mind chiming in here?
Should this be on the forum and targeted at maintainers, who would be the ones building the tests? Is the concern about the complexity and maintenance around keeping tests reliable? The cost of needing to write a maze solver to test this exercise? Glenn and I have already approved this exercise in the

On the one hand, it's a fun and interesting exercise for students to solve. On the other hand, writing the tests for it would definitely be harder than most (all?) of the other exercises by a large margin. Is that an issue if maintainers can opt to omit this exercise? Would clear docs with guidance on writing tests help?
Add Practice Exercise: Mazy Mice
Exercise Description:
This pull request introduces a new practice exercise called "Mazy Mice".
Mickey and Minerva are two clever mice who love to navigate mazes to find cheese. They prefer mazes that have only one correct path to the cheese, with no loops or inaccessible sections.
The goal of this exercise is to write a program that generates "perfect" mazes. A perfect maze is defined as a maze where every cell is reachable and there is exactly one path between any two cells. The generated maze should be rectangular, with an entrance on the left and an exit on the right.
Key Features of the Exercise:
The maze is drawn using box-drawing characters (┌, ─, │, └, ┘, ├, ┤, ┬, ┴, ┼) for walls and passages, and an arrow symbol (⇨) for the entrance and exit.

This PR includes a canonical-data.json file with test cases covering maze dimensions, valid character use, the entrance and exit, maze perfection, and randomness.

This exercise aims to provide a challenging and engaging problem involving algorithm design, data structure manipulation (representing the maze), and careful output formatting.