Add option to split report #177
Conversation
Thank you for the patch @audricschiltknecht - this looks like a powerful feature indeed! Unfortunately I don't feel like I can give this the time it deserves for the next couple of weeks due to some upcoming travel. I did notice that the tests are failing, so perhaps that's something you could look into? I would also appreciate it if you could add new tests for this feature. Are there default categories that can be split, or is it necessary for categories to be added to the report? Perhaps some additional documentation would make this clearer. Whilst I'm away, if there are any other contributors who would like to provide feedback on this patch, that would really help me on my return. ❤️
Hi! This patch includes quite a few changes, and would indeed deserve a bit of documentation. I'll also have a look at the tests; they passed fine on my local machine. For now, there are no default categories defined. So if you do not pass the Enjoy your travel!
Just to add that I'm now back from my travels (for now). Please add a comment when you've addressed the tests and documentation, and are ready for me to take a look over this.
Hello! I took a look at the failing tests: node and python3.6. I am not really sure what is wrong for the node tests, since it looks like an issue with headless Chrome:
I can reproduce the same issue on my local computer, and would appreciate any help/hint with that. For python3.6, I'm not sure what is wrong, as it seems the test is stuck; I will try to reproduce it locally. I will also try to update the documentation in my spare time :)
Hi guys, I've been looking for something like this, so I tried out your code - it splits on the given attribute correctly (very nice!) but looks like it has some bugs. For example:
Here is the report I generated from my mock test project: The mock test project I used to generate it is here: You can use :)
Hello, I apologize, I haven't found time to work on this yet... @khornlund, thanks for the test case. It is interesting because the tests seem to include "#" in their names, thus ending up generating some I will need to look into that, thanks for the input!
@audricschiltknecht Any updates on this? Please let us know if you need any assistance! :)
This is done in preparation for split reports. Make the JS work on relative elements, i.e. specifying the table, rows, and nodes that we are working on. Indeed, once reports are split, there will be more than one result table (for example) in the HTML, so we need to pass along which one we are working on.
* Create new "--split-by" option. This option can be specified multiple times. All values passed to this option must be set on the report in the pytest_runtest_makereport() hook. Then multiple reports will be generated, grouped by the same values. * Update JS to find the proper not-found message. * Fix CSS. * Update tests.
An interesting use-case is that keys can be any kind of string. As we turn them into HTML ids in the generated output, they need to be valid HTML identifiers (roughly digits, A-Z, and -._). Add a function to turn a string into a valid id, and convert our keys.
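The actual helper added by the patch isn't shown in this thread; a minimal sketch of such a conversion (function name and fallback rules are assumptions, not the patch's implementation) could look like:

```python
import re

def make_html_id(value):
    """Turn an arbitrary string into a plausible HTML id.

    Hypothetical helper: keeps only letters, digits, and ``-._``,
    and forces the id to start with a letter.
    """
    # Replace every disallowed character with an underscore.
    cleaned = re.sub(r"[^A-Za-z0-9\-._]", "_", value)
    # HTML4-style ids must start with a letter; prefix one if needed.
    if not cleaned or not cleaned[0].isalpha():
        cleaned = "id" + cleaned
    return cleaned
```

This matters for the bug reported above: a test name containing `#` would otherwise leak an invalid character into the generated id.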
* Keep existing behaviour when not grouping. * When grouping, display a general summary containing the total number of tests run and the total duration at the top of the page (after the environment and before the links to specific sections). * For each section, display the number of tests run plus the outcome checkboxes to filter displayed test results.
Force-pushed from 542e17f to e4f5488
Hello, sorry for not finding the time to work on this before. I still need to add some tests.
I am looking forward to seeing this refreshed, along with a link to a sample report generated with it; that will make it much easier to evaluate and review.
Hey! I was wondering what is up with this feature. I want to generate a pytest report grouped by each test file, but I did not find any documentation on group-by in the latest pytest-html version. Please let me know as soon as possible; if this feature is not supported by pytest-html, I will have to look for some other HTML report generator to cover my requirements.
This feature is not available yet. It will be in the not-so-distant future; however, I can't really give an exact time frame. Sorry.
Would @audricschiltknecht be interested in reviving this for v4?
Hello, I'll see what I can do, but I think I'll need some help, as it has been some time since I've played with pytest-html.
Yes, it's basically a full rewrite. Happy to help!
Hi All,
What creates that size? That particular scale may need a backing database.
@RonnyPfannschmidt The assertion failure and its trace are enough for me to debug.
I had a look at it at the beginning of this year, but unfortunately, the code has changed too much and I would basically need to rewrite it from scratch. As I don't work with Python much these days, it has dropped off my TODO list. I'm going to close this PR, as I don't want to keep people's hopes up by making promises that I cannot fulfill, but anyone is welcome to use the code as a base if they need to.
Add a new "--split-by" option that allows splitting the report by user-specified keys.
This is close to what was described in #33 (I think), but gives the user the flexibility to choose how to split the reports.
To do this, one must call pytest with one or more "--split-by" values. These are ordered values that will be used as keys to split and group reports in a hierarchy. These keys MUST be present on the report instance; they can be set, e.g., in the pytest_runtest_makereport hook.
For example, by defining:
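(The code sample that originally followed this sentence did not survive extraction. As a hedged sketch only: a conftest.py hook attaching the two keys used in the invocation below, `language` and `lib`; how their values are derived here is entirely an assumption.)

```python
# conftest.py -- illustrative sketch, not the PR author's actual example.
import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    # Every key passed to --split-by must exist on the report object.
    # Reading module-level constants is an assumed convention for this sketch.
    report.language = getattr(item.module, "LANGUAGE", "unknown")
    report.lib = getattr(item.module, "LIB", "unknown")
```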
then it can be called with
pytest --split-by language --split-by lib
and will generate reports grouped by language/lib. If no option is passed in the pytest invocation, then the current, unified report is generated.