Add option to split report #177

Closed
wants to merge 8 commits

Conversation

audricschiltknecht

Add a new "--split-by" option that allows to split report per user-specified keys.
This is close to what was described in #33 (I think), but gives the user the flexibility to choose how to split the reports.

To do this, one must call pytest passing along one or more "--split-by" values. These are ordered values that will be used as keys to split and group reports in a hierarchy. These keys MUST be present in the report instance; they can be set, e.g., in the pytest_runtest_makereport hook.

For example, by defining:

import pytest

@pytest.mark.hookwrapper
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    <...>
    # Any attribute set on the report here can be used as a --split-by key.
    report.language = f'{lang.title()}'
    report.lib = lib

then it can be called with pytest --split-by language --split-by lib and will generate reports grouped by language/lib.

If no option is passed in the pytest invocation, then the current, unified report is generated.
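
For a self-contained illustration, here is a minimal conftest.py sketch (using the @pytest.hookimpl form of the hookwrapper decorator), assuming hypothetically that the keys come from language and lib markers on each test; the marker names and fallback values are illustrative and not part of this PR:

import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    # Hypothetical: read the grouping keys from markers on the test item,
    # falling back to a default when a marker is absent.
    lang_marker = item.get_closest_marker("language")
    lib_marker = item.get_closest_marker("lib")
    report.language = lang_marker.args[0].title() if lang_marker else "Unknown"
    report.lib = lib_marker.args[0] if lib_marker else "unknown"

A test decorated with, say, @pytest.mark.language("python") and @pytest.mark.lib("mylib") would then end up in the Python/mylib group.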

@davehunt
Collaborator

Thank you for the patch @audricschiltknecht - this looks like a powerful feature indeed! Unfortunately I don't feel like I can give this the time it deserves for the next couple of weeks due to some upcoming travel. I did notice that the tests are failing, so perhaps that's something you could look into? I would also appreciate it if you could add new tests for this feature.

Are there default categories that can be split, or is it necessary for categories to be added to the report? Perhaps some additional documentation would make this clearer. Whilst I'm away, if there are any other contributors who would like to provide feedback on this patch, that would really help me on my return. ❤️

@audricschiltknecht
Author

Hi!

This patch includes quite a few changes and would indeed deserve a bit of documentation. I'll also have a look at the tests; they passed fine on my local machine.

For now, there are no default categories defined, so if you do not pass the --split-by option, you get the current report. If you pass them, then you need to define them on the report instance (e.g. using the hook). Maybe we could make some implicit values available? I'm open to suggestions/comments from contributors here.

Enjoy your travel!

@davehunt
Collaborator

davehunt commented Oct 4, 2018

Just to add that I'm now back from my travels (for now). Please add a comment when you've addressed the tests and documentation, and are ready for me to take a look over this.

@audricschiltknecht
Author

audricschiltknecht commented Oct 10, 2018

Hello!
Sorry, I have had and will continue to have very busy days for a while, so not much time to work on this, but I will try to do my best!

I took a look at the failing tests: node and python3.6.

I am not really sure what is wrong with the node tests, since it looks like an issue with headless Chrome:

>> There was an error with headless chrome
Fatal error: Failed to launch chrome!
[0911/160432.361515:FATAL:zygote_host_impl_linux.cc(116)] No usable sandbox! Update your kernel or see https://chromium.googlesource.com/chromium/src/+/master/docs/linux_suid_sandbox_development.md for more information on developing with the SUID sandbox. If you want to live dangerously and need an immediate workaround, you can try using --no-sandbox.
#0 0x55decbe78dfc base::debug::StackTrace::StackTrace()
#1 0x55decbdf8150 logging::LogMessage::~LogMessage()
#2 0x55decd2120c0 service_manager::ZygoteHostImpl::Init()
#3 0x55decbae3c1e content::ContentMainRunnerImpl::Initialize()
#4 0x55decbb18128 service_manager::Main()
#5 0x55decbae21a1 content::ContentMain()
#6 0x55decff1de9d headless::(anonymous namespace)::RunContentMain()
#7 0x55decff1df28 headless::HeadlessBrowserMain()
#8 0x55decbb157fa headless::HeadlessShellMain()
#9 0x55dec9f621ac ChromeMain
#10 0x7fdd0c789f45 __libc_start_main
#11 0x55dec9f6202a _start
Received signal 6
#0 0x55decbe78dfc base::debug::StackTrace::StackTrace()
#1 0x55decbe78961 base::debug::(anonymous namespace)::StackDumpSignalHandler()
#2 0x7fdd12414330 <unknown>
#3 0x7fdd0c79ec37 gsignal
#4 0x7fdd0c7a2028 abort
#5 0x55decbe777b5 base::debug::BreakDebugger()
#6 0x55decbdf85b9 logging::LogMessage::~LogMessage()
#7 0x55decd2120c0 service_manager::ZygoteHostImpl::Init()
#8 0x55decbae3c1e content::ContentMainRunnerImpl::Initialize()
#9 0x55decbb18128 service_manager::Main()
#10 0x55decbae21a1 content::ContentMain()
#11 0x55decff1de9d headless::(anonymous namespace)::RunContentMain()
#12 0x55decff1df28 headless::HeadlessBrowserMain()
#13 0x55decbb157fa headless::HeadlessShellMain()
#14 0x55dec9f621ac ChromeMain
#15 0x7fdd0c789f45 __libc_start_main
#16 0x55dec9f6202a _start
  r8: 00007fdd12a11a40  r9: 000019bc96780800 r10: 0000000000000008 r11: 0000000000000206
 r12: 00007ffeb55fd638 r13: 0000000000000161 r14: 00007ffeb55fd640 r15: 00007ffeb55fd648
  di: 0000000000000b8d  si: 0000000000000b8d  bp: 00007ffeb55fd180  bx: 00007ffeb55fd1f0
  dx: 0000000000000006  ax: 0000000000000000  cx: 00007fdd0c79ec37  sp: 00007ffeb55fd048
  ip: 00007fdd0c79ec37 efl: 0000000000000206 cgf: 002b000000000033 erf: 0000000000000000
 trp: 0000000000000000 msk: 0000000000000000 cr2: 0000000000000000
[end of stack trace]
Calling _exit(1). Core file will not be generated.
TROUBLESHOOTING: https://github.com/GoogleChrome/puppeteer/blob/master/docs/troubleshooting.md

I can reproduce the same issue on my local computer. Would appreciate any help/hint for that.

For python3.6, I am not sure what is wrong, as it seems the test is stuck. I will try to reproduce it locally.

I will also try to update the documentation in my spare time :)

@khornlund

khornlund commented Feb 6, 2019

Hi guys,

I've been looking for something like this, so I tried out your code - it splits on the given attribute correctly (very nice!) but it looks like it has some bugs. For example:

  1. The tick boxes to filter the tests are broken for me. You can remove the tick, but they no longer filter.
  2. If you click a header in a table to sort on that column, the table width shrinks.

Here is the report I generated from my mock test project:
report.zip

The mock test project I used to generate it is here:
RaceTest

You can use __runtests__.py to run it, or from the command line:
pytest tests --html=report.html --split-by race

:)

@audricschiltknecht
Author

Hello,

I apologize, I haven't found time to work on this yet...

@khornlund, thanks for the test case. It is interesting because the tests seem to include "#" in their names, which ends up generating hrefs like href="#Race #0-title", which is an invalid id. I am wondering if that is why you are experiencing these bugs.

I will need to look into that, thanks for the input!

@BeyondEvil
Contributor

@audricschiltknecht Any updates on this? Please let us know if you need any assistance! :)

This is done in preparation for split reports. Make the JS work on relative elements, i.e. specifying the table, rows, and node that we are working on. Indeed, once reports are split, there will be more than one result table (for example) in the HTML, so we need to pass along which one we are working on.
* Create new "--split-by" option. This option can be specified multiple
times. All values passed to this option must be set on the report in the
pytest_runtest_makereport() hook. Then multiple reports will be
generated and grouped by same values.
* Update JS to find proper not-found message.
* Fix CSS.
* Update tests.
An interesting use case is that keys can be any kind of string. As we turn them into HTML ids in the generated output, they need to be valid HTML identifiers (roughly digits, A-Z, and -._). Add a function to turn a string into a valid id, and convert our keys (a sketch of such a function follows after these notes).
* Keep existing behaviour when not grouping.
* When grouping, display a general summary containing the total number of tests run and the duration at the top of the page (after the environment and before links to the specific sections).
* For each section, display the number of tests run plus the outcome checkboxes to filter displayed test results.
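
As an aside on the id-sanitizing step mentioned above, here is a minimal sketch of what such a function could look like (illustrative only, not the PR's actual implementation): it keeps letters, digits, and -._ and replaces everything else.

import re

def make_html_id(value):
    # Replace every character outside A-Za-z0-9, '-', '.', '_' with a hyphen.
    cleaned = re.sub(r"[^A-Za-z0-9\-._]", "-", value)
    # HTML ids should start with a letter, so add a prefix otherwise.
    if not cleaned or not cleaned[0].isalpha():
        cleaned = "id-" + cleaned
    return cleaned

With this, a key such as "Race #0" would become "Race--0", which is safe to use in href="#..." anchors.
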
@audricschiltknecht
Author

Hello,

Sorry for not finding the time to work on this before.
I've updated the PR, including:

  • fixes to the issues found by @khornlund.
  • some documentation.
  • a rename of split-by to group-by, as this is more a grouping than a splitting.

I still need to add some tests.
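
Assuming the renamed option keeps the same semantics as --split-by (a hypothetical invocation, not verified against the final code), the earlier example would become:

pytest --html=report.html --group-by language --group-by lib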

@ssbarnea
Member

I am looking forward to seeing this refreshed, along with a link to a sample report generated with it; that will make it much easier to evaluate and review.

@AmanKAggarwal

Hey! I was wondering what is up with this feature? I want to generate a pytest report grouped by test file, but I did not find any documentation on group-by in the latest pytest-html version! Please let me know as soon as possible. If this feature is not supported by pytest-html, I will have to look for some other HTML report generator to cover my requirements.

@BeyondEvil
Contributor

Hey! I was wondering what is up with this feature? I want to generate a pytest report grouped by test file, but I did not find any documentation on group-by in the latest pytest-html version! Please let me know as soon as possible. If this feature is not supported by pytest-html, I will have to look for some other HTML report generator to cover my requirements.

This feature is not available yet. It will be in the not-so-distant future; however, I can't really give an exact time frame. Sorry.

@ssbarnea added the feature label (This issue/PR relates to a feature request.) on Aug 24, 2020
@BeyondEvil
Contributor

Would @audricschiltknecht be interested in reviving this for v4?

@audricschiltknecht
Author

Hello,
I completely forgot about that PR, thanks for reminding me.
I had a quick look at the changes for V4, and from my point of view, it's close to a full rewrite :)

I'll see what I can do, but I think I'll need some help, as it has been some time since I've played with pytest-html.

@BeyondEvil
Contributor

Hello, I completely forgot about that PR, thanks for reminding me. I had a quick look at the changes for V4, and from my point of view, it's close to a full rewrite :)

I'll see what I can do, but I think I'll need some help, as it has been some time since I've played with pytest-html.

Yes, it's basically a full rewrite.

Happy to help!

@vikshaw-Nokia

Hi all, @audricschiltknecht @BeyondEvil,
I desperately need this feature, as my HTML files are in the GBs as of now.

@RonnyPfannschmidt
Member

What creates that size?

That particular scale may need a backing database

@vikshaw-Nokia

vikshaw-Nokia commented Aug 21, 2024

@RonnyPfannschmidt
If you observe the sample output below:
[screenshot of a sample report showing an assertion failure followed by a long captured-output section]

The assertion failure and its trace are enough for me to debug, but the stdcall section is taking a lot of memory, and when the TC runs for hours, it becomes quite heavy.
50 lines before and after the assertion would be enough for me to debug, but at the current point, I can't even open the HTML.

@audricschiltknecht
Author

Hi all, @audricschiltknecht @BeyondEvil, I desperately need this feature, as my HTML files are in the GBs as of now.

I had a look at it at the beginning of this year, but unfortunately, the code has changed too much and I would basically need to rewrite it from scratch. As I don't work with Python much these days, it has dropped off my TODO list.

I'm going to close this PR as I don't want to keep people's hopes up by making promises that I cannot fulfill, but anyone is welcome to use the code as a base if they need to.
