test_peg_generator takes 8 minutes on Windows #90682
Comments
test_peg_generator takes between 5 and 16 minutes on the Windows CI run on Python pull requests. Example of timings on my PR #30890
Would it be possible to make these tests faster? Or at least, would it be possible to skip these slow tests on Windows, where spawning a process or running a C compiler is slower? On my Fedora 35 (Linux) laptop, test_peg_generator takes 49.5 seconds; it only takes 736 ms if I disable TestCParser.
The test needs to build a lot of C extensions with different parsers, and that compilation is what takes most of the time. I don't think we should skip these tests by default on Windows, as they give us valuable information (that the parser features work when compiled on Windows). If you have an idea of how to speed up the tests, I'm all ears :)
As a general thing, I expect people on Windows always run the tests with multiple processes, in which case it would generally be helpful to start the longest-running tests first. As is, because of its late-in-the-alphabet name, "test_peg_generator" gets started late in the run and so typically keeps going for minutes and minutes after all other tests have completed. So just renaming it to "test_0peg_generator" would cut minutes off the Windows wall-clock time ;-)
Currently, most CIs run "make buildbottest", which uses the -r option of libregrtest to randomize the test order.
Would it be possible to skip these tests when no parser-related code is changed, to speed up the test run?
Maybe some CIs could export an environment variable containing the list of modified files, and then each test file could decide whether it should be skipped. The risk is skipping a test because a change looks "unrelated" when it actually causes the test to fail. Currently, we only skip tests if no code is changed at all, i.e. when only the documentation is modified.
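A rough sketch of that idea, assuming a hypothetical `CHANGED_FILES` environment variable exported by the CI (the variable name, the separator, and the path prefixes are all made up for illustration; nothing exports this today):

```python
import os
import unittest

def changed_files():
    """Files touched by the change set, as exported by the CI.

    CHANGED_FILES is a hypothetical, colon-separated variable.
    """
    value = os.environ.get("CHANGED_FILES", "")
    return [path for path in value.split(":") if path]

def skip_unless_touched(*prefixes):
    """Skip the decorated tests unless the change touches one of the paths."""
    files = changed_files()
    if not files:
        # No information available: run the test to stay on the safe side.
        return lambda obj: obj
    if any(f.startswith(prefixes) for f in files):
        return lambda obj: obj
    return unittest.skip("no parser-related files were modified")

@skip_unless_touched("Parser/", "Tools/peg_generator/", "Grammar/")
class TestCParser(unittest.TestCase):
    ...
```

The fallback of running the test when no file list is available is what keeps the "looks unrelated but actually breaks the test" risk from turning into silently missed failures.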
Another solution would be to shard the tests on Windows, i.e. run them in two different jobs concurrently; edgedb does this.
It's plenty slow on Linux as well. I like the idea of starting the slower tests first.
How hard would it be to first randomize the list and then move the slowest tests to the front?
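A minimal sketch of that combined ordering. The known-slow set is invented for illustration; libregrtest does not maintain such a list in this form:

```python
import random

# Hypothetical set of tests known to dominate wall-clock time.
KNOWN_SLOW = {"test_peg_generator", "test_tools", "test_multiprocessing_spawn"}

def order_tests(tests, randomize=True):
    """Randomize the test list, then pull the known-slow tests to the front."""
    if randomize:
        tests = random.sample(tests, len(tests))
    slow = [t for t in tests if t in KNOWN_SLOW]
    fast = [t for t in tests if t not in KNOWN_SLOW]
    return slow + fast
```

Under -jN this keeps the long-running tests from becoming the lonely tail at the end of the run, while the rest of the order stays randomized.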
Do all of the tests use all of the slowly built extensions, or could the test file be split into multiple files run separately, each of them faster?
See also https://bugs.python.org/issue46576 and #31015
Re: slow tests in the first half of the list: the same total amount of time is going to be spent regardless. In our test suite on a modern, fast 16-thread system, all but 10 tests are completed in parallel within the first 30 seconds. The remaining ~10 take 10x or more of that wall time. So the most latency you will shave off on a modern system is probably <30 seconds; on a slower system the magnitude of the saving stays the same in proportion. CI systems are not workstations. On a -j1 or -j2 system I doubt it will make a meaningful difference at all. Picture test execution as a utilization graph:
The total area under that curve is going to remain the same no matter what, so long as we execute everything. Reordering the tests can pull the final long tail in a bit by pushing out the top layer. You move more towards an optimal rectangle, but you're still limited by the area. **The less -jN parallelism you have as CPU cores, the less difference any reordering change makes.** What actual parallelism do our GitHub CI systems offer?

The fundamental problem is that we do a LOT in our test suite and have no concept of what depends on what and thus _needs_ to be run. So we run it all. For specialized tests like test_peg_generator and test_tools it should be easy to determine from a list of modified files whether those tests are relevant. That gets a lot more complicated to express accurately for things like test_multiprocessing and test_concurrent_futures.

test_peg_generator and test_tools are also *packages of tests* that should themselves be parallelized individually instead of being treated as a single serialized unit. At work we even shard test methods within TestCase classes so that big ones can be split across test executor tasks: see the _setup_sharding() function in absltest here: https://github.com/abseil/abseil-py/blob/main/absl/testing/absltest.py#L2368

In the absence of implementing an approach like that within test.regrtest to shard at a more granular level, thus enabling us to approach the golden rectangle of optimal parallel test latency, we're left with manually splitting long-running test modules/packages into smaller units to achieve a similar effect.
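For a sense of what method-level sharding looks like, here is a minimal sketch loosely modeled on absltest's approach. The TEST_TOTAL_SHARDS / TEST_SHARD_INDEX variable names mirror absltest; the round-robin bucketing is a simplification, not what absltest or regrtest actually does:

```python
import os
import unittest

def sharded_loader():
    """Return a TestLoader that keeps only this shard's test methods."""
    shard_index = int(os.environ.get("TEST_SHARD_INDEX", "0"))
    total_shards = int(os.environ.get("TEST_TOTAL_SHARDS", "1"))
    loader = unittest.TestLoader()
    get_names = loader.getTestCaseNames

    def sharded_names(test_case_class):
        # Deterministically assign each test method to exactly one shard.
        names = sorted(get_names(test_case_class))
        return [n for i, n in enumerate(names) if i % total_shards == shard_index]

    loader.getTestCaseNames = sharded_names
    return loader
```

Running the same suite in, say, four parallel CI tasks with TEST_TOTAL_SHARDS=4 and TEST_SHARD_INDEX set to 0 through 3 would then cover every method exactly once, which is how a long test package stops being a single serialized unit.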
If CI systems with decent parallelism are not available from GitHub (they seem stuck at 2-3 threads per https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners), an alternative approach could be to shard across multiple parallel CI tasks, such that each task gets only one of the slow tests. Unfortunately, if each of those were a line item in GitHub's poor CI UI, sharding a single config's tests across 5-10 tasks would be a nightmare to navigate on a PR; I expect everyone would hate that. Providing our own runners with decent parallelism could help: https://docs.github.com/en/actions/hosting-your-own-runners/about-self-hosted-runners. But we're always going to be bound by our longest-tail test if we don't make our test parallelism more granular.
test_peg_generator is significantly less of the long tail on optimized builds now. CI is extremely noisy performance-wise (times vary by 2-3x without any changes anyway), so I can't easily judge whether this made a notable difference in Windows CI latency.
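For context, the change being referred to amounts to not optimizing the throwaway C extensions that the test builds. A minimal sketch of that idea with setuptools follows; the module and source names are placeholders, and the exact flags CPython settled on may differ:

```python
import sys
from setuptools import Extension

# Optimizing throwaway test parsers buys nothing: the tests only check
# behavior, so ask the compiler for its fastest, unoptimized mode.
if sys.platform == "win32":
    disable_optimization = ["/Od"]   # MSVC
else:
    disable_optimization = ["-O0"]   # GCC/Clang

parser_extension = Extension(
    "parser_under_test",             # hypothetical module name
    sources=["parse.c"],             # hypothetical generated source
    extra_compile_args=disable_optimization,
)
```

Since the extensions are compiled dozens of times per run and discarded immediately, skipping optimization trades nothing of value for a large chunk of the compile time.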
Duration of the "Tests" step of #30890
Duration of the "Tests" step of #31096 (which includes Gregory's change):
test_peg_generator is way faster, and the total test duration is shorter, especially the maximum time: 21 min (before) => 10 min (after).
Thanks to Kumar for contributing the Windows compiler flags side of this (the point of this issue).