Enhance the timeit module: display average +- std dev instead of minimum #72427
Attached patch makes several changes to the timeit module:

I consider that these changes are well contained enough to still be OK for 3.6 beta 2, but I'm adding Ned Deily as CC to double check ;-) This patch is related to my work on Python benchmarks: the perf module runs benchmarks in multiple child processes to test different memory layouts (Linux uses ASLR by default) and different hash functions. It helps to get more stable benchmark results, but it's probably overkill for the tiny timeit module. By the way, the "pyperf timeit" command reuses the timeit module of the stdlib.

Note: the timeit module still uses the old getopt module, which is very strict. For example, "python3 -m timeit pass -v" is not recognized ("-v" is read as a statement that is part of the benchmark, not as --verbose). But I was too lazy to also modify this part; I may do it later ;-)
The perf module displays the median rather than the mean (arithmetic average). The difference between median and mean is probably too subtle for most users :-/ The term "median" is also probably unknown to most users... I chose to use the average: well known, easy to understand (we learn the formula at school: sum/count).
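To make the difference concrete, here is a minimal sketch using the stdlib statistics module (the sample timings are made up for illustration):

import statistics

# Illustrative per-run timings in seconds; the last run was hit by noise.
timings = [0.0213, 0.0214, 0.0215, 0.0213, 0.0331]

print("mean:   %.4f" % statistics.mean(timings))    # pulled up by the outlier
print("median: %.4f" % statistics.median(timings))  # robust to the outlier
print("stdev:  %.4f" % statistics.stdev(timings))   # spread around the mean

The mean here is ~0.0237 while the median stays at 0.0214, which is exactly the subtlety discussed above.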
Rationale:
That entirely depends on which benchmark you are running.
timeit was never meant to benchmark real applications; it is a micro-benchmarking tool.
Maciej Fijalkowski also sent me the following article a few months ago; it also explains indirectly why using the minimum for benchmarks is not reliable: "Virtual Machine Warmup Blows Hot and Cold". Even if the article is more focused on JIT compilers, it shows that benchmarks are not straightforward but always full of bad surprises. A benchmark doesn't have a single value but a *distribution*. The real question is how to summarize the full distribution without losing too much information. In the perf module I decided not to make a decision: a JSON file stores *all* data :-D But by default, perf displays mean +- std dev.
Another point: timeit is often used to compare performance between Python versions. By changing the behaviour of timeit in a given Python version, you'll make it more difficult to compare results.
I'm still not convinced that the average is the right statistic to use. Fundamentally, the problem with taking an average is that the timing errors are one-sided: noise can only make a run slower, never faster. If the errors were evenly divided into positive and negative, then on average they would cancel out and the mean would converge to the true time T. Unless you know that the average error is tiny compared to T, I don't think the mean tells you much about T itself.
I nearly always run with repeat=5, so I agree with this.
But that's just adding noise: you're not timing the code snippet, you're timing the snippet plus the garbage collector. I disagree with this change, although I would accept it if there was an option to restore the old behaviour.
That seems reasonable.
Shouldn't we use 10**6 or 1e6 rather than bitwise XOR? :-) This is aimed at Python programmers. We expect ** to mean exponentiation, and ^ to mean bitwise XOR.
Seems reasonable.
Hum, that's a good argument against my change :-) So to be able to compare Python 3.5 vs 3.6 or Python 2.7 vs Python 3.6, we would need to somehow backport the average feature to the timeit module of older Python versions. One option would be to put the timeit module on the Python Cheeseshop (PyPI). Hum, but there is already such a module: my perf module. A solution would be to redirect users to the perf module in the timeit documentation, and maybe also document that timeit results are not reliable?

A different solution would be to add a --python parameter to timeit to run the benchmark on a specific Python version (ex: "python3 -m timeit --python=python2 ..."). But this solution is more complex to develop, since we would have to make timeit.py compatible with Python 2.7 and find a reliable way to load it in the other tested Python program. Note: I plan to add a --python parameter to my perf module, but I haven't implemented it yet. Since my perf module spawns child processes and is a third-party module, it is simpler to implement this option there.

A more general remark: timeit is commonly used to compare the performance of two Python versions. People run timeit twice and then compare the results manually. But only two numbers are compared. It would be more reliable to compare all timings and make sure that the comparison is significant. Again, the perf module implements such a function. I didn't implement a full CLI for perf timeit to directly compare two Python versions: you have to run timeit twice, store all timings in JSON files, and then use the "perf compare" command to reload the timings and compare them.
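As an illustration of that workflow (the file name and JSON structure below are made up for the example and are not the perf format), raw timings can already be recorded per interpreter with the stdlib and compared later:

import json, sys, timeit

# Record raw timings for the running interpreter, to compare later.
timer = timeit.Timer("'-'.join(map(str, range(100)))")
number, _ = timer.autorange()                 # Python 3.6+: pick a loop count
raw = timer.repeat(repeat=5, number=number)

name = "timings-%d.%d.json" % sys.version_info[:2]
with open(name, "w") as f:
    json.dump({"per_loop_sec": [dt / number for dt in raw]}, f)

Running this under each interpreter produces one JSON file per version, leaving the actual significance test to a separate tool.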
This makes it hard to compare results with older Python versions.
For now a default timeit run takes from 0.8 to 8 seconds. Adding yet another 5 seconds makes the user angrier.
But this makes short microbenchmarks less stable.
This is good if you run a relatively slow benchmark, but it makes the result less reliable. You can always specify -n1, but at your own risk.
10^6 syntax doesn't look Pythonic. And this change breaks third-party scripts that run timeit.
Even "pass" takes at least 0.02 usec on my computer. What you want to measure that takes < 1 ns? I think timeit is just wrong tool for this. The patch also makes a warning about unreliable results output to stdout and always visible. This is yet one compatibility break. Current code allows the user to control the visibility of the warning by the -W Python option, and don't mix the warning with result output. |
Serhiy Storchaka added the comment:
Ah yes, I forgot that timeit uses powers of 10 to get a nice-looking number of loops.
Do you mean scripts parsing the timeit output (stdout)?
IMO 20 ns is more readable than 0.02 usec.
Even if timeit is not reliable, it *is* used to benchmark operations that take well under a microsecond.
Oh, I forgot to document this change. I made it because the old code …
We had a similar discussion a while back for pybench. Consensus then was to use the minimum as the basis for benchmarking: https://mail.python.org/pipermail/python-dev/2006-June/065525.html

I had used the average before this discussion, in pybench 1.0: https://mail.python.org/pipermail/python-announce-list/2001-November/001081.html

There are arguments both pro and con using min or avg values. I'd suggest to display all values and base the findings on all of them.
If we're going to go down that path, I suggest using something like: https://en.wikipedia.org/wiki/Five-number_summary But at this point, we're surely looking at 3.7?
Marc-Andre: "Consensus then was to use the minimum as basis for benchmarking: (...) There are arguments both pro and con using min or avg values." To be honest, I expect that most developer are already aware that minimum is evil and so I wouldn't have to convince you. I already posted two links for the rationale. It seems that you are not convinced yet, it seems like I have to prepare a better rationale :-) Quick rationale: the purpose of displaying the average rather than the minimum in timeit is to make timeit more *reliable*. My goal is that running timeit 5 times would give exactly the same result. With the current design of timeit (only run one process), it's just impossible (for different reasons listed in my first article linked on this issue). But displaying the average is less worse than displaying the minimum to make results more reproductible. |
On 23/09/2016 at 10:21, STINNER Victor wrote:
Why would it? System noise can vary from run to run.
I'm not sure I follow. The first link clearly says "So for better or worse, the choice of which one is better comes down to what we think the underlying distribution will be like." and it ends with "So personally I use the minimum when I benchmark.". http://blog.kevmod.com/2016/06/benchmarking-minimum-vs-average/

If we display all available numbers, people who run timeit can then see where things vary and possibly look deeper to find the reason. As I said and the above articles also underline: there are cases where min is better and others where avg is better. So in the end, having both numbers available gives you all the relevant information.

I focused on the average in pybench 1.0 and then switched to the minimum for pybench 2.0. Using the minimum resulted in more reproducible results, at least on the computers I ran pybench on, but do note that pybench 2.0 still prints out the average values as well. The latter is mostly due to some test runs I found where (probably due to CPU timers not working correctly) the min value sometimes dropped to very low values which did not really make sense compared to the average values.
I concur with all of Marc-Andre's comments. FWIW, when I do timings with the existing timeit, I use a repeat of 7. It gives stable and consistent results. The whole idea of using the minimum of those runs is to pick the least noisy measurement.
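That workflow maps directly onto the existing timeit API; a minimal sketch (the statement and loop count are only examples):

import timeit

# Best (least noisy) of 7 runs, as described above.
raw = timeit.repeat("sorted(range(1000))", repeat=7, number=10000)
print("best of 7: %.2f usec per loop" % (min(raw) / 10000 * 1e6))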
Until we have a consensus on this change and a final, reviewed patch, it is premature to consider inclusion in 3.6. If there is such a consensus prior to 3.6.0b2, we can reconsider.
Oh, cfbolz just modified timeit in PyPy to display the average (mean) and standard deviation. Moreover, PyPy timeit now displays the following warning: "WARNING: timeit is a very unreliable tool. use perf or something else for real measurements"
New changeset 3aba5552b976 by Victor Stinner in branch 'default':
New changeset 2dafb2f3e7ff by Victor Stinner in branch 'default':
New changeset 975df4c13db6 by Victor Stinner in branch 'default':
New changeset 4d611957732b by Victor Stinner in branch 'default':
New changeset c3a93069111d by Victor Stinner in branch 'default':
New changeset 40e97c9dae7a by Victor Stinner in branch 'default':
Steven D'Aprano:
IMO it's a lie to display the minimum timing with the garbage collector disabled. The garbage collector is enabled in all applications.
Steven D'Aprano:
Hum, with "10**6" syntax, I see a risk of typo: "10*6" instead of "10**6". I don't know if the x^y syntax is common or not, but I like it. LaTeX uses it for example. |
See the issue bpo-28469 "timeit: use powers of 2 in autorange(), instead of powers of 10" for a simple fix that reduces the total duration of the worst case (down to 2.4 seconds).
Serhiy Storchaka:
Sorry, I don't understand how running 1 iteration instead of 10 makes the benchmark less reliable. IMO the reliability is more impacted by the number of repetitions (-r). I changed the default from 3 to 5 repetitions, so timeit should be *more* reliable in Python 3.7 than in 3.6.
It's just a matter of formatting. IMO clocks have a precision good enough to display nanoseconds when the benchmark uses many iterations (which is the case by default, since autorange uses a minimum of 200 ms per benchmark).

Before:

$ python3.6 -m timeit 'pass'
100000000 loops, best of 3: 0.0339 usec per loop

After:

$ python3.7 -m timeit 'pass'
10000000 loops, best of 5: 33.9 nsec per loop

IMO "33.9" is more readable than "0.0339".
I'm disappointed by the discussion on minimum vs average. Using the perf module (python3 -m perf timeit), it's very easy to show that the average is more reliable than the minimum. The perf module runs 20 worker processes by default: with so many processes, it's easy to see that each process has a different timing because of the random address space layout and the randomized Python hash function.

Serhiy: "This makes it hard to compare results with older Python versions."

Serhiy is right. I see two options: display the average _and_ the minimum (which can be confusing for users!), or display the same warning as PyPy: "WARNING: timeit is a very unreliable tool. use perf or something else for real measurements". But since I'm grumpy now, I will just close the issue :-) I pushed enough changes to timeit for today ;-)
Caches. Not high-level caching that can make the measurement senseless, but low-level caching, for example memory caching, that can cause a small difference (but this difference can be larger than the effect that you measure). On every repetition you first run the setup code, and then run the testing code in loops. After the first loop the memory cache is filled with the used data, and the next loops can be faster. On the next repetition, running the setup code can unload this data from the memory cache, and the next loop will need to load it back from slow memory. Thus on every repetition the first loop is slower than the following ones. If you run 10 or 100 loops the difference can be negligible, but if you run only one loop, the result can differ by 10% or more.
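A rough sketch of this effect (the statement and sizes are made up, and the exact numbers depend entirely on the machine; this only illustrates why a single loop per repetition is risky):

import time

data = list(range(10**6))

for rep in range(3):
    # "Setup": touch a different large object, which can evict `data`
    # from the CPU caches.
    sum(list(range(10**6)))
    for loop in range(3):
        t0 = time.perf_counter()
        sum(data)                 # the measured statement
        dt = time.perf_counter() - t0
        print("rep %d loop %d: %.2f msec" % (rep, loop, dt * 1e3))

With -n1, only the (typically slower) first loop of each repetition would ever be measured.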
This is a senseless example. 0.0339 usec is not the time of executing "pass"; it is the overhead of the iteration. You can't use timeit to measure the performance of code that takes such a small time: you just can't get a reliable result for it. Even for code that takes an order of magnitude more time, the result is not very reliable. Thus there is no need to worry about timings much smaller than 1 usec.
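For what it's worth, a common (if fragile) workaround is to time the empty statement and subtract it; a sketch, with the caveat that both measurements stay noisy:

import timeit

n = 10**7
overhead = min(timeit.repeat("pass", repeat=5, number=n)) / n   # loop overhead
cost = min(timeit.repeat("x = 1", repeat=5, number=n)) / n      # statement + overhead
print("net: %.1f nsec" % ((cost - overhead) * 1e9))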
Serhiy Storchaka:
I will not argue about the reliability of the timeit module. It's common to see code snippets using timeit to benchmark very short operations.
It may be worth emitting a warning when timeit is used for code that is too short.
Serhiy: "It may be worth to emit a warning in case of using timeit for too short code." I suggest to simply warn users that timeit is not reliable at all :-) By the way, I close this issue, so I suggest you to open new issues if you want further enhancements. |
Serhiy Storchaka added the comment:
It seems like you give a time budget of less than 20 seconds to timeit. Well, I'm not really interested in timeit in the stdlib anymore since …
New changeset 4e4d4e9183f5 by Victor Stinner in branch 'default':
Wait, I didn't notice the change to the format of the raw timings. It looks like a regression to me.
Do you mean that some applications may run timeit as a CLI and parse stdout to get raw values? Why do that? timeit is a Python module; it's trivial to use its API instead of the CLI, no? I don't think that the CLI output is something that must never change.

master branch:

vstinner@apu$ ./python -m timeit -v '[1,2]*1000'
raw times: 310 msec, 313 msec, 308 msec, 303 msec, 304 msec
20000 loops, best of 5: 15.2 usec per loop

Python 3.6:

vstinner@apu$ python3 -m timeit -v '[1,2]*1000'

Hum, the timings of the calibration (xx loops -> ...) should use the same function to format time, so that they use ns, ms, etc.
Yes, that was my thought. But it seems you are right: it is easier to use Python as a programming language. In the past I used the CLI because the programming interface didn't support autoranging.

Although I would change the human-readable output to:

raw times (msec): 310 313 308 303 304

But it may be too late for 3.7.
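For completeness, a sketch of getting the raw times through the API instead of parsing stdout (the statement is the one from the example above; Timer.autorange() needs Python 3.6+):

import timeit

timer = timeit.Timer("[1,2]*1000")
number, _ = timer.autorange()
raw = timer.repeat(repeat=5, number=number)
print("raw times (msec):", " ".join("%.0f" % (dt * 1e3) for dt in raw))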
I updated the documentation in the 3.7 and master branches.