gh-131525: Remove _HashedSeq wrapper from lru_cache
#131922 (Merged)
As suggested in #131525 (comment), this PR removes the `_HashedSeq` wrapper from the Python-only `functools.lru_cache` implementation. Since tuple hashes are now cached (#131529), this wrapper is no longer needed. I ran some very quick benchmarks with the following IPython script to check whether this change has an impact on performance:
In this toy benchmark with trivial hashes, the change removes some overhead compared to main (which still uses `_HashedSeq`). In an example where hashes are artificially slow, this change would have caused a regression before #131529, but now performs equally well (4.14 s without tuple hash caching vs. 1.03 s with hash caching).
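The benchmark script itself isn't reproduced above; a minimal sketch of the slow-hash scenario (the names `SlowHashKey` and `double` below are my own illustration, not from the PR) could look like this. The instrumented `__hash__` stands in for an expensive hash and lets you observe how often the cache machinery re-hashes the key:

```python
from functools import lru_cache


class SlowHashKey:
    """Hypothetical key type whose __hash__ stands in for expensive work."""

    hash_calls = 0  # class-level counter for instrumentation

    def __init__(self, value):
        self.value = value

    def __hash__(self):
        type(self).hash_calls += 1  # count how often the key is hashed
        return hash(self.value)     # pretend this computation is slow

    def __eq__(self, other):
        return isinstance(other, SlowHashKey) and self.value == other.value


@lru_cache(maxsize=None)
def double(key):
    return key.value * 2


k = SlowHashKey(21)
double(k)  # cache miss: key tuple is hashed for lookup and insertion
double(k)  # cache hit: with tuple hash caching, the stored hash is reused
print(SlowHashKey.hash_calls)
```

The exact call count depends on the interpreter version and on whether the C or pure-Python `lru_cache` is in use, which is why the PR's timing numbers (4.14 s vs. 1.03 s) are the more meaningful comparison.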
I don't think a news entry is necessary since this only affects the Python-only implementation, but let me know if I should still add one.