perf: cache InvocationKey hash values at construction time #105

arirubinstein wants to merge 1 commit into open-policy-agent:main from
Conversation
Interestingly, this together with the other hash precompute patch does result in a small regression in our benchmarks; in particular, array iterations slow down quite a bit, whereas collection building improves a bit. Would you have a benchmark to add where this patch shines?

I believe the benefit of this PR in isolation is still valuable. However, I'm going to rewrite the implementation for #109 to use ordinal hashing instead, which should cut the allocation cost a bit. In addition, I will add a new benchmark to cover Memoization - Overlapping Rules, which should light that path up a bit more in the benchmark.
Signed-off-by: Ari Rubinstein <arirubinstein@users.noreply.github.com>
@arirubinstein I've assigned myself on this PR so that it won't go away on my notifications until it closes or merges. I'll see about getting a review pass in this afternoon.
philipaconrad left a comment
Thanks @arirubinstein for putting this PR together! 😄
I greatly appreciate the test cases (they make it easier to know things work as intended), and I agree with dropping the custom Equatable implementation; it's subsumed by the Hashable implementation that's already there.
@koponen Could you take a look at this PR, and see if we still need it?

It seems less critical now, as the current version of this structure has merely three (u)int32s in it. Let's re-evaluate if the hashing still pops up.

Thanks @koponen, that makes sense. Thanks @arirubinstein for putting together the original PR, but I think we'll close this one for the time being, and resurrect it if hashing becomes an issue again in profiles.
What code changed, and why?

Pre-compute and cache the hash value for `InvocationKey` when it is constructed, rather than re-hashing `funcName` and `args` on every dictionary lookup.

Why: `InvocationKey` is used as the key for the memoization cache (`MemoCache`). Each memo lookup calls `hash(into:)`, which traverses the `funcName` string and the entire `args` array of Operands. Since `InvocationKey` is immutable (all `let` fields) and typically looked up multiple times per construction, hashing the same data repeatedly is pure overhead.

By computing the hash once in `init()` and returning it from `hash(into:)`, dictionary lookups become O(1) integer operations instead of O(n) string+array traversals.
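The pattern described above can be sketched roughly as follows. This is a hypothetical illustration, not the PR's actual diff: the real `InvocationKey` holds `funcName` and an array of Operands, so `[String]` args here are a stand-in, and the field and initializer names are assumptions.

```swift
// Sketch of caching a Hashable key's hash at construction time.
struct InvocationKey: Hashable {
    let funcName: String
    let args: [String]          // stand-in for the real Operand array

    // Computed once in init(); valid forever because all fields are `let`.
    private let cachedHash: Int

    init(funcName: String, args: [String]) {
        self.funcName = funcName
        self.args = args
        var hasher = Hasher()
        hasher.combine(funcName)
        hasher.combine(args)
        self.cachedHash = hasher.finalize()
    }

    // O(1): feed only the precomputed value, instead of re-traversing
    // the string and the args array on every dictionary lookup.
    func hash(into hasher: inout Hasher) {
        hasher.combine(cachedHash)
    }

    // Equality still compares the underlying fields for correctness;
    // the cached hash lets it short-circuit obvious mismatches.
    static func == (lhs: InvocationKey, rhs: InvocationKey) -> Bool {
        lhs.cachedHash == rhs.cachedHash
            && lhs.funcName == rhs.funcName
            && lhs.args == rhs.args
    }
}
```

Note that equal keys still hash equally, since the cached value is derived purely from the fields that `==` compares, so the `Hashable` contract required by `Dictionary` keys is preserved.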