
Chained hash pipelining in array hashing #58252


Open

wants to merge 8 commits into master

Conversation

adienes
Member

@adienes adienes commented Apr 28, 2025

the proposed switch in #57509 from `3h - hash_finalizer(x)` to `hash_finalizer(3h - x)` should increase the hash quality of chained hashes, as the expanded expression goes from something like `sum((-3)^k * hash(x_k) for k in ...)` to a non-simplifiable composition
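a hedged illustration of that algebraic point (`fmix` is a stand-in murmur-style finalizer, not Base's actual one, and the step functions are sketches, not the real implementation):

```julia
# fmix: a stand-in 64-bit finalizer (murmur3-style), purely illustrative
fmix(h::UInt64) = (h ⊻= h >> 33; h *= 0xff51afd7ed558ccd; h ⊻ (h >> 33))

# old scheme: h ← 3h - fmix(hash(x)). element contributions combine
# linearly, so the whole chain collapses to a weighted sum:
old_step(h, x) = 3h - fmix(hash(x))

# new scheme: h ← fmix(3h - hash(x)). the finalizer wraps the running
# state, so the expansion is a nested, non-simplifiable composition:
new_step(h, x) = fmix(3h - hash(x))

# demonstrating the collapse of the old scheme over two elements:
h0 = 0x00000000deadbeef
x1, x2 = 7, 11
chained = old_step(old_step(h0, x1), x2)
flat    = 9h0 - 3fmix(hash(x1)) - fmix(hash(x2))   # same value (mod 2^64)
```

under the old scheme `chained == flat`, i.e. the recurrence simplifies to a linear combination of element hashes; no such collapse is available once the finalizer is applied at every step.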

this does have the unfortunate impact of making long chains of hashes a bit slower, as there is more data dependency and the CPU cannot start on the next element's hash before the previous one has been combined (I think --- I'm not particularly an expert on this low-level stuff). as far as I know this only really impacts AbstractArray

so, I've implemented a proposal that does some unrolling / pipelining manually to recover AbstractArray hashing performance. in fact, it's quite a lot faster now for most lengths. I tuned the thresholds (8 accumulators, certain length breakpoints) by hand on my own machine.
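a minimal sketch of the unrolling idea (the function name, seed mixing, and stream-combining step are illustrative, not the PR's exact code): keep 8 independent accumulators so the CPU can overlap the otherwise serial hash chains, then fold the streams together and finish any tail serially.

```julia
using Base.Cartesian: @nexprs

function pipelined_hash(A::AbstractVector, h::UInt)
    len = length(A)
    # 8 independent streams, seeded differently so they don't coincide
    @nexprs 8 i -> p_i = hash(i, h)
    n = 1
    @inbounds while n <= len - 7
        # 8 independent data-dependency chains the CPU can pipeline
        @nexprs 8 i -> p_i = hash(A[n + i - 1], p_i)
        n += 8
    end
    # combine the streams, then hash any remaining tail elements serially
    @nexprs 8 i -> h = hash(p_i, h)
    @inbounds while n <= len
        h = hash(A[n], h)
        n += 1
    end
    return h
end
```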

@adienes adienes added performance Must go faster hashing labels Apr 28, 2025
@adienes adienes mentioned this pull request Apr 28, 2025
@oscardssmith
Member

show performance benchmarks and then lgtm.

@adienes
Member Author

adienes commented Apr 28, 2025

#57509 (comment): the `vec` column contains timings; the data for this PR is under `:commit == "pipeline"`. sorry, I should have been clearer in that comment

graphically:

[two benchmark plot images]

note that this cannot merge before #57509, which is also waiting on #58053

@adienes
Member Author

adienes commented May 11, 2025

CI failure unrelated

@adienes
Member Author

adienes commented May 15, 2025

should this method be used for big Tuples as well?

@oscardssmith
Member

how does the Tuple perf look? if it only helps for 100 or more elements, I wouldn't bother. if it's useful in the 10-100 element range, I think we should

@adienes
Member Author

adienes commented May 16, 2025

eh, I don't think it helps. I guess tuples should mostly be hashed at compile time anyway

@adienes
Member Author

adienes commented May 21, 2025

another question popped up: should this be extracted to a different function (like `hash_array` or something) with a wider signature, with `hash(::AbstractArray)` forwarding to it? I ask because there are a few types around the ecosystem that could in theory be duck-typed into this method just fine, but they do not subtype AbstractArray. see BioSequences.jl for one such example, where hashing would be 5x faster via this method than that package's current `foldl`-based implementation.
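sketch of that split (`hash_array_like` and `SomeSequence` are hypothetical names, not from the PR): the hashing body moves to a function with a loose signature, and the AbstractArray method just forwards, so array-like containers outside the AbstractArray hierarchy can opt in.

```julia
# stand-in for the real pipelined body; only needs iteration over elements
function hash_array_like(A, h::UInt)
    for x in A
        h = hash(x, h)
    end
    return h
end

# Base would forward:
#     hash(A::AbstractArray, h::UInt) = hash_array_like(A, h)
# and a package type could opt in without subtyping AbstractArray:
#     Base.hash(s::SomeSequence, h::UInt) = hash_array_like(s, h)
```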

@oscardssmith
Member

seems reasonable

h = hash(elt, h)
end
return h
elseif len < 65536
Member

So if I'm understanding this right, this hashes every single element in arrays of up to 65536 elements, and then jumps to fib hashing, where at most approximately 4096*(log(65536/4096)+1) ≈ 15000 elements get hashed (I think). What does the discontinuity in perf look like there?
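checking that back-of-envelope count numerically (the formula is the reviewer's approximation, not the algorithm's exact skip schedule):

```julia
# dense hashing of the first ~4096 elements, then skips that grow
# roughly geometrically over the remaining length
n_dense = 4096
len = 65536
approx_hashed = n_dense * (log(len / n_dense) + 1)   # ≈ 1.5e4
```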

Member Author

I tried to choose the threshold as a power of 2 so that there's no discontinuity, or as little as possible, but of course that will depend on the specifics. in my case, that meant matching performance on Vector{Int} on an M1. for eltypes that are much slower to hash than Int (say, more than a few tens of ns per element), it could well be better to choose a lower threshold. the problem is I don't know how to determine those specifics in advance, so I just picked what I assumed to be the most common use case. maybe a fun target for PGO!

@mbauman
Member

mbauman commented May 21, 2025

The cutoffs in the original fib algorithm here aren't highly tuned themselves; I pulled them out of thin air similarly. So this needn't be super highly tuned; the core goal is good enough. I was just curious as the plots above show this discontinuity on master (at 8192 elements) but cut off before the new 65536 discontinuity hits.

One option that might be quite cheap would be to hash chunks of 8 consecutive elements at a time, even within the fib algo. That is, do the `@nexprs 8 i -> p_i = hash(A[n + i - 1], p_i)` thing at every iteration, and then increase `n` and the size of the fib skips by 8x. Then we could also just hash the key once per 8 elements, too. I suppose, though, that for most sparse matrices this would end up hashing one nonzero and seven zeros... and then effectively 8x fewer nonzeros would end up getting included in the hash (because skips would be 8x bigger).

But maybe a chunk size of 2 or 4 would be a good compromise. In fact, 2 would probably end up including about the same number of nonzeros in the hash for most sparse matrices (since we're already bouncing between finding zeros and nonzeros).
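a hypothetical sketch of that chunked-fib idea (`fib_chunked_hash` and its skip schedule are illustrative, not the real Base algorithm): hash `C` consecutive elements at each step and grow the skip distance `C` times faster.

```julia
function fib_chunked_hash(A::AbstractVector, h::UInt; C::Int = 2)
    len = length(A)
    n = 1
    fib_a, fib_b = 1, 1
    while n <= len
        for i in n:min(n + C - 1, len)   # hash a chunk of up to C elements
            h = hash(A[i], h)
        end
        fib_a, fib_b = fib_b, fib_a + fib_b   # advance the fib skip
        n += C * fib_a                        # skips grow C× faster
    end
    return h
end
```

with `C = 2` the number of hashed positions per unit length is roughly the same as single-element fib skipping, which is why it might keep a similar nonzero count for sparse inputs.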

n = 1
limit = len - 7
while n <= limit
@nexprs 8 i -> p_i = hash(A[n + i - 1], p_i)
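for readers unfamiliar with `Base.Cartesian`, a tiny standalone example of what the `@nexprs` line in the hunk above does: `@nexprs N f` emits `N` copies of the body with `i` substituted and `p_i` rewritten to the distinct variables `p_1, p_2, …`

```julia
using Base.Cartesian: @nexprs

@nexprs 3 i -> p_i = i^2
# now p_1 == 1, p_2 == 4, p_3 == 9, as if written out by hand
```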
@mbauman
Member

mbauman commented May 21, 2025

This is using linear indexing for all elements; I suspect it'll incur a pretty big perf hit for sparse and perhaps some other arrays. Sparse performance in the fib algorithm comes from both Cartesian indexing and a sparse specialization on the `findprev(!isequal(elt), A, keyidx)` search above.

Member Author

good point. I'll take a look at some sparse benchmarks
