
Scalar indexing for Base.hash #588

Open
nHackel opened this issue Mar 1, 2025 · 1 comment
nHackel commented Mar 1, 2025

Hello, I've recently noticed that GPUArrays don't define a Base.hash implementation and instead fall back to the default one. Calling hash on a GPU array therefore requires @allowscalar, which is slow, and it also means one has to treat CPU and GPU arrays differently when calling hash.

MWE:

```julia
using GPUArrays, CUDA

A = cu(rand(1024, 1024))

hash(A)                   # errors: scalar indexing is disallowed
@allowscalar Base.hash(A) # works, but slow
```

I'm not sure what a good implementation for GPU arrays would be. An inefficient GPU default could be:

```julia
Base.hash(arr::T, h::UInt) where {T <: AbstractGPUArray} =
    mapreduce(hash, hash, arr; init = hash(T, h))
```

That of course touches every element and reduces over UInt values, but it would still be faster than the scalar-indexing default.

From what I can tell, the default Base.hash for arrays only accesses O(log n) elements. I'm not sure how to neatly map such a pattern onto GPUs; if someone has any pointers, I'd be happy to implement it.
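For reference, the O(log n) behaviour can be illustrated on the CPU with a sampling scheme that walks the array with growing (Fibonacci-style) strides. This is a hypothetical sketch for illustration only, not Base's actual code, and `sampled_hash` is a made-up name:

```julia
# Illustration: hash the array shape plus O(log n) sampled elements,
# walking backwards from the end with Fibonacci-length skips.
# (Hypothetical sketch, not the real Base.hash implementation.)
function sampled_hash(A::AbstractArray, h::UInt = zero(UInt))
    h = hash(size(A), h)          # shape always contributes to the hash
    isempty(A) && return h
    i = lastindex(A)
    skip, nextskip = 1, 1
    while i >= firstindex(A)
        h = hash(A[i], h)         # sample one element
        i -= skip
        skip, nextskip = nextskip, skip + nextskip  # grow the stride
    end
    return h
end
```

For a 1024-element vector this touches only a couple of dozen elements, which is why the default is cheap on the CPU but a poor fit for a single GPU kernel: the samples form a serial, order-dependent chain.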

maleadt (Member) commented Mar 3, 2025

I don't think that fallback would work: CUDA.jl's mapreduce executes in a nondeterministic order and therefore requires an associative and commutative operator, which Base.hash is not.
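One way around the ordering constraint is to hash each (index, value) pair independently and combine the results with xor, which is associative and commutative, so any reduction order gives the same answer. This is a hedged sketch under the assumption that multi-argument mapreduce works for the array type in question; `unordered_hash` is a made-up name, and the hash quality of an xor combine is weaker than a chained hash:

```julia
# Sketch: an order-insensitive hash. Mixing the linear index into each
# per-element hash keeps the result sensitive to element positions even
# though the xor combine itself is commutative.
function unordered_hash(A::AbstractArray{T}, h::UInt = zero(UInt)) where T
    v = vec(A)
    mixed = mapreduce(xor, v, eachindex(v); init = zero(UInt)) do x, i
        hash(x, UInt(i))   # independent per-element hash, order-free
    end
    return hash(mixed, hash(T, hash(size(A), h)))  # fold in type and shape
end
```

On the CPU this runs as an ordinary reduction; on a GPU array the same mapreduce would (assuming GPUArrays' multi-argument mapreduce accepts the index range) run as a single device kernel with no ordering requirement.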
