
Adding support for Where #117

Merged: 2 commits into FluxML:master on Feb 5, 2025
Conversation

@dstarkenburg (Contributor) commented Feb 5, 2025

Adding support for the Where layer of ONNX

PR Checklist

  • Tests are added
  • Documentation, if applicable

src/ops.jl (Outdated)

            output[i] = y[i]
        end
    end
    return output
Collaborator:
Although this implementation works on CPU, it most likely won't work efficiently on GPU. How about using ifelse instead?

julia> ifelse.([true, false, true], [1, 1, 1], [2, 2, 2])
3-element Vector{Int64}:
 1
 2
 1
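A Where kernel built on this suggestion might look like the following sketch (the name `onnx_where` is illustrative, not the PR's actual function):

```julia
# Sketch of an ONNX Where op via broadcasting (hypothetical name).
# `ifelse.` evaluates elementwise, with no Julia-level control flow and no
# scalar indexing, so the same one-line definition works on both CPU and
# GPU arrays.
onnx_where(cond::AbstractArray{Bool}, x, y) = ifelse.(cond, x, y)

onnx_where([true, false, true], [1, 1, 1], [2, 2, 2])  # == [1, 2, 1]
```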

Contributor (Author):

I will go ahead and change this! Did my other implementations have similar issues? If so, should I go back and make sure there are no (or minimal) allocations in my implementations?

Collaborator:

No. Most Julia functions work equally well on CPU and GPU. Two notable exceptions are:

  1. Control flow at the Julia level.
  2. Scalar indexing.

Both have poor performance on GPU because of the GPU's batch-first design.

Although not a problem for GPU per se, I'd also add in-place updates to this list, because they cause quite some trouble for autodifferentiation frameworks.
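To illustrate the in-place point, the two styles might be contrasted roughly like this (function names are illustrative, not from the PR):

```julia
# In-place style: mutates a preallocated buffer via scalar indexing and
# Julia-level control flow. This is slow on GPU, and AD frameworks such as
# Zygote generally cannot differentiate through `setindex!` mutation.
function where_inplace!(output, cond, x, y)
    for i in eachindex(cond)
        output[i] = cond[i] ? x[i] : y[i]
    end
    return output
end

# Out-of-place style: allocates a fresh result via broadcasting; no mutation
# and no scalar indexing, so it is both GPU- and AD-friendly.
where_outofplace(cond, x, y) = ifelse.(cond, x, y)
```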

@dfdx (Collaborator) commented Feb 5, 2025

Now it looks good. Thank you!

@dfdx dfdx merged commit 6132a74 into FluxML:master Feb 5, 2025
2 checks passed