How is the execution configuration different in Warp than CUDA? #78
AkeelMedina22 started this conversation in General
Replies: 1 comment 2 replies
-
Hi @AkeelMedina22, Warp takes care of decomposing kernel thread grids into blocks for you; we currently use a fixed block dimension of 256 threads (regardless of single- or multi-dimensional launches). If we were to expose a way to set the block dimensions inside …
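The decomposition described above amounts to a ceil-divide of the launch dimension by the block size. A minimal sketch, assuming the fixed block size of 256 stated in the reply (`BLOCK_DIM` and `num_blocks` are illustrative names, not Warp internals):

```python
# Illustrative only: mimics the block decomposition described above.
BLOCK_DIM = 256  # fixed block size mentioned in the reply

def num_blocks(dim: int) -> int:
    """Ceil-divide the total thread count by the fixed block size."""
    return (dim + BLOCK_DIM - 1) // BLOCK_DIM

print(num_blocks(1000))  # 1000 threads -> 4 blocks of 256
```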
-
Hello everyone,
I'm new to CUDA and Warp, and I was wondering how concepts such as blocks and threads work in Warp. For example, for a simple dot product in CUDA I would use several blocks with an ideal number of threads per block, as suggested by the occupancy calculator. Would launching a multidimensional grid be appropriate to mimic this? Or does this happen automatically with a 1-dimensional execution configuration?