Switch from Global device-config to tensor-wise configuration #164
Labels: discussion · enhancement · good first issue · refactor
Now that almost all of the core functions are implemented as `torch` operations that can utilize the GPU, most runtime issues are fixed and the next bottleneck is memory (GPU RAM). It might therefore become necessary to give the user more control over which parts of a `Graph` object are stored on the CPU and which on the GPU. Although a global configuration is more convenient, it may become necessary to add `to(device)` methods to enable batch-wise computations.
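A minimal sketch of what such a method could look like, following the `torch.Tensor.to` / `torch.nn.Module.to` convention. The `Graph` attribute names here (`edge_index`, `node_features`) are assumptions for illustration, not the project's actual layout:

```python
import torch


class Graph:
    """Hypothetical stand-in for a graph object holding several tensors."""

    def __init__(self, edge_index: torch.Tensor, node_features: torch.Tensor):
        self.edge_index = edge_index
        self.node_features = node_features

    def to(self, device):
        # Move every tensor attribute to the requested device,
        # mirroring torch.Tensor.to semantics; returns self for chaining.
        self.edge_index = self.edge_index.to(device)
        self.node_features = self.node_features.to(device)
        return self


# Batch-wise usage: keep graphs on the CPU and move one graph (or
# mini-batch) to the GPU only while it is being processed, then move
# it back to release GPU memory.
g = Graph(torch.tensor([[0, 1], [1, 0]]), torch.randn(2, 4))
if torch.cuda.is_available():
    g.to("cuda")  # compute on GPU
    g.to("cpu")   # free GPU memory afterwards
```

This keeps the global configuration as the default while letting the user override placement per object when GPU RAM runs short.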