
Switch from Global device-config to tensor-wise configuration #164

Open
M-Lampert opened this issue Apr 10, 2024 · 0 comments · May be fixed by #187
Assignees: none
Labels: discussion (Open discussions), enhancement (New feature or request), good first issue (Good for newcomers), refactor (Change in the internal code structure but no change in functionality)

Comments

@M-Lampert (Contributor) commented:

Now that almost all of the core functions are implemented as torch operations that can utilize the GPU, most runtime issues are fixed and we run into the next bottleneck: memory (GPU RAM). It may therefore become necessary to give the user more control over which parts of a Graph object are stored on the CPU and which on the GPU. Although a global configuration is more convenient, we might need to add to(device) methods to enable batch-wise computations.
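
For illustration, here is a minimal sketch of what such a tensor-wise `to(device)` method could look like, mirroring PyTorch's own convention. The `Graph` class and its attributes below are hypothetical placeholders, not the actual API:

```python
import torch


class Graph:
    """Hypothetical minimal graph; attribute names are illustrative only."""

    def __init__(self, edge_index, node_attrs):
        self.edge_index = edge_index  # LongTensor of shape (2, num_edges)
        self.node_attrs = node_attrs  # dict mapping name -> torch.Tensor

    def to(self, device):
        """Move every tensor held by this graph to `device` and return self."""
        self.edge_index = self.edge_index.to(device)
        self.node_attrs = {name: t.to(device) for name, t in self.node_attrs.items()}
        return self


# Usage: move a graph to the GPU only for the duration of a batch computation.
g = Graph(torch.tensor([[0, 1], [1, 2]]), {"x": torch.rand(3, 8)})
g = g.to("cuda" if torch.cuda.is_available() else "cpu")
```

This would make device placement a per-object (or even per-tensor) decision instead of a single global setting, so individual batches can be moved to the GPU and back as memory allows.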

@M-Lampert added the enhancement, discussion and refactor labels on Apr 10, 2024
@IngoScholtes added the good first issue label on May 15, 2024
@jvpichowski linked pull request #187 on May 23, 2024 that will close this issue

Projects: none yet
Development: successfully merging a pull request may close this issue.

3 participants