How to calculate the used GPU memory for each part as in the paper? #36
Comments
Hi, we use …
Hi @KaiLv69, I tried to use the scripts you provided above, but the code example cannot run successfully. My example code is:

```python
from torch import nn
import torch
import profile, sys, threading  # profile.py is the profiler script from this repository

model = nn.Linear(20, 30).cuda()
criterion = nn.MSELoss().cuda()

memory_profiler = profile.CUDAMemoryProfiler(
    [model, criterion],
    filename='cuda_memory.profile'
)
sys.settrace(memory_profiler)
threading.settrace(memory_profiler)

inputs = torch.randn(1, 20, requires_grad=True).cuda()
output = model(inputs)
target = torch.ones(1, 30).cuda()
loss = criterion(output, target)
```

Then I run the Python file and get the following output:

```
/home/tiger/.local/lib/python3.9/site-packages/torch/cuda/memory.py:416: FutureWarning: torch.cuda.memory_cached has been renamed to torch.cuda.memory_reserved
  warnings.warn(
/home/tiger/.local/lib/python3.9/site-packages/torch/cuda/memory.py:424: FutureWarning: torch.cuda.max_memory_cached has been renamed to torch.cuda.max_memory_reserved
  warnings.warn(
/home/tiger/.local/lib/python3.9/site-packages/torch/distributed/distributed_c10d.py:293: UserWarning: torch.distributed.reduce_op is deprecated, please use torch.distributed.ReduceOp instead
  warnings.warn(
Exception ignored in: <function Library.__del__ at 0x7fea20392550>
Traceback (most recent call last):
  File "/home/tiger/.local/lib/python3.9/site-packages/torch/library.py", line 131, in __del__
  File "/home/tiger/code/diffusers/examples/text_to_image/profile.py", line 134, in __call__
TypeError: 'NoneType' object is not callable
```

Many people may not be familiar with these tools, so we would very much appreciate some more detailed examples.
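Until a more detailed example is available, one simple cross-check is to compute a component's parameter memory analytically and compare it with PyTorch's built-in allocator counters. The sketch below is mine, not from this repository; the helper name `bytes_of` is an illustrative assumption, and the CUDA section is guarded so the script also runs on a CPU-only machine:

```python
# Hedged sketch: estimate per-component memory by summing parameter sizes,
# then compare with the CUDA allocator's peak counter when a GPU is present.
import torch
from torch import nn

def bytes_of(module: nn.Module) -> int:
    """Total bytes occupied by a module's parameters."""
    return sum(p.numel() * p.element_size() for p in module.parameters())

model = nn.Linear(20, 30)          # 20*30 weights + 30 biases, fp32
print(bytes_of(model))             # (600 + 30) * 4 = 2520 bytes

if torch.cuda.is_available():
    model = model.cuda()
    torch.cuda.reset_peak_memory_stats()
    out = model(torch.randn(1, 20, device="cuda"))
    out.sum().backward()           # gradients add another bytes_of(model)
    # Peak allocation covers parameters, gradients, and activations:
    print(torch.cuda.max_memory_allocated())
```

Note that `torch.cuda.max_memory_allocated()` reports what the caching allocator handed out, which can exceed the analytic estimate because of allocator block rounding.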
Hi @QipengGuo @KaiLv69 @ayyyq,
Thanks for the nice work. I am wondering how to calculate the detailed GPU memory usage for each part as illustrated in the paper, such as the results in Table 1. What tools did you use for these measurements?
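One generic way to attribute GPU memory to the categories typically reported in such tables (parameters, activations, gradients, optimizer states) is to read `torch.cuda.memory_allocated()` before and after each stage of a training step. This is a hedged sketch and not the authors' measurement tool; the layer size, batch size, and the AdamW choice are illustrative assumptions:

```python
# Hedged sketch: stage-wise memory attribution via allocator deltas.
import torch
from torch import nn

def fmt_mib(n: int) -> str:
    """Format a byte count as mebibytes."""
    return f"{n / 2**20:.1f} MiB"

if torch.cuda.is_available():
    def snapshot(tag: str, prev: int) -> int:
        now = torch.cuda.memory_allocated()
        print(f"{tag}: {fmt_mib(now - prev)}")
        return now

    prev = torch.cuda.memory_allocated()
    model = nn.Linear(4096, 4096).cuda()
    prev = snapshot("parameters", prev)       # weight + bias tensors
    x = torch.randn(64, 4096, device="cuda")
    out = model(x)
    prev = snapshot("activations", prev)      # input + output buffers
    out.sum().backward()
    prev = snapshot("gradients", prev)        # one grad tensor per parameter
    opt = torch.optim.AdamW(model.parameters())
    opt.step()
    snapshot("optimizer states", prev)        # AdamW's exp_avg and exp_avg_sq
```

The deltas are only approximate for real training loops, since intermediate temporaries can be freed between snapshots; wrapping each stage with `torch.cuda.reset_peak_memory_stats()` and `max_memory_allocated()` gives a peak-based variant.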