user case on random images? #3
Comments
Hi, thank you for your issue. If your PyTorch version is <= 1.7.1, please use x, _ = torch.solve(ATB, ATA+jitter)
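For context, the two solver APIs take their arguments in opposite order, which is easy to trip over when upgrading. Below is a minimal sketch of the difference; ATA, ATB and jitter are placeholder tensors (shapes chosen to mirror the 1200-dimensional system in the error further down), not the values the repo actually builds:
import torch
# Placeholder tensors standing in for the batched system solved in vadepthnet.py.
ATA = torch.rand(2, 1200, 1200)          # batches of square coefficient matrices
ATB = torch.rand(2, 1200, 1)             # batches of right-hand sides
jitter = 1e-6 * torch.eye(1200)          # small diagonal term, broadcast over the batch
# PyTorch <= 1.7.1: torch.solve(B, A) solves A x = B and returns (solution, LU).
# x, _ = torch.solve(ATB, ATA + jitter)
# PyTorch >= 1.8: torch.linalg.solve(A, B) solves A x = B and returns only x,
# so the coefficient matrix comes first.
x = torch.linalg.solve(ATA + jitter, ATB)
print(x.shape)                           # torch.Size([2, 1200, 1])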
Thanks, it fixed my problem!
After I run your example, the following error occurs: Traceback (most recent call last): ... How can I solve this problem? Thank you!
Hi, please add 'model.eval()' before 'pdepth = model.forward(img)' |
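For anyone landing here later, a minimal sketch of the suggested change applied to the reproduction script below (the torch.no_grad() wrapper is an extra addition, not part of the fix quoted above):
model.eval()                   # run the network in inference mode
with torch.no_grad():          # optional: skip autograd bookkeeping during inference
    pdepth = model.forward(img)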
Thank you, it fixed my problem! I saw on KITTI's website that the code processes an image in 0.1 seconds, with the environment listed as 1 [email protected] Ghz (Python). My environment has pytorch-gpu v1.13.0, CUDA 11.7 and Python 3.9.7, and it takes about 0.2 seconds to process an image.
Hi, sorry for the confusion. In fact, I also use a GPU.
Thank you for your answer.
Thank you for your check! Please try the following code:
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = VADepthNet(max_depth=10,
totensor = ToTensor('test')
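The snippet above is cut off on this page, so only a rough sketch of the timing/GPU part under discussion follows; it assumes a loaded model and an input tensor img that are already on the same device, and that model.eval() has been called (see the reproduction script below for how the model is built):
import time
import torch
with torch.no_grad():
    model(img)                       # warm-up pass so lazy CUDA initialization doesn't skew the timing
    if device.type == 'cuda':
        torch.cuda.synchronize()     # make sure pending GPU work is finished before starting the clock
    start = time.time()
    pdepth = model(img)
    if device.type == 'cuda':
        torch.cuda.synchronize()     # wait for the forward pass to actually complete
    print('inference time: %.3f s' % (time.time() - start))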
Hi there! I was trying to test your wonderful model on a random image when I encountered this bug; could you please help me out?
Below is the code to reproduce the error:
import torch
from vadepthnet.networks.vadepthnet import VADepthNet
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("device: %s" % device)
model = VADepthNet(max_depth=2,
prior_mean=.6,
img_size=(480, 640))
model = torch.nn.DataParallel(model)
checkpoint = torch.load('vadepthnet_nyu.pth', map_location=device)
model.load_state_dict(checkpoint['model'])
img = torch.rand(1,3,480,640).to(torch.float32)
pdepth = model.forward(img)
print(pdepth)
And the error message gives:
Traceback (most recent call last):
File "/data2/zq/VA-DepthNet/test_img.py", line 25, in
pdepth = model.forward(img)
File "/home/zq/micromamba/envs/zoe/lib/python3.9/site-packages/torch/nn/parallel/data_parallel.py", line 169, in forward
return self.module(*inputs[0], **kwargs[0])
File "/home/zq/micromamba/envs/zoe/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/data2/zq/VA-DepthNet/vadepthnet/networks/vadepthnet.py", line 302, in forward
d = self.vlayer(x)
File "/home/zq/micromamba/envs/zoe/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/data2/zq/VA-DepthNet/vadepthnet/networks/vadepthnet.py", line 187, in forward
x, _ = torch.linalg.solve(ATB, ATA+jitter)
RuntimeError: linalg.solve: A must be batches of square matrices, but they are 1200 by 1 matrices
Could you add a use case for random-image inference, please? Thanks in advance!
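In the meantime, here is a rough self-contained sketch of random-image inference that simply combines the fixes mentioned in this thread (model.eval() before the forward pass, everything on one device); the constructor arguments are copied from the snippets above and may need adjusting to match your checkpoint:
import torch
from vadepthnet.networks.vadepthnet import VADepthNet
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# max_depth=10 follows the maintainer's snippet above; prior_mean and img_size
# are taken from the reproduction script and may need to match your checkpoint.
model = VADepthNet(max_depth=10,
                   prior_mean=.6,
                   img_size=(480, 640))
model = torch.nn.DataParallel(model)
checkpoint = torch.load('vadepthnet_nyu.pth', map_location=device)
model.load_state_dict(checkpoint['model'])
model.to(device)
model.eval()                    # the fix suggested above: run in inference mode
# Random input in place of a real image, just to exercise the forward pass.
img = torch.rand(1, 3, 480, 640, dtype=torch.float32, device=device)
with torch.no_grad():
    pdepth = model(img)
print(pdepth)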