
user case on random images? #3

Open · fatginger1024 opened this issue Jul 18, 2023 · 8 comments

@fatginger1024

Hi there! I was trying to test your wonderful model on a random image when I encountered this bug; could you please help me out?
Below is the code to reproduce the error:

```python
import torch
from vadepthnet.networks.vadepthnet import VADepthNet

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("device: %s" % device)

model = VADepthNet(max_depth=2,
                   prior_mean=.6,
                   img_size=(480, 640))
model = torch.nn.DataParallel(model)
checkpoint = torch.load('vadepthnet_nyu.pth', map_location=device)
model.load_state_dict(checkpoint['model'])

img = torch.rand(1, 3, 480, 640).to(torch.float32)
pdepth = model.forward(img)
print(pdepth)
```

And the error message reads:

```
Traceback (most recent call last):
  File "/data2/zq/VA-DepthNet/test_img.py", line 25, in <module>
    pdepth = model.forward(img)
  File "/home/zq/micromamba/envs/zoe/lib/python3.9/site-packages/torch/nn/parallel/data_parallel.py", line 169, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "/home/zq/micromamba/envs/zoe/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/data2/zq/VA-DepthNet/vadepthnet/networks/vadepthnet.py", line 302, in forward
    d = self.vlayer(x)
  File "/home/zq/micromamba/envs/zoe/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/data2/zq/VA-DepthNet/vadepthnet/networks/vadepthnet.py", line 187, in forward
    x, _ = torch.linalg.solve(ATB, ATA+jitter)
RuntimeError: linalg.solve: A must be batches of square matrices, but they are 1200 by 1 matrices
```

Could you add a usage example for inference on a random image, please? Thanks in advance!

@cnexah (Owner) commented Jul 18, 2023

Hi, thank you for your issue.
I think the problem is your PyTorch version, in this line:
https://github.com/cnexah/VA-DepthNet/blob/44061e1f5833eb59835b429850ff759b6bc23648/vadepthnet/networks/vadepthnet.py#L186C16-L186C21

If your version is <= 1.7.1, please use `x, _ = torch.solve(ATB, ATA + jitter)`.
If your version is >= 1.8.0, please use `x = torch.linalg.solve(ATA + jitter, ATB)`.
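
For reference, a version-agnostic wrapper along these lines is also possible. This is a minimal sketch (the helper name `solve_compat` is mine, not part of the repository):

```python
import torch

def solve_compat(A, B):
    """Solve the linear system A x = B across PyTorch versions."""
    linalg = getattr(torch, 'linalg', None)
    if linalg is not None and hasattr(linalg, 'solve'):
        return linalg.solve(A, B)   # PyTorch >= 1.8.0
    x, _ = torch.solve(B, A)        # PyTorch <= 1.7.1 (note the swapped argument order)
    return x

# Drop-in usage at vadepthnet/networks/vadepthnet.py line 187:
# x = solve_compat(ATA + jitter, ATB)
```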

@fatginger1024 (Author)

Thanks, it fixed my problem!

@dasda-asd

After I run the example above, the following error occurs:

```
Traceback (most recent call last):
  File "D:\xiaohe\VA-depthnet\VA-DepthNet-main\test.py", line 12, in <module>
    pdepth = model.forward(img)
  File "D:\xiaohe\anaconda\envs\zoedepth\lib\site-packages\torch\nn\parallel\data_parallel.py", line 169, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "D:\xiaohe\anaconda\envs\zoedepth\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\xiaohe\VA-depthnet\VA-DepthNet-main\vadepthnet\networks\vadepthnet.py", line 306, in forward
    var_loss = self.var_loss(x, d, gts)
  File "D:\xiaohe\anaconda\envs\zoedepth\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\xiaohe\VA-depthnet\VA-DepthNet-main\vadepthnet\networks\loss.py", line 38, in forward
    loss = loss + self.single(x, d, gts)
  File "D:\xiaohe\VA-depthnet\VA-DepthNet-main\vadepthnet\networks\loss.py", line 45, in single
    gt = gts.clone()
AttributeError: 'NoneType' object has no attribute 'clone'
```

How can I solve this problem? Thank you!

@cnexah (Owner) commented Sep 21, 2023

Hi, please add `model.eval()` before `pdepth = model.forward(img)`; in training mode the forward pass computes the variance loss, which needs ground-truth depth (here `gts` is `None`).
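
Concretely, continuing the snippet from the first comment, the fix goes right after loading the weights (a minimal sketch; the `torch.no_grad()` context is my addition and is not required for the fix):

```python
model.load_state_dict(checkpoint['model'])
model.eval()  # skip the training-only loss branch, which needs ground-truth depth

with torch.no_grad():  # optional: avoid tracking gradients at inference
    pdepth = model.forward(img)
```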

@dasda-asd

> Hi, please add `model.eval()` before `pdepth = model.forward(img)`

Thank you, it fixed my problem!

I saw on the KITTI website that the code processes an image in 0.1 seconds, with the environment listed as 1 core @ 2.5 GHz (Python). My environment has pytorch-gpu v1.13.0, CUDA 11.7, and Python v3.9.7, and it takes about 0.2 seconds to process an image.
How can I reduce the time it takes to process an image? It's important to me.
Could you describe the details of the 1 core @ 2.5 GHz (Python) environment configuration?

@cnexah (Owner) commented Sep 21, 2023

Hi, sorry for the confusion. In fact, I also use a GPU.
I think the running time depends on your GPU type.
Otherwise, I would suggest trying a smaller network or using mixed precision at evaluation (see the sketch below).
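
As an illustration of the mixed-precision suggestion, here is a minimal sketch using `torch.autocast` (available since PyTorch 1.10, so it applies to the 1.13.0 setup above); `model` and `img` are the objects from the earlier snippets, assumed to be on the GPU:

```python
import torch

model.eval()
# run the forward pass in float16 where safe; this can speed up inference on
# GPUs with tensor cores, possibly at a small cost in accuracy
with torch.inference_mode(), torch.autocast(device_type="cuda", dtype=torch.float16):
    pdepth = model(img)
```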

@dasda-asd

> Hi, sorry for the confusion. In fact, I also use a GPU. I think the running time depends on your GPU type. Otherwise, I would suggest trying a smaller network or using mixed precision at evaluation.

Thank you for your answer.
Could you give us example code to test an image (480 × 640 pixels)? I am a beginner and I am very interested in your thesis work.

@cnexah (Owner) commented Sep 26, 2023

Thank you for checking!
Sorry for the mistake again!

Please try the following code:
```python
import torch
from PIL import Image
import numpy as np
from vadepthnet.networks.vadepthnet import VADepthNet
from vadepthnet.dataloaders.dataloader import ToTensor

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("device: %s" % device)

model = VADepthNet(max_depth=10,
                   prior_mean=1.54,
                   img_size=(480, 640))
model = torch.nn.DataParallel(model)
checkpoint = torch.load('vadepthnet_nyu.pth', map_location=device)
model.load_state_dict(checkpoint['model'])
model.eval()

image_path = 'your_image.jpg'  # set this to a 480 x 640 RGB image
img = Image.open(image_path)
img = np.asarray(img, dtype=np.float32) / 255.0
#img = torch.from_numpy(img).cuda().unsqueeze(0)

totensor = ToTensor('test')
img = totensor.to_tensor(img)
img = totensor.normalize(img)
img = img.unsqueeze(0)

pdepth = model.forward(img)
print(pdepth)
```
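
To inspect the result visually, the predicted depth can also be saved as a grayscale image. A minimal sketch, assuming `pdepth` comes back as a `1 x 1 x 480 x 640` tensor (the shape is my assumption, matching the input resolution):

```python
depth = pdepth.squeeze().detach().cpu().numpy()  # -> (480, 640) array
# normalise to 0-255 purely for visualisation
vis = 255.0 * (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)
Image.fromarray(vis.astype(np.uint8)).save('depth_vis.png')
```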
