
output roi all zeros #25

Open

qchenclaire opened this issue Mar 11, 2019 · 4 comments
@qchenclaire

Hi, I tried to test ROIAlign on images with rois = roi_align(detections, boxes, box_index).
detections has shape torch.Size([1, 3, 271, 271]) and boxes looks like

tensor([[151.6779,  18.8237, 270.0000,  84.2876],
        [175.6971,   9.2199, 255.9987,  92.7847],
        [165.4188,   0.0000, 233.8400, 119.7061],
        [134.8676,  25.9375, 270.0000,  79.1737]], device='cuda:0')

and box_index looks like

tensor([0, 0, 0, 0], dtype=torch.int8, device='cuda:0')

The output shape is (4, 50, 50, 3), i.e. 4 cropped images, but only the first crop looks correct; the other three are all zeros.
[Attached images 0–3: the four output crops]

@longcw
Owner

longcw commented Mar 11, 2019 via email

The box index should be an IntTensor, not int8.

@qchenclaire
Author

After I changed it to torch.int, it works. Thanks!
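For reference, a minimal sketch of the fix under discussion, assuming the RoIAlign module from this repository with the (image, boxes, box_index) call used above; the import path follows the README and the tensor values are illustrative:

```python
import torch
from roi_align import RoIAlign  # this repository's module

# crop size matching the (4, 50, 50, 3) output mentioned above
roi_align = RoIAlign(50, 50)

detections = torch.rand(1, 3, 271, 271).cuda()
boxes = torch.tensor([[151.6779, 18.8237, 270.0000, 84.2876],
                      [175.6971,  9.2199, 255.9987, 92.7847]]).cuda()

# box_index must be an int32 tensor (torch.int), not int8; with int8 only
# the first crop is produced and the remaining crops come out all zero
box_index = torch.zeros(boxes.size(0), dtype=torch.int).cuda()

crops = roi_align(detections, boxes, box_index)
```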

@zxyyxzz

zxyyxzz commented Mar 28, 2021

The box index should be an IntTensor, not int8.

Hi,
I am using roi_align for some work, but I also ran into this problem. My code is below:

```python
import torch
from torchvision.ops import roi_align

output_size = (7, 7)

# input feature map: a 100-valued block at rows 53:89, columns 88:102
x = torch.zeros((1, 1, 117, 117), dtype=torch.float)
x[:, :, 53:89, 88:102] = 100.0

# one RoI: [batch_index, x1, y1, x2, y2]
rois = torch.tensor([
    [0.0, 53.0, 89.0, 88.0, 102.0],
])

ya = roi_align(x, rois, output_size, sampling_ratio=1)
print(ya)
```

The output is:

```
tensor([[[[0., 0., 0., 0., 0., 0., 0.],
          [0., 0., 0., 0., 0., 0., 0.],
          [0., 0., 0., 0., 0., 0., 0.],
          [0., 0., 0., 0., 0., 0., 0.],
          [0., 0., 0., 0., 0., 0., 0.],
          [0., 0., 0., 0., 0., 0., 0.],
          [0., 0., 0., 0., 0., 0., 0.]]]])
```

Can you help me?
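A possible explanation for the zeros in the snippet above, separate from the box_index issue: torchvision.ops.roi_align expects each RoI as [batch_index, x1, y1, x2, y2] in input coordinates (multiplied by spatial_scale, which defaults to 1.0 and is not passed above), with x along the width axis. The slice x[:, :, 53:89, 88:102] fills rows (y) 53–89 and columns (x) 88–102, so the box [0, 53, 89, 88, 102] barely overlaps the nonzero block. A sketch with the x/y order swapped, assuming that block is the intended region:

```python
import torch
from torchvision.ops import roi_align

# same input as above: a 100-valued block at rows (y) 53:89, columns (x) 88:102
x = torch.zeros((1, 1, 117, 117), dtype=torch.float)
x[:, :, 53:89, 88:102] = 100.0

# [batch_index, x1, y1, x2, y2]: the block spans (x1, y1) = (88, 53) to (x2, y2) = (102, 89)
rois = torch.tensor([[0.0, 88.0, 53.0, 102.0, 89.0]])

ya = roi_align(x, rois, output_size=(7, 7), sampling_ratio=1)
print(ya)  # values near 100 rather than all zeros
```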

@cswaynecool

I met the same problem; it happens when more than half of the GPU memory is in use.
