Not working in half precision #6

Open
tetelias opened this issue May 17, 2021 · 3 comments

tetelias commented May 17, 2021

The package installed correctly; grad_check runs without errors, as does the sample code. But when I try to use either native torch AMP or the original NVIDIA (apex) one, I receive the same error:

  File "/home/someone/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/someone/anaconda3/lib/python3.7/site-packages/torch/nn/modules/container.py", line 117, in forward
    input = module(input)
  File "/home/someone/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/someone/anaconda3/lib/python3.7/site-packages/carafe/carafe.py", line 250, in forward
    x = self.feature_reassemble(x, mask)
  File "/home/someone/anaconda3/lib/python3.7/site-packages/carafe/carafe.py", line 242, in feature_reassemble
    x = carafe(x, mask, self.up_kernel, self.up_group, self.scale_factor)
  File "/home/someone/anaconda3/lib/python3.7/site-packages/carafe/carafe.py", line 114, in forward
    group_size, scale_factor, routput, output)
RuntimeError: expected scalar type Half but found Float
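
A minimal sketch of how the error can surface under native torch AMP (not from the original report; the import path, class name, and constructor arguments below are assumptions based on the traceback and may differ in your build of the carafe package):

    import torch
    from carafe import CARAFEPack  # assumed export; check your installed package

    model = torch.nn.Sequential(
        torch.nn.Conv2d(16, 16, 3, padding=1),
        CARAFEPack(channels=16, scale_factor=2),  # argument names assumed
    ).cuda()

    x = torch.randn(1, 16, 32, 32, device='cuda')
    with torch.cuda.amp.autocast():
        # Under autocast the features reaching the custom CUDA op are float16,
        # while the softmax-normalized masks stay float32 (softmax is run in
        # float32 under autocast), so carafe_ext.forward raises
        # "RuntimeError: expected scalar type Half but found Float".
        y = model(x)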
@Joker9194

I had the same problem. Did you solve it?


Joker9194 commented Nov 20, 2021

I have solved it. The type of output is torch.float16, but the type of masks and rmasks is torch.float32, so the masks need to be cast to half precision:

    if features.is_cuda:
        masks = masks.type(torch.half)
        rmasks = rmasks.type(torch.half)
        carafe_ext.forward(features, rfeatures, masks, rmasks, kernel_size,
                           group_size, scale_factor, routput, output)

@Joker9194

I updated the code for the issue; it works for me:

        if features.is_cuda:
            # Cast the masks to half precision only when the input features
            # are themselves half-precision CUDA tensors.
            if features.type() == 'torch.cuda.HalfTensor':
                masks = masks.type(torch.half)
                rmasks = rmasks.type(torch.half)
            carafe_ext.forward(features, rfeatures, masks, rmasks, kernel_size,
                               group_size, scale_factor, routput, output)
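
An alternative sketch (not from the thread): match the mask dtype to the feature dtype instead of hard-coding half precision, so the same code path also works for float32 and bfloat16 inputs.

    if features.is_cuda:
        # Keep the masks in the same dtype as the features the CUDA op receives.
        if masks.dtype != features.dtype:
            masks = masks.to(features.dtype)
            rmasks = rmasks.to(features.dtype)
        carafe_ext.forward(features, rfeatures, masks, rmasks, kernel_size,
                           group_size, scale_factor, routput, output)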
