While encoding an image to the latent space using

latent = vae.encode(tfms.ToTensor()(input_im).unsqueeze(0).to(torch.float16).to(torch_device)*2-1)

it gave the error RuntimeError: Input type (torch.cuda.HalfTensor) and weight type (torch.HalfTensor) should be the same.

Since my graphics card only has 8 GB of VRAM, I converted the VAE to torch.float16. Is that the problem?

The whole error is:
RuntimeError                              Traceback (most recent call last)
Cell In[20], line 2
      1 # Encode to the latent space
----> 2 encoded = pil_to_latent(input_image)
      3 encoded.shape
      4 # Let's visualize the four channels of this latent representation:

Cell In[18], line 4, in pil_to_latent(input_im)
      1 def pil_to_latent(input_im):
      2     # Single image -> single latent in a batch (so size 1, 4, 64, 64)
      3     with torch.no_grad():
----> 4         latent = vae.encode(tfms.ToTensor()(input_im).type(torch.float16).unsqueeze(0).to(torch_device)*2-1) # Note scaling
      5     return 0.18215 * latent.latent_dist.sample()

File F:\Python 3.10.8\lib\site-packages\diffusers\models\vae.py:566, in AutoencoderKL.encode(self, x, return_dict)
    565 def encode(self, x: torch.FloatTensor, return_dict: bool = True) -> AutoencoderKLOutput:
--> 566     h = self.encoder(x)
    567     moments = self.quant_conv(h)
    568     posterior = DiagonalGaussianDistribution(moments)

File F:\Python 3.10.8\lib\site-packages\torch\nn\modules\module.py:1190, in Module._call_impl(self, *input, **kwargs)
   1186 # If we don't have any hooks, we want to skip the rest of the logic in
   1187 # this function, and just call forward.
   1188 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1189         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1190     return forward_call(*input, **kwargs)
   1191 # Do not call functions when jit is used
   1192 full_backward_hooks, non_full_backward_hooks = [], []

File F:\Python 3.10.8\lib\site-packages\diffusers\models\vae.py:130, in Encoder.forward(self, x)
    128 def forward(self, x):
    129     sample = x
--> 130     sample = self.conv_in(sample)
    132     # down
    133     for down_block in self.down_blocks:

File F:\Python 3.10.8\lib\site-packages\torch\nn\modules\module.py:1190, in Module._call_impl(self, *input, **kwargs)
   1186 # If we don't have any hooks, we want to skip the rest of the logic in
   1187 # this function, and just call forward.
   1188 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1189         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1190     return forward_call(*input, **kwargs)
   1191 # Do not call functions when jit is used
   1192 full_backward_hooks, non_full_backward_hooks = [], []

File F:\Python 3.10.8\lib\site-packages\torch\nn\modules\conv.py:463, in Conv2d.forward(self, input)
    462 def forward(self, input: Tensor) -> Tensor:
--> 463     return self._conv_forward(input, self.weight, self.bias)

File F:\Python 3.10.8\lib\site-packages\torch\nn\modules\conv.py:459, in Conv2d._conv_forward(self, input, weight, bias)
    455 if self.padding_mode != 'zeros':
    456     return F.conv2d(F.pad(input, self._reversed_padding_repeated_twice, mode=self.padding_mode),
    457                     weight, bias, self.stride,
    458                     _pair(0), self.dilation, self.groups)
--> 459 return F.conv2d(input, weight, bias, self.stride,
    460                 self.padding, self.dilation, self.groups)

RuntimeError: Input type (torch.cuda.HalfTensor) and weight type (torch.HalfTensor) should be the same
It looks like the VAE might not be on the GPU: the error says the input is a torch.cuda.HalfTensor but the conv weights are a (CPU) torch.HalfTensor. If keeping the VAE on the CPU is intentional, then you'll need to put the image on the CPU too (remove .to(torch_device) and that should do it) and only move the latents to the GPU after the VAE encode step. You'll also need to move the latents back onto the CPU at the end before decoding them to view the resulting image.
If you want the VAE on the GPU, check that you call vae.to(torch_device) somewhere.
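For the second option, here's a minimal sketch of an encode helper with everything on the GPU in float16. The checkpoint name and the from_pretrained arguments are assumptions (the issue doesn't say which VAE is loaded); the pil_to_latent body follows the cell in the traceback above:

import torch
import torchvision.transforms as tfms
from diffusers import AutoencoderKL

torch_device = "cuda"

# Assumption: the stable-diffusion-v1-4 VAE, loaded in half precision to fit
# in 8 GB of VRAM. Moving it to the GPU makes its weights torch.cuda.HalfTensor,
# matching the input tensor from the error message.
vae = AutoencoderKL.from_pretrained(
    "CompVis/stable-diffusion-v1-4", subfolder="vae", torch_dtype=torch.float16
).to(torch_device)

def pil_to_latent(input_im):
    # Single image -> single latent in a batch (so size 1, 4, 64, 64)
    with torch.no_grad():
        # Put the image on the same device and dtype as the VAE weights,
        # then scale from [0, 1] to [-1, 1] before encoding.
        x = tfms.ToTensor()(input_im).unsqueeze(0).to(torch_device, dtype=torch.float16)
        latent = vae.encode(x * 2 - 1)
    return 0.18215 * latent.latent_dist.sample()

For the CPU route instead, drop both .to(...) calls here and only move the returned latents to the GPU afterwards.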