/home/densogup-1/lyj/dev/venv/lib/python3.8/site-packages/torch/nn/modules/loss.py:529: UserWarning: Using a target size (torch.Size([64])) that is different to the input size (torch.Size([64, 384, 64, 64])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
return F.mse_loss(input, target, reduction=self.reduction)
Traceback (most recent call last):
File "/media/densogup-1/8T/jyp/myad/src/pretrained_teacher.py", line 93, in
loss.backward()
File "/home/densogup-1/lyj/dev/venv/lib/python3.8/site-packages/torch/_tensor.py", line 363, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "/home/densogup-1/lyj/dev/venv/lib/python3.8/site-packages/torch/autograd/__init__.py", line 173, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: Found dtype Long but expected Float
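Both messages point at the same call: `F.mse_loss` is being given a `[64, 384, 64, 64]` feature map as input and a `[64]` vector of integer labels as target. The `[64]` target silently broadcasts against the trailing dimension (also 64), producing the UserWarning, and because the labels are `Long`, autograd then fails with "Found dtype Long but expected Float". A minimal sketch of the problem and a fix, using hypothetical tensors with the shapes from the log (the actual training code in `pretrained_teacher.py` is not shown, so the spatial-mean reduction below is only one plausible way to make the shapes agree):

```python
import torch
import torch.nn.functional as F

# Hypothetical tensors with the shapes reported in the log:
# a [64, 384, 64, 64] feature map and a [64] vector of Long labels.
features = torch.randn(64, 384, 64, 64, requires_grad=True)
labels = torch.randint(0, 10, (64,))  # dtype Long

# F.mse_loss(features, labels) would broadcast [64] against the last
# dimension, emit the UserWarning, and then fail in backward() with
# "Found dtype Long but expected Float".

# Fix 1: MSE needs a floating-point target, so cast the labels.
labels_f = labels.float()

# Fix 2: make the shapes identical instead of relying on broadcasting,
# e.g. reduce each feature map to one scalar per sample (an assumption;
# the right reduction depends on what the loss is meant to compare).
per_sample = features.mean(dim=(1, 2, 3))  # shape [64]

loss = F.mse_loss(per_sample, labels_f)  # shapes and dtypes now match
loss.backward()                          # no dtype or size error
```

With matching `[64]` shapes and a float target, the warning disappears and `backward()` runs through the C++ engine without the dtype error.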
RuntimeError: The expanded size of the tensor (64) must match the existing size (16) at non-singleton dimension 3. Target sizes: [16, 384, 64, 64]. Tensor sizes: [16]
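This second error is the same shape mismatch hitting a hard failure instead of a warning: broadcasting aligns the single dimension of a `[16]` target with the trailing dimension of `[16, 384, 64, 64]` (size 64), and 16 ≠ 64, so the expand is rejected outright. A small sketch reproducing the message and showing how to line the batch axis up, assuming a per-sample scalar target is actually intended:

```python
import torch

# A [16] tensor cannot broadcast against [16, 384, 64, 64]: expansion
# aligns trailing dimensions, so its only dim (16) meets size 64.
t = torch.arange(16, dtype=torch.float32)

try:
    t.expand(16, 384, 64, 64)
except RuntimeError as e:
    # "The expanded size of the tensor (64) must match the existing
    #  size (16) at non-singleton dimension 3. ..."
    print(e)

# If one scalar per sample really is intended, insert singleton dims so
# the 16 lands on the batch axis, then expand over the feature dims:
t4 = t.view(16, 1, 1, 1).expand(16, 384, 64, 64)
```

Whether expanding the target or reducing the model output is correct depends on what the loss is supposed to measure; the expand above only removes the shape error.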