Hi, I was using your submission example notebook to generate my submission files and I ran into a couple of errors that I resolved manually. I wanted to check that these were the right fixes, and to ask whether you could update your code to account for them. Both issues came from the `model_predictions` function in `sensorium/sensorium/utility/submission.py`.
The first was `TypeError: model.forward got an unexpected keyword argument 'data_key'`, raised on line 29. I fixed this by simply removing `data_key=data_key, **batch_kwargs` from the `model()` call. I think this came from your example model having these arguments in `model.forward()`, but I just wanted to check.
The second error was `RuntimeError: Given groups=1, weight of size [64, 3, 11, 11], expected input[128, 1, 144, 256] to have 3 channels, but got 1 channels instead`. I fixed this one by adding `images = torch.cat([images, images, images], dim=1)` on line 20 to convert the input to 3 channels. I think an easier solution would be to open the images as RGB directly in the dataloader, or to generalize the input handling so it works for models trained on either grayscale or RGB.
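For reference, a minimal sketch of the channel fix (the helper name is mine, not from the repo; `expand` gives the same result as the `torch.cat` line but returns a view instead of allocating a new tensor):

```python
import torch

def to_three_channels(images: torch.Tensor) -> torch.Tensor:
    """Repeat a grayscale batch (N, 1, H, W) along the channel dim
    so it matches a conv layer trained on 3-channel input."""
    if images.shape[1] == 1:
        # expand creates a broadcasted view without copying;
        # torch.cat([images, images, images], dim=1) works too but copies
        images = images.expand(-1, 3, -1, -1)
    return images

x = torch.zeros(128, 1, 144, 256)
print(to_three_channels(x).shape)  # torch.Size([128, 3, 144, 256])
```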
Let me know if I broke anything else by doing this, or if there's a better workaround. Thanks!
Hi, thank you for raising these issues; we'll look into them.
A quick question: did these errors occur when you used your own model? You are right that there are assumptions baked into the model that we need to spell out more explicitly.
And I agree, it would be best to generalize the model's forward signature so that the `data_key` and the number of input channels are handled automatically.
Your solutions do seem straightforward and correct. The first issue could also be solved by adding `**kwargs` to the forward function, so it works with or without the `data_key` argument.
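Something along these lines (a toy sketch; the class name and layer sizes are made up and not the actual sensorium model):

```python
import torch
from torch import nn

class TolerantModel(nn.Module):
    """Toy model whose forward accepts and ignores extra keyword
    arguments such as data_key, so a caller can safely invoke
    model(images, data_key=data_key, **batch_kwargs)."""

    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)

    def forward(self, x, data_key=None, **kwargs):
        # data_key and any other batch kwargs are accepted but unused here
        return self.conv(x)

model = TolerantModel()
out = model(torch.zeros(2, 3, 16, 16), data_key="session1", trial_idx=0)
print(out.shape)  # torch.Size([2, 8, 16, 16])
```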