
trouble with batchsize #2

Open
algo-scope opened this issue Jan 13, 2022 · 3 comments

Comments


algo-scope commented Jan 13, 2022

When I use a bigger batch size of 8, I get an error at the last iteration of Epoch 0.

```
Traceback (most recent call last):
  File "/data/StereoSpike/train.py", line 201, in <module>
    warmup_chunks_left = warmup_chunks_left.view(batchsize, N_warmup * nfpdm, 2, 260, 346)
RuntimeError: shape '[8, 1, 2, 260, 346]' is invalid for input of size 1079520
```

Also, could you explain the meaning of the general parameters? Thanks a lot.

urancon (Owner) commented Jan 13, 2022

Hello, thank you for pointing out this bug!
It happens because the drop_last argument is set to False in the train dataloader. Set it to True and that should fix the problem; I've corrected this in a new commit.

`drop_last=True,`
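For the record, the size in the error message is consistent with a partial final batch. A minimal arithmetic sketch (the per-sample shape `(2, 260, 346)` and the factor `N_warmup * nfpdm = 1` are taken from the traceback above, not from the StereoSpike source):

```python
# Each sample in the batch is one warmup window of event frames with shape
# (2, 260, 346): two polarity channels at the DAVIS346 resolution.
elements_per_sample = 1 * 2 * 260 * 346   # N_warmup * nfpdm = 1 in this run

# The .view() call expected a full batch of 8 samples...
expected = 8 * elements_per_sample        # 1,439,360 elements

# ...but the tensor only held 1,079,520 elements, i.e. a partial last batch:
actual = 1_079_520
print(actual // elements_per_sample)      # 6 samples -> .view(8, ...) must fail
```

With `drop_last=True` the dataloader simply discards that incomplete 6-sample batch, so every batch reshapes cleanly.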

Concerning the meaning of the general parameters, I've also added some comments in train.py to help. Which one did you want to know more about?
Basically, the dataset is a continuous stream of events, with depth labels every 50 ms. I call a "chunk" the duration between two subsequent labels (i.e., 50 ms). Chunks are divided into nfpdm frames of smaller duration (say, 25 ms each if nfpdm=2); nfpdm stands for "number of frames per depth map". With N_inference you choose the number of training/testing chunks to grab at each dataloader iteration. Similarly, you can grab N_warmup warmup chunks, which are the chunks immediately preceding the train/test chunks.
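As a rough sketch of how those parameters relate to the tensor layout (the shape below is inferred from the `.view()` call in the traceback; `warmup_view_shape` is a hypothetical helper, not a function from the repository):

```python
def warmup_view_shape(batchsize, N_warmup, nfpdm, H=260, W=346):
    """Shape the training script reshapes warmup chunks into:
    (batch, frames in the warmup window, polarity channels, height, width).
    Assumption based on the .view() call in the traceback."""
    return (batchsize, N_warmup * nfpdm, 2, H, W)

# With nfpdm=2, each 50 ms chunk becomes two 25 ms frames:
print(warmup_view_shape(batchsize=4, N_warmup=1, nfpdm=2))
# (4, 2, 2, 260, 346)

# The failing run used batchsize=8, N_warmup=1, nfpdm=1:
print(warmup_view_shape(batchsize=8, N_warmup=1, nfpdm=1))
# (8, 1, 2, 260, 346)
```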

Did that answer your questions? Don't hesitate to ask if you need anything else!

algo-scope (Author) commented Jan 14, 2022

Thanks for your explanation!
I've run into a new problem: when I ran test.py, tqdm(test_data_loader) raised an IndexError:

```
Traceback (most recent call last):
  File "/data/StereoSpike/test.py", line 109, in <module>
    test_data_loader):
  File "/home/dell/anaconda3/envs/stereoSNN/lib/python3.6/site-packages/tqdm/std.py", line 1180, in __iter__
    for obj in iterable:
  File "/home/dell/anaconda3/envs/stereoSNN/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
    data = self._next_data()
  File "/home/dell/anaconda3/envs/stereoSNN/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 561, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "/home/dell/anaconda3/envs/stereoSNN/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/dell/anaconda3/envs/stereoSNN/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/dell/anaconda3/envs/stereoSNN/lib/python3.6/site-packages/torch/utils/data/dataset.py", line 363, in __getitem__
    return self.dataset[self.indices[idx]]
  File "/data/StereoSpike/datasets/MVSEC/mvsec_dataset.py", line 221, in __getitem__
    groundtruth = self.labels[index]  # 13
IndexError: index 1060 is out of bounds for axis 0 with size 1060
```

I think it's because the last index of SPLIT1_TEST_INDICES is 1060, which is out of bounds.
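One possible workaround until the split indices are fixed upstream: drop any index that falls outside the labels array before building the test subset. This is a hypothetical sketch, not the maintainer's fix; the contents of SPLIT1_TEST_INDICES below are invented for illustration, only the array size of 1060 comes from the error message.

```python
# Hypothetical workaround: filter out-of-range entries from the split indices.
labels_len = 1060                               # axis 0 size from the IndexError
SPLIT1_TEST_INDICES = list(range(1000, 1061))   # hypothetical range ending at 1060

# Since valid label indices are 0 .. labels_len - 1, clamp the split:
valid_test_indices = [i for i in SPLIT1_TEST_INDICES if i < labels_len]
print(max(valid_test_indices))                  # 1059, the last valid label index
```

Passing `valid_test_indices` instead of the raw split to `torch.utils.data.Subset` would avoid the crash, at the cost of silently dropping the offending sample.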

DongZeLiu commented
> Thanks for your explanation! I've run into a new problem: when I ran test.py, tqdm(test_data_loader) raised an IndexError: [...] I think it's because the last index of SPLIT1_TEST_INDICES is 1060, which is out of bounds.

Hi, I've run into the same problem. Have you fixed it?
