python train.py --image_batch 32 --video_batch 32 --use_infogan --use_noise --noise_sigma 0.1 --image_discriminator PatchImageDiscriminator --video_discriminator CategoricalVideoDiscriminator --print_every 100 --every_nth 2 --dim_z_content 50 --dim_z_motion 10 --dim_z_category 4 /slow/junyan/VideoSynthesis/mocogan/data/actions logs/actions
{'--batches': '100000',
'--dim_z_category': '4',
'--dim_z_content': '50',
'--dim_z_motion': '10',
'--every_nth': '2',
'--image_batch': '32',
'--image_dataset': '',
'--image_discriminator': 'PatchImageDiscriminator',
'--image_size': '64',
'--n_channels': '3',
'--noise_sigma': '0.1',
'--print_every': '100',
'--use_categories': False,
'--use_infogan': True,
'--use_noise': True,
'--video_batch': '32',
'--video_discriminator': 'CategoricalVideoDiscriminator',
'--video_length': '16',
'<dataset>': '/slow/junyan/VideoSynthesis/mocogan/data/actions',
'<log_folder>': 'logs/actions'}
/root/anaconda3/lib/python3.6/site-packages/torchvision/transforms/transforms.py:188: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.
"please use transforms.Resize instead.")
/slow/junyan/VideoSynthesis/mocogan/data/actions/local.db
Traceback (most recent call last):
  File "train.py", line 104, in <module>
    dataset = data.VideoFolderDataset(args['<dataset>'], cache=os.path.join(args['<dataset>'], 'local.db'))
  File "/slow/junyan/VideoSynthesis/mocogan/src/data.py", line 24, in __init__
    print(pickle.load(f))
EOFError: Ran out of input
Here is the code
import os
import pickle

import numpy as np
import torch.utils.data
import tqdm
from torchvision.datasets import ImageFolder


class VideoFolderDataset(torch.utils.data.Dataset):
    def __init__(self, folder, cache, min_len=32):
        dataset = ImageFolder(folder)
        self.total_frames = 0
        self.lengths = []
        self.images = []

        print(cache)
        if cache is not None and os.path.exists(cache):
            with open(cache, 'rb') as f:
                # This is data.py line 24, where the EOFError above is raised:
                # pickle.load fails when the cache file exists but is empty.
                print(pickle.load(f))
        else:
            for idx, (im, categ) in enumerate(
                    tqdm.tqdm(dataset, desc="Counting total number of frames")):
                img_path, _ = dataset.imgs[idx]
                # Each video is stored as a strip of square frames, so the
                # frame count is the long side divided by the short side.
                shorter, longer = min(im.width, im.height), max(im.width, im.height)
                length = longer // shorter
                if length >= min_len:
                    self.images.append((img_path, categ))
                    self.lengths.append(length)
            if cache is not None:
                with open(cache, 'wb') as f:
                    pickle.dump((self.images, self.lengths), f)

        self.cumsum = np.cumsum([0] + self.lengths)
        print("Total number of frames {}".format(np.sum(self.lengths)))
The accepted batch size depends on the dataset and your config.
The Weizmann Action Dataset has 72 videos, and since drop_last=True is set in both the image loader and the video loader, the maximum usable batch size is the dataset length.
To work around this, duplicate the data until it covers the batch size you need (e.g., for batch_size = 128, duplicating the 72 videos gives 72*2 = 144 > 128). Note that simply setting drop_last=False will not solve the issue.
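A minimal sketch of why no batches are produced (the dataset here is a stand-in for the MoCoGAN loaders):

import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.zeros(72, 3))  # 72 samples, like Weizmann
loader = DataLoader(dataset, batch_size=128, drop_last=True)
print(len(loader))  # 0 -- the single partial batch is dropped, nothing is yielded

With drop_last=False you would get one short batch of 72 samples, but the training code expects full batches of batch_size, which is presumably why duplicating the data is the suggested workaround.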
I solved the problem by editing data.py at line 22,
from: if cache is not None and os.path.exists(cache):
to: if cache is not None and os.path.exists(cache) and os.path.getsize(cache) != 0:
because the cache file may be a 0-byte file (created but never written), and pickle cannot read anything from it. (The os.path.exists check is kept, since os.path.getsize raises an error on a path that does not exist.)
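Applied in context, the patched branch of __init__ would look like the sketch below; restoring the assignment to self.images and self.lengths (in place of the debugging print) is an assumption based on the pickle.dump call in the same method:

if cache is not None and os.path.exists(cache) and os.path.getsize(cache) != 0:
    with open(cache, 'rb') as f:
        # Load the (images, lengths) index written by pickle.dump on a prior run.
        self.images, self.lengths = pickle.load(f)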
I tried to use it in Python 3.
However, this error is reported:
Here is the code