With sharding in Petastorm, i.e.:

with peta_conv_train_df.make_torch_dataloader(transform_spec=transform_func, num_epochs=1, batch_size=test_batch_size, cur_shard=curr_shard, shard_count=num_shards, reader_pool_type=pool_type) as reader:

is batch_size the batch size we want per GPU or for the whole cluster? That is, in the above, if test_batch_size = 64, does each shard get 64 rows per batch, or 64 / num_shards?
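For context, here is a minimal sketch of how this call is typically wired up with one shard per GPU. The use of Horovod (hvd.init, hvd.rank, hvd.size) is an assumption for illustration, and peta_conv_train_df, transform_func and pool_type are assumed to be defined as in the snippet above; this is not meant as a definitive answer to the batch_size question.

```python
# Sketch only: assumes Horovod for one-process-per-GPU training and that
# peta_conv_train_df, transform_func and pool_type are already defined
# as in the snippet above (they are not redefined here).
import horovod.torch as hvd

hvd.init()

num_shards = hvd.size()   # total number of workers / GPUs in the cluster
curr_shard = hvd.rank()   # this worker's shard index
test_batch_size = 64      # value used in the question

# Each worker opens its own reader over its assigned shard of the dataset.
with peta_conv_train_df.make_torch_dataloader(
        transform_spec=transform_func,
        num_epochs=1,
        batch_size=test_batch_size,
        cur_shard=curr_shard,
        shard_count=num_shards,
        reader_pool_type=pool_type) as reader:
    for batch in reader:
        pass  # train on this worker's batches here
```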