This repository has been archived by the owner on Sep 25, 2024. It is now read-only.

Use larger batch_size for more efficient computations #3

Open
bzamecnik opened this issue Dec 12, 2017 · 0 comments

Comments

@bzamecnik

A batch_size of 32 is too small to use the hardware efficiently. On a GPU with 8 GB of RAM, a batch_size of 256 is fine. Increasing the batch size from 64 to 256 on this model gave a 2.55x speedup. Note that even at smaller batch sizes, CuDNNGRU (#2) is more efficient than the ordinary GRU.
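A minimal sketch of why larger batches help: each call into the backend (kernel launch, Python dispatch) carries fixed overhead, so fewer, larger batches amortize it while producing the same result. The `run_epoch` helper and the toy weight matrix below are hypothetical, not part of this repo; in Keras the equivalent change is just passing `batch_size=256` to `model.fit(...)`.

```python
import numpy as np

def run_epoch(x, w, batch_size):
    """Forward-pass all samples in chunks of batch_size; returns stacked outputs.

    Fewer, larger chunks mean fewer backend calls for the same total work,
    which is the effect behind the 64 -> 256 speedup reported above.
    """
    outputs = []
    for start in range(0, len(x), batch_size):
        batch = x[start:start + batch_size]  # shape (<= batch_size, d)
        outputs.append(batch @ w)            # one matmul call per batch
    return np.concatenate(outputs)

x = np.random.rand(1024, 8)
w = np.random.rand(8, 4)

small = run_epoch(x, w, batch_size=32)   # 32 backend calls
large = run_epoch(x, w, batch_size=256)  # only 4 calls, identical result
assert np.allclose(small, large)
```

How large a batch fits is bounded by GPU memory (activations scale with batch size), so 256 is a reasonable value for an 8 GB card with this model, not a universal setting.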
