Currently, the torch DataLoader uses blocking data loading. Although loading itself is very fast (we store the NumPy arrays in memory), transfer to the GPU and data augmentation (which is done on the CPU) can slow things down.
Using num_workers > 0 would make data loading asynchronous, and num_workers > 1 could increase speed somewhat.
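A minimal sketch of what this could look like, assuming the in-memory NumPy arrays are wrapped in a standard TensorDataset (the array names, shapes, and the worker count below are illustrative, not taken from this repo):

```python
import numpy as np
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical in-memory arrays standing in for the project's data.
images = np.random.rand(1024, 1, 64, 64).astype(np.float32)
labels = np.random.randint(0, 10, size=1024)
dataset = TensorDataset(torch.from_numpy(images), torch.from_numpy(labels))

# num_workers > 0 spawns worker processes so batches (including CPU-side
# augmentation) are prepared while the GPU is busy; pin_memory=True allows
# faster, asynchronous host-to-device copies.
loader = DataLoader(
    dataset,
    batch_size=32,
    shuffle=True,
    num_workers=4,   # would become the user-configurable "number of jobs"
    pin_memory=True,
)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
for x, y in loader:
    # non_blocking=True overlaps the copy with computation when memory is pinned.
    x = x.to(device, non_blocking=True)
    y = y.to(device, non_blocking=True)
    # ... forward/backward pass here ...
```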
TODO:
Benchmark the speed gain from asynchronous data loading (see the timing sketch after this list)
Implement asynchronous data loading for all DataLoader objects
Add a user-input option to define the number of jobs
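For the benchmarking item, a rough timing harness could look like the following. It reuses the hypothetical `dataset` from the sketch above; actual numbers will depend on augmentation cost, batch size, and hardware, and on platforms that use the spawn start method the loop should be wrapped in an `if __name__ == "__main__":` guard.

```python
import time
import torch
from torch.utils.data import DataLoader

def time_loading(dataset, num_workers, batch_size=32):
    """Rough wall-clock time for one full pass over the dataset."""
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True,
                        num_workers=num_workers, pin_memory=True)
    start = time.perf_counter()
    for x, y in loader:
        # Include the host-to-device copy that the workers would overlap with.
        if torch.cuda.is_available():
            x = x.cuda(non_blocking=True)
    return time.perf_counter() - start

# Compare blocking loading (num_workers=0) against asynchronous settings.
for n in (0, 1, 2, 4):
    print(f"num_workers={n}: {time_loading(dataset, n):.2f}s")
```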