First, thanks for the awesome work of unifying the datasets!
I'm currently trying to train a model using datasets from the gs://gresearch/robotics repo. This worked very well for a long time, but over the last few days everything has slowed down considerably.
Is there currently heavy load on the Google servers, or is this a problem on my end?
I tried two different clusters and both showed bad results for tfds.benchmark:
```python
import tensorflow as tf
import tensorflow_datasets as tfds

tf.config.set_visible_devices([], "GPU")

builder = tfds.builder_from_directory(builder_dir="gs://gresearch/robotics/droid/1.0.0")
ds = builder.as_dataset(split="train[:95%]")
ds = ds.shuffle(1000)  # not needed here, but important for training, so I added it
ds = ds.batch(32).prefetch(buffer_size=tf.data.AUTOTUNE)

tfds.benchmark(ds, batch_size=256)

it = iter(ds)
for i in range(2):
    episode = next(it)
    # steps = list(episode['steps'])
print("done")
```
```
________________________________________
Examples/sec (First included) 182.94 ex/sec (total: 701440 ex, 3834.31 sec)
Examples/sec (First only) 2.71 ex/sec (total: 256 ex, 94.32 sec)
Examples/sec (First excluded) 187.48 ex/sec (total: 701184 ex, 3739.99 sec)
```
I also see bad results for "Computing dataset statistics" using the Octo dataloader for PyTorch (from ~200 it/s a few weeks ago down to ~1.66 s/it).
I just want to make sure whether this is a problem on my end (e.g. a change in TF versions or my network connection) or whether many people are currently loading data and it is simply heavy load.
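One way to tell these apart is to bypass tf.data entirely and time raw shard reads from the bucket. This is a minimal sketch, not from the original report: the `throughput_mb_per_s` helper and `benchmark_raw_reads` function are hypothetical names, and the glob pattern at the bottom is an assumption about the shard layout. If raw reads are also slow, the bottleneck is on the network/GCS side rather than in shuffling, batching, or decoding.

```python
import time

def throughput_mb_per_s(num_bytes, seconds):
    # Convert a byte count and elapsed wall-clock time to MB/s.
    return (num_bytes / 1e6) / seconds

def benchmark_raw_reads(pattern, max_files=4, chunk=16 * 1024 * 1024):
    # Read a few shards with tf.io.gfile and report raw GCS throughput.
    # Imported here so the helper above stays dependency-free.
    import tensorflow as tf
    total_bytes = 0
    start = time.perf_counter()
    for path in tf.io.gfile.glob(pattern)[:max_files]:
        with tf.io.gfile.GFile(path, "rb") as f:
            while data := f.read(chunk):
                total_bytes += len(data)
    return throughput_mb_per_s(total_bytes, time.perf_counter() - start)

# Hypothetical usage (pattern is a guess at the shard naming):
# print(benchmark_raw_reads("gs://gresearch/robotics/droid/1.0.0/*.tfrecord*"))
```

Comparing this number against the same script run a few weeks ago (or from a different network) would separate a bucket/network regression from a pipeline regression.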