I was trying to train the model with a dataset of shape 40x4096x4096x3 (NHWC), but the process was always killed, as shown in the following snapshot. This doesn't happen with a smaller dataset (10x4096x4096), where training completes without issue.
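For scale, here is a rough back-of-the-envelope of the raw array size; the dtype is an assumption (uint8 vs. float32/float64 changes the total several-fold):

```python
import numpy as np

# NHWC shape from the report above
n, h, w, c = 40, 4096, 4096, 3
elements = n * h * w * c  # ~2.0e9 values

# Footprint of a single in-memory copy for a few candidate dtypes
for dtype in (np.uint8, np.float32, np.float64):
    gib = elements * np.dtype(dtype).itemsize / 2**30
    print(f"{np.dtype(dtype).name}: {gib:.1f} GiB")
# uint8: 1.9 GiB, float32: 7.5 GiB, float64: 15.0 GiB
```

A single copy fits comfortably in 68 GB, so if the process is killed by the OOM killer, the pipeline is likely holding several copies at once (decode buffer, float conversion, augmented batches).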
The dataset was originally a single TIFF file and was converted to a zarr file using zarr.convenience.save(). It was then split into train, val, and test sets with multiscale_zarr_data_generator.py, and training was started via run.py.
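For reference, a minimal sketch of that conversion step, assuming the tifffile package and hypothetical file names (the actual script and paths may differ):

```python
import tifffile
import zarr

# Read the full multi-page TIFF stack into RAM as one numpy array
stack = tifffile.imread("dataset.tiff")
print(stack.shape, stack.dtype)  # expected: (40, 4096, 4096, 3)

# zarr.convenience.save writes the array out to a zarr store on disk
zarr.convenience.save("dataset.zarr", stack)
```

Note that tifffile.imread materializes the entire stack in memory before the zarr write happens.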
The compute system I used has 68 GB(?) of CPU memory and 16 GB of GPU memory, as shown in the following snapshot: