Hi,
According to Table 3 of the paper, the batch size should be 1024, while in the example command line you provided, it is set to 128.
Could you clarify which one is correct?
OMP_NUM_THREADS=1 torchrun --nnodes=1 --nproc_per_node=8 run_vqnsp_training.py \
    --output_dir ./checkpoints/vqnsp/ \
    --log_dir ./log/vqnsp/ \
    --model vqnsp_encoder_base_decoder_3x200x12 \
    --codebook_n_emd 8192 \
    --codebook_emd_dim 64 \
    --quantize_kmeans_init \
    --batch_size 128 \
    --opt adamw \
    --opt_betas 0.9 0.99 \
    --weight_decay 1e-4 \
    --warmup_epochs 10 \
    --epochs 100 \
    --save_ckpt_freq 20
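For what it's worth, one possible reconciliation (my assumption, not something stated in the paper or the README): if Table 3 reports the global batch size, then the per-process value of 128 combined with the 8 processes launched by --nproc_per_node=8 gives an effective batch size of 128 × 8 = 1024. Could you confirm whether that is the intended reading, or whether one of the two numbers should be changed?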