
vqnsp training batch size #38

Open
zeydabadi opened this issue Aug 19, 2024 · 1 comment
@zeydabadi
Hi,

Table 3 of the paper lists a batch size of 1024, but the example command line you provided sets --batch_size 128.
Could you clarify which one is correct?

OMP_NUM_THREADS=1 torchrun --nnodes=1 --nproc_per_node=8 run_vqnsp_training.py \
    --output_dir ./checkpoints/vqnsp/ \
    --log_dir ./log/vqnsp/ \
    --model vqnsp_encoder_base_decoder_3x200x12 \
    --codebook_n_emd 8192 \
    --codebook_emd_dim 64 \
    --quantize_kmeans_init \
    --batch_size 128 \
    --opt adamw \
    --opt_betas 0.9 0.99 \
    --weight_decay 1e-4 \
    --warmup_epochs 10 \
    --epochs 100 \
    --save_ckpt_freq 20

@935963004 (Owner)

We use 8 GPUs, so the total batch size is 8 × 128 = 1024. The --batch_size flag is the per-GPU batch size, and torchrun launches one process per GPU.
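The arithmetic can be sketched as follows; the helper function below is hypothetical (not part of this repo) and just mirrors how the torchrun flags combine with --batch_size:

```python
def effective_batch_size(per_gpu_batch_size: int, nnodes: int, nproc_per_node: int) -> int:
    """Total samples consumed per optimizer step across all DDP workers.

    Each of the nnodes * nproc_per_node processes loads its own
    per_gpu_batch_size samples, so the effective batch size is the product.
    """
    return per_gpu_batch_size * nnodes * nproc_per_node


# Values from the example command:
#   torchrun --nnodes=1 --nproc_per_node=8 ... --batch_size 128
print(effective_batch_size(128, nnodes=1, nproc_per_node=8))  # -> 1024
```

So the paper's Table 3 (1024) and the example command (128) are consistent: one reports the total, the other the per-GPU value.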
