Startup script example to run trainer on single GPU machine (small board)? #160
OK, I managed to get it to train. TL;DR: it seems NN training only starts after the clients have generated enough selfplay data to fill the minimum replay queue (--q_min_size). NOTE: the hyperparameters below are selected to force a fairly quick start of training updates on the neural network; they are probably useless for anything other than debugging.
diff --git a/scripts/elfgames/go/start_server.sh b/scripts/elfgames/go/start_server.sh
index 7f14334..8078bd8 100755
--- a/scripts/elfgames/go/start_server.sh
+++ b/scripts/elfgames/go/start_server.sh
@@ -21,12 +21,12 @@ save=./myserver game=elfgames.go.game model=df_kl model_file=elfgames.go.df_mode
--resign_thres 0.01 --gpu 0 \
--server_id myserver --eval_num_games 400 \
--eval_winrate_thres 0.55 --port 1234 \
- --q_min_size 200 --q_max_size 4000 \
+ --q_min_size 20 --q_max_size 400 --num_reader 4 \
--save_first \
- --num_block 20 --dim 256 \
+ --num_block 2 --dim 16 \
--weight_decay 0.0002 --opt_method sgd \
- --bn_momentum=0 --num_cooldown=50 \
+ --bn_momentum=0 --num_cooldown=2 \
--expected_num_client 496 \
--selfplay_init_num 0 --selfplay_update_num 0 \
--eval_num_games 0 --selfplay_async \
- --lr 0.01 --momentum 0.9 1>> log.log 2>&1 &
+ --lr 0.01 --momentum 0.9
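With those changes applied, the server can be launched directly. A minimal sketch, assuming you run from the repo's scripts/elfgames/go directory with the ELF build on PYTHONPATH as in the stock scripts (note the diff also drops the 1>> log.log 2>&1 & backgrounding, so the process now runs in the foreground and logs to the terminal):

# Single-GPU sanity run; assumes the patched start_server.sh above.
cd scripts/elfgames/go
bash ./start_server.sh   # runs in the foreground now; Ctrl-C to stop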
diff --git a/scripts/elfgames/go/start_client.sh b/scripts/elfgames/go/start_client.sh
index a716443..8bb2437 100755
--- a/scripts/elfgames/go/start_client.sh
+++ b/scripts/elfgames/go/start_client.sh
@@ -11,13 +11,13 @@ echo $PYTHONPATH $SLURMD_NODENAME $CUDA_VISIBLE_DEVICES
root=./myserver game=elfgames.go.game model=df_pred model_file=elfgames.go.df_model3 \
stdbuf -o 0 -e 0 python ./selfplay.py \
--T 1 --batchsize 128 \
- --dim0 256 --dim1 256 --gpu 0 \
+ --dim0 16 --dim1 16 --gpu 0 \
--keys_in_reply V rv --mcts_alpha 0.03 \
--mcts_epsilon 0.25 --mcts_persistent_tree \
--mcts_puct 0.85 --mcts_rollout_per_thread 200 \
--mcts_threads 8 --mcts_use_prior \
--mcts_virtual_loss 5 --mode selfplay \
- --num_block0 20 --num_block1 20 \
+ --num_block0 2 --num_block1 2 \
--num_games 32 --ply_pass_enabled 160 \
--policy_distri_cutoff 30 --policy_distri_training_for_all \
--port 1234 \
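Then, in a second terminal on the same machine, start a selfplay client against the server's port. A sketch under the same assumptions; the client's --dim0/--dim1 and --num_block0/--num_block1 presumably have to match the server's --dim and --num_block, which is why both diffs shrink them in lockstep:

# Second terminal, same machine; assumes the patched start_client.sh above.
cd scripts/elfgames/go
bash ./start_client.sh   # connects to --port 1234 and feeds selfplay games into the server's queue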
nice
Hi
Thank you for this amazing repository! Super useful.
I would like to step through both the server and client code on a single GPU machine to better understand it. Could you advise on which hyperparameters to look at first? My current plan is to reduce the NN size, the number of MCTS iterations, and the minimum queue sizes, but it's quite difficult to "guesstimate" what to change.
The issue is that the training scripts assume a large computational cluster is available. The default setup does not seem to perform a single NN update after 12h of "training"; the server just shows "Stats 550/0/0" slowly incrementing. I assume it is trying to fill up the minimum queue length before starting training.
Would it be possible to provide alternative startup scripts for a 9x9 or 5x5 board (3x3?), preferably with the minimum queue sizes and other hyperparameters adjusted so that the first training cycle kicks in within the first couple of minutes, in a way that still utilises the same code pathways as the normal 19x19 version?
Thanks again for the great work!