This repository has been archived by the owner on Dec 11, 2020. It is now read-only.

Startup script example to run trainer on single GPU machine (small board)? #160

Open
marcinbogdanski opened this issue Sep 10, 2019 · 2 comments

Comments

@marcinbogdanski

Hi

Thank you for this amazing repository! Super useful.

I would like to step through both the server and client code on a single-GPU machine to better understand the code. Could you advise which hyperparameters to look at first? My current plan is to reduce the NN size, the number of MCTS iterations, and the minimum queue sizes, but it's quite difficult to "guesstimate" what to change.

The issue is that the training scripts assume a large computational cluster is available. With the default setup, not a single NN update seems to happen after 12 hours of "training". The server shows "Stats 550/0/0" slowly incrementing; I assume it is trying to fill the minimum queue length before training starts.

Would it be possible to provide alternative startup scripts for a 9x9 or 5x5 board (3x3?), preferably with minimum queue sizes and other hyperparameters adjusted so that the first training cycle kicks in within the first couple of minutes, while still exercising the same code paths as the normal 19x19 version?

Thanks again for the great work!

@marcinbogdanski
Author

OK, I managed to get it to train.

TLDR: It seems NN training starts after the clients generate q_min_size * num_reader games. So the solution was to reduce q_min_size and num_reader, shrink the NN model, and recompile for a 9x9 board to speed up game generation.
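A quick back-of-envelope check of that threshold (shell arithmetic; the q_min_size=20 and num_reader=4 values come from the server diff below):

```shell
# The server should start training after roughly q_min_size * num_reader
# self-play games have been collected from the clients.
q_min_size=20
num_reader=4
echo $(( q_min_size * num_reader ))   # → 80 games before the first NN update
```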

NOTE: the hyperparameters below are chosen to force training updates on the neural network to start fairly quickly. These parameters are probably useless for anything other than debugging.

  1. Obviously, make sure the code base compiles and runs without any modifications first. Start the server, start a client, and confirm they connect and generate games. To run the server and clients on the same machine, set "myserver": ["127.0.0.1"] in server_addrs.py.
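For reference, a hedged sketch of what the localhost entry could look like (the exact structure and variable names inside server_addrs.py are assumptions here; check the actual file in the repo before editing):

```python
# server_addrs.py -- hypothetical shape: a mapping from server id to a
# list of addresses. Verify against the real file before editing.
addrs = {
    "myserver": ["127.0.0.1"],
}
```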

  2. Compile the code base for 9x9 Go, e.g. add set(BOARD9x9 TRUE) in CMakeLists.txt, then rebuild everything. You should see "Use 9x9 board" appear when compilation starts.
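The change can be sketched as the following CMakeLists.txt fragment (the exact placement is an assumption; put it before the game sources are added so the flag takes effect):

```cmake
# Compile for the 9x9 board; the build should print "Use 9x9 board"
# when this takes effect.
set(BOARD9x9 TRUE)
```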

  3. Change start_server.sh as follows:

diff --git a/scripts/elfgames/go/start_server.sh b/scripts/elfgames/go/start_server.sh
index 7f14334..8078bd8 100755
--- a/scripts/elfgames/go/start_server.sh
+++ b/scripts/elfgames/go/start_server.sh
@@ -21,12 +21,12 @@ save=./myserver game=elfgames.go.game model=df_kl model_file=elfgames.go.df_mode
     --resign_thres 0.01    --gpu 0 \
     --server_id myserver     --eval_num_games 400 \
     --eval_winrate_thres 0.55     --port 1234 \
-    --q_min_size 200     --q_max_size 4000 \
+    --q_min_size 20      --q_max_size 400    --num_reader 4  \
     --save_first     \
-    --num_block 20     --dim 256 \
+    --num_block 2      --dim 16 \
     --weight_decay 0.0002    --opt_method sgd \
-    --bn_momentum=0 --num_cooldown=50 \
+    --bn_momentum=0 --num_cooldown=2 \
     --expected_num_client 496 \
     --selfplay_init_num 0 --selfplay_update_num 0 \
     --eval_num_games 0 --selfplay_async \
-    --lr 0.01    --momentum 0.9     1>> log.log 2>&1 &
+    --lr 0.01    --momentum 0.9
  4. Change start_client.sh as follows:
diff --git a/scripts/elfgames/go/start_client.sh b/scripts/elfgames/go/start_client.sh
index a716443..8bb2437 100755
--- a/scripts/elfgames/go/start_client.sh
+++ b/scripts/elfgames/go/start_client.sh
@@ -11,13 +11,13 @@ echo $PYTHONPATH $SLURMD_NODENAME $CUDA_VISIBLE_DEVICES
 root=./myserver game=elfgames.go.game model=df_pred model_file=elfgames.go.df_model3 \
 stdbuf -o 0 -e 0 python ./selfplay.py \
     --T 1    --batchsize 128 \
-    --dim0 256    --dim1 256    --gpu 0 \
+    --dim0 16     --dim1 16     --gpu 0 \
     --keys_in_reply V rv    --mcts_alpha 0.03 \
     --mcts_epsilon 0.25    --mcts_persistent_tree \
     --mcts_puct 0.85    --mcts_rollout_per_thread 200 \
     --mcts_threads 8    --mcts_use_prior \
     --mcts_virtual_loss 5   --mode selfplay \
-    --num_block0 20    --num_block1 20 \
+    --num_block0 2     --num_block1 2 \
     --num_games 32    --ply_pass_enabled 160 \
     --policy_distri_cutoff 30    --policy_distri_training_for_all \
     --port 1234 \
  5. I got it to work as follows:
  • 1x server: ./start_server.sh
  • 6x clients: ./start_client.sh <- might work with fewer clients if you're short on RAM
  • after approximately 1 hour the server showed Stats: 159/0/0 and my breakpoint in MCTSPrediction.update() triggered
  6. My setup: i9 3.8 GHz 6-core, a single 2080 Ti, 48 GB of RAM. All fully utilized.

@l1t1

l1t1 commented Sep 19, 2019

nice
