Commit
1 parent 56beb38 · commit a19f31b
Showing 8 changed files with 870 additions and 1 deletion.
Binary file not shown.
@@ -0,0 +1,11 @@
defaults:
  - logger: base_logger
  - arch: anakin
  - system: q_learning/rec_r2d2
  - network: rnn_dqn
  - env: gymnax/cartpole
  - _self_

hydra:
  searchpath:
    - file://stoix/configs
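For context, Hydra composes the entries in the `defaults` list in order, with `_self_` marking where this file's own keys are merged in, so later entries (and the file itself, last here) override earlier ones. A rough sketch of that merge semantics in plain Python; the group contents and the `gamma` override below are illustrative placeholders, not Stoix's actual configs:

```python
# Minimal sketch of Hydra-style defaults-list composition: each entry in the
# defaults list contributes a config dict, merged in order (later wins).
# The config contents below are illustrative, not Stoix's real configs.

def deep_merge(base, override):
    """Recursively merge `override` into `base`, later values winning."""
    out = dict(base)
    for key, val in override.items():
        if key in out and isinstance(out[key], dict) and isinstance(val, dict):
            out[key] = deep_merge(out[key], val)
        else:
            out[key] = val
    return out

# Stand-ins for two of the config groups referenced in the defaults list.
configs = {
    "logger/base_logger": {"logger": {"use_tb": False}},
    "system/q_learning/rec_r2d2": {"system": {"gamma": 0.99}},
}

# Compose in defaults-list order; `_self_` is where this file's own keys land.
defaults = ["logger/base_logger", "system/q_learning/rec_r2d2", "_self_"]
self_keys = {"system": {"gamma": 0.997}}  # hypothetical local override

composed = {}
for entry in defaults:
    composed = deep_merge(composed, self_keys if entry == "_self_" else configs[entry])
```

Because `_self_` comes last, a key set in this file would win over the same key from a config group, which is the usual reason to place it at the end of the list.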
@@ -0,0 +1,19 @@
# --- Recurrent Structure Networks for DQN ---

actor_network:
  pre_torso:
    _target_: stoix.networks.torso.MLPTorso
    layer_sizes: [128]
    use_layer_norm: False
    activation: silu
  rnn_layer:
    _target_: stoix.networks.base.ScannedRNN
    cell_type: gru
    hidden_state_dim: 128
  post_torso:
    _target_: stoix.networks.torso.MLPTorso
    layer_sizes: [128]
    use_layer_norm: False
    activation: silu
  action_head:
    _target_: stoix.networks.heads.DiscreteQNetworkHead
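The config above stacks an MLP torso, a GRU, a second MLP torso, and a discrete Q-head. A minimal NumPy sketch of that forward pass, to show how the pieces compose; the weight shapes, CartPole-like sizes, and initialisation are illustrative assumptions, not Stoix's actual implementation:

```python
import numpy as np

def silu(x):  # the activation named in the config
    return x / (1.0 + np.exp(-x))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
obs_dim, hidden, num_actions = 4, 128, 2  # CartPole-like sizes (illustrative)

# Parameter shapes sketch the structure only (biases omitted for brevity).
W_pre = rng.normal(0, 0.1, (obs_dim, hidden))        # pre_torso: MLPTorso [128]
W_z = rng.normal(0, 0.1, (2 * hidden, hidden))       # GRU update gate
W_r = rng.normal(0, 0.1, (2 * hidden, hidden))       # GRU reset gate
W_n = rng.normal(0, 0.1, (2 * hidden, hidden))       # GRU candidate state
W_post = rng.normal(0, 0.1, (hidden, hidden))        # post_torso: MLPTorso [128]
W_q = rng.normal(0, 0.1, (hidden, num_actions))      # DiscreteQNetworkHead

def gru_cell(h, x):
    hx = np.concatenate([h, x])
    z = sigmoid(hx @ W_z)                            # update gate
    r = sigmoid(hx @ W_r)                            # reset gate
    n = np.tanh(np.concatenate([r * h, x]) @ W_n)    # candidate state
    return (1 - z) * n + z * h

def q_values(h, obs):
    x = silu(obs @ W_pre)        # pre_torso
    h = gru_cell(h, x)           # rnn_layer (cell_type: gru, hidden_state_dim: 128)
    y = silu(h @ W_post)         # post_torso
    return h, y @ W_q            # action_head: one Q-value per discrete action

h = np.zeros(hidden)                              # initial recurrent state
for obs in rng.normal(size=(5, obs_dim)):         # unroll over a short trajectory
    h, q = q_values(h, obs)
```

The key point for R2D2-style training is that the hidden state `h` is threaded through time, which is why sequences (with burn-in, see the system config) rather than single transitions are sampled from the buffer.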
@@ -0,0 +1,24 @@
# --- Defaults Rec-R2D2 ---

system_name: rec_r2d2 # Name of the system.

# --- RL hyperparameters ---
rollout_length: 4 # Number of environment steps per vectorised environment.
epochs: 128 # Number of SGD steps per rollout.
warmup_steps: 16 # Number of steps to collect before training.
total_buffer_size: 1_000_000 # Total effective size of the replay buffer across all devices and vectorised update steps. Each device holds a buffer of size total_buffer_size // num_devices, which is further divided by update_batch_size. Must be divisible by num_devices * update_batch_size.
total_batch_size: 512 # Total effective number of samples to train on. Each device uses a batch of size total_batch_size // num_devices, which is further divided by update_batch_size. Must be divisible by num_devices * update_batch_size.
burn_in_length: 40 # Number of steps used to burn in the recurrent state before computing the loss.
sample_sequence_length: 80 # Length of the sequences sampled from the buffer.
priority_exponent: 0.5 # Exponent for prioritised experience replay.
importance_sampling_exponent: 0.4 # Exponent for the importance-sampling weights.
priority_eta: 0.9 # Balance between max and mean priorities.
n_step: 5 # Number of transition steps used for the n-step return.
q_lr: 6.25e-5 # Learning rate of the Q-network optimiser.
tau: 0.005 # Smoothing coefficient for target networks.
gamma: 0.99 # Discount factor.
max_grad_norm: 0.5 # Maximum norm of the gradients for a weight update.
decay_learning_rates: False # Whether learning rates should be linearly decayed during training.
training_epsilon: 0.0 # Epsilon for the epsilon-greedy policy during training.
evaluation_epsilon: 0.0 # Epsilon for the epsilon-greedy policy during evaluation.
max_abs_reward: 1000.0 # Maximum absolute reward value.
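Several of these hyperparameters interact directly in the update: `n_step` and `gamma` define the bootstrapped return, `priority_eta` mixes the max and mean absolute TD errors of a sequence into a single replay priority (the R2D2-style scheme), and `tau` drives the soft target-network update. A hedged numeric sketch of that arithmetic; the reward, TD-error, and parameter values are made up for illustration:

```python
# Illustrative arithmetic for three of the hyperparameters above.
gamma, n_step, priority_eta, tau = 0.99, 5, 0.9, 0.005

# n-step return: sum of n discounted rewards plus a bootstrapped tail value.
rewards = [1.0, 0.0, 1.0, 0.0, 1.0]  # n_step = 5 rewards (made-up values)
bootstrap_q = 10.0                   # Q-value estimate at the n-th next state
n_step_return = sum(gamma**k * r for k, r in enumerate(rewards))
n_step_return += gamma**n_step * bootstrap_q

# R2D2-style sequence priority: eta * max + (1 - eta) * mean of |TD errors|.
td_abs = [0.5, 2.0, 1.0, 0.1]        # made-up absolute TD errors
priority = priority_eta * max(td_abs) + (1 - priority_eta) * (sum(td_abs) / len(td_abs))

# Soft target update: target params slowly track online params via tau.
online_param, target_param = 1.0, 0.0
target_param = tau * online_param + (1 - tau) * target_param
```

With `priority_eta: 0.9` the priority is dominated by the worst TD error in the sequence, so sequences containing even one surprising transition are replayed more often.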