
Check parameter instantiation #3

Open · alperyeg opened this issue Sep 14, 2020 · 1 comment

alperyeg commented Sep 14, 2020

Across the optimizers the parameter naming is not standardized: each optimizer defines its own namedtuple of parameters, and its field names do not always match the class arguments. There should be an easier way to create parameters.
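
To make the mismatch concrete, here is a minimal sketch of the current pattern, assuming parameter namedtuples with the field names from the lists below (the exact definitions in the code may differ):

```python
from collections import namedtuple

# Hypothetical, simplified parameter namedtuples; the exact
# definitions in the repository may differ.
CrossEntropyParameters = namedtuple(
    'CrossEntropyParameters',
    ['pop_size', 'rho', 'smoothing', 'temp_decay',
     'n_iteration', 'distribution', 'stop_criterion', 'seed'])

GeneticAlgorithmParameters = namedtuple(
    'GeneticAlgorithmParameters',
    ['seed', 'popsize', 'CXPB', 'MUTPB', 'NGEN',
     'indpb', 'tournsize', 'matepar'])

# The same concept is spelled pop_size in one optimizer and popsize
# in another, and every field has to be passed explicitly, with no
# defaults and no validation:
ce_params = CrossEntropyParameters(
    pop_size=50, rho=0.2, smoothing=0.0, temp_decay=0.0,
    n_iteration=100, distribution=None,
    stop_criterion=float('inf'), seed=42)
```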

alperyeg commented:

Optimizer Parameters

Crossentropy

  • pop_size
  • rho: elite fraction
  • smoothing: smoothing factor applied when updating the distribution parameters
  • temp_decay: decay factor for the temperature
  • n_iteration: number of iterations to perform
  • distribution: distribution used to sample new individuals
  • stop_criterion
  • seed
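
As a reading aid, a hedged sketch of how rho, smoothing and temp_decay typically interact in a cross-entropy update over a Gaussian (the actual implementation may differ):

```python
import numpy as np

# Illustrative only; not the repository's implementation.
def ce_update(samples, fitnesses, mean, std, rho, smoothing, temp_decay,
              generation):
    n_elite = max(1, int(round(rho * len(samples))))    # rho: elite fraction
    elite = samples[np.argsort(fitnesses)[-n_elite:]]   # keep the best
    new_mean = elite.mean(axis=0)
    new_std = elite.std(axis=0) * temp_decay ** generation  # temperature decay
    # smoothing blends the old distribution parameters with the new ones
    mean = smoothing * mean + (1 - smoothing) * new_mean
    std = smoothing * std + (1 - smoothing) * new_std
    return mean, std
```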

Evolution

  • seed
  • popsize
  • CXPB: crossover probability
  • MUTPB: mutation probability
  • NGEN: number of generations
  • indpb: per-element mutation probability within an individual
  • tournsize: size of the tournament
  • matepar: blending parameter used when mating two values
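
The uppercase names follow DEAP's conventions, which is presumably where this parameter set comes from. A hedged sketch of how they map onto a DEAP toolbox (illustrative values, not the repository's setup):

```python
import random
from deap import algorithms, base, creator, tools

# Illustrative values only.
popsize, CXPB, MUTPB, NGEN = 50, 0.5, 0.2, 40
indpb, tournsize, matepar = 0.05, 3, 0.5

creator.create('FitnessMax', base.Fitness, weights=(1.0,))
creator.create('Individual', list, fitness=creator.FitnessMax)

toolbox = base.Toolbox()
toolbox.register('attr', random.random)
toolbox.register('individual', tools.initRepeat, creator.Individual,
                 toolbox.attr, n=10)
toolbox.register('population', tools.initRepeat, list, toolbox.individual)
toolbox.register('evaluate', lambda ind: (sum(ind),))
toolbox.register('mate', tools.cxBlend, alpha=matepar)          # matepar
toolbox.register('mutate', tools.mutGaussian, mu=0.0, sigma=1.0,
                 indpb=indpb)                                   # indpb
toolbox.register('select', tools.selTournament, tournsize=tournsize)

pop = toolbox.population(n=popsize)
pop, _ = algorithms.eaSimple(pop, toolbox, cxpb=CXPB, mutpb=MUTPB,
                             ngen=NGEN, verbose=False)
```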

EvolutionStrategies

  • learning_rate
  • noise_std
  • mirrored_sampling_enabled
  • fitness_shaping_enabled
  • pop_size
  • n_iteration
  • stop_criterion
  • seed
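
A hedged sketch of what the two boolean flags usually mean in an evolution-strategies step (my reading; the repository's code may differ):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative only; not the repository's implementation.
def es_step(theta, fitness_fn, pop_size, noise_std, learning_rate,
            mirrored_sampling_enabled=True, fitness_shaping_enabled=True):
    noise = rng.standard_normal((pop_size, theta.size))
    if mirrored_sampling_enabled:
        # evaluate each perturbation together with its negation
        noise = np.concatenate([noise, -noise])
    fitness = np.array([fitness_fn(theta + noise_std * n) for n in noise])
    if fitness_shaping_enabled:
        # replace raw fitnesses with rank-based utilities
        ranks = fitness.argsort().argsort()
        fitness = ranks / (len(ranks) - 1) - 0.5
    grad = (fitness[:, None] * noise).mean(axis=0) / noise_std
    return theta + learning_rate * grad
```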

FACE

  • min_pop_size
  • max_pop_size
  • n_elite
  • smoothing
  • temp_decay
  • n_iteration
  • distribution
  • stop_criterion
  • n_expand: amount by which the sample size is increased

Gradient Descent

Classic GD

  • learning_rate
  • exploration_step_size: standard deviation of the random steps used
  • n_random_steps: number of random steps used to estimate the gradient
  • n_iteration
  • stop_criterion
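
A hedged sketch of what these two parameters plausibly control: estimating the gradient from objective evaluations at random perturbations (a least-squares fit here; the repository's estimator may differ):

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_gradient(f, x, exploration_step_size, n_random_steps):
    # Draw random perturbations with the given standard deviation ...
    steps = exploration_step_size * rng.standard_normal(
        (n_random_steps, x.size))
    # ... measure the resulting change in the objective ...
    df = np.array([f(x + s) - f(x) for s in steps])
    # ... and fit df ≈ steps @ grad by least squares.
    grad, *_ = np.linalg.lstsq(steps, df, rcond=None)
    return grad
```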

Stochastic GD

  • learning_rate
  • stochastic_deviation: standard deviation of the random vector used to perturb the gradient
  • stochastic_decay: decay of the influence of the random vector added to the gradient
  • exploration_step_size
  • n_random_steps
  • n_iteration
  • stop_criterion

Adam

  • learning_rate
  • exploration_step_size
  • n_random_steps
  • first_order_decay: decay rate of the historic first-order moment estimate per gradient descent step
  • second_order_decay: decay rate of the historic second-order moment estimate per gradient descent step
  • n_iteration
  • stop_criterion
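
For reference, a hedged mapping of the two decay parameters onto the standard Adam moment updates (the repository may parameterize this differently):

```python
import numpy as np

# t is the 1-based step counter used for bias correction.
def adam_step(grad, m, v, t, learning_rate,
              first_order_decay=0.9, second_order_decay=0.999, eps=1e-8):
    m = first_order_decay * m + (1 - first_order_decay) * grad         # 1st moment
    v = second_order_decay * v + (1 - second_order_decay) * grad ** 2  # 2nd moment
    m_hat = m / (1 - first_order_decay ** t)
    v_hat = v / (1 - second_order_decay ** t)
    step = learning_rate * m_hat / (np.sqrt(v_hat) + eps)
    return step, m, v  # subtract step from the parameters to descend
```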

RMSPROP

  • learning_rate
  • exploration_step_size
  • n_random_steps
  • momentum_decay
  • n_iteration
  • stop_criterion
  • seed

Gridsearch

  • param_grid: one (lower_bound, higher_bound, n_steps) tuple per parameter
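
A minimal sketch of how such a param_grid expands into the full grid, with hypothetical parameter names:

```python
import itertools
import numpy as np

# Each entry maps a parameter to (lower_bound, higher_bound, n_steps).
param_grid = {'x': (-1.0, 1.0, 5), 'y': (0.0, 2.0, 3)}

axes = {name: np.linspace(lo, hi, n)
        for name, (lo, hi, n) in param_grid.items()}
# Cartesian product of the axes: 5 * 3 = 15 grid points in total.
grid_points = [dict(zip(axes, values))
               for values in itertools.product(*axes.values())]
```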

NaturalEvolutionStrategies

  • learning_rate_mu
  • learning_rate_sigma
  • mu
  • sigma
  • mirrored_sampling_enabled
  • fitness_shaping_enabled
  • pop_size
  • n_iteration
  • stop_criterion
  • seed

ParallelTempering

  • n_parallel_runs: number of parallel Simulated Annealing runs
  • noisy_step: size of the random step
  • decay_parameters
  • n_iteration
  • stop_criterion
  • seed
  • cooling_schedules
  • temperature_bounds
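
A hedged sketch of how cooling_schedules and temperature_bounds could fit together, one schedule per parallel run (illustrative only; the schedules actually available in the code may differ):

```python
import numpy as np

def exponential_cooling(t_start, t_end, decay, n_iteration):
    # Exponentially decaying temperature, clipped to the given bounds.
    temps = t_start * decay ** np.arange(n_iteration)
    return np.clip(temps, t_end, t_start)

# One (upper, lower) temperature bound per parallel run:
temperature_bounds = [(10.0, 0.1), (5.0, 0.01)]
schedules = [exponential_cooling(hi, lo, decay=0.95, n_iteration=100)
             for hi, lo in temperature_bounds]
```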

SimulatedAnnealing

  • n_parallel_runs
  • noisy_step
  • temp_decay
  • n_iteration
  • stop_criterion
  • seed
  • cooling_schedule
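
One possible direction for the "easier way to create parameters": replace the per-optimizer namedtuples with dataclasses that share the common fields and carry defaults. A sketch with suggested, not existing, names:

```python
from dataclasses import dataclass, asdict

# A base dataclass carries the fields shared by most optimizers;
# subclasses add the optimizer-specific ones.
@dataclass
class OptimizerParameters:
    pop_size: int = 50
    n_iteration: int = 100
    stop_criterion: float = float('inf')
    seed: int = 42

@dataclass
class CrossEntropyParams(OptimizerParameters):
    rho: float = 0.2
    smoothing: float = 0.0
    temp_decay: float = 0.0
    distribution: object = None

# Only the values that differ from the defaults need to be spelled out:
params = CrossEntropyParams(pop_size=200, rho=0.1)
print(asdict(params))
```

This keeps field access identical (params.pop_size) while enforcing a single spelling of each shared name across optimizers.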
