Home
As is well known, deep offline RL algorithms are highly sensitive to hyperparameters and small implementation details; you can sense this by skimming a few papers and comparing the results. Surprisingly, even different DNN libraries are known to produce different results when the code logic is identical [1].
In this situation, it is difficult to guarantee the same performance across different codebases. In other words, there is no such thing as "the performance of CQL" as a single unified value; what exists is the performance of CQL with a particular set of hyperparameters in a particular implementation. With this in mind, we did our best to choose a single reliable existing codebase for each algorithm and to transfer it into a single file with the same hyperparameters.
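To make the single-file philosophy concrete, here is a minimal sketch in plain Python (not the actual jax-corl API; all names and values are illustrative placeholders) of pinning every hyperparameter that affects a reported score in one explicit, inspectable config:

```python
# A minimal sketch of single-file hyperparameter pinning.
# NOTE: CQLConfig and all values below are hypothetical illustrations,
# not the reference codebase's actual settings.
from dataclasses import dataclass


@dataclass(frozen=True)
class CQLConfig:
    actor_lr: float = 3e-4   # illustrative placeholder
    critic_lr: float = 3e-4  # illustrative placeholder
    discount: float = 0.99   # illustrative placeholder
    cql_alpha: float = 5.0   # illustrative placeholder
    batch_size: int = 256    # illustrative placeholder
    seed: int = 0            # illustrative placeholder


config = CQLConfig()
print(config)  # every value that affects the result is visible in one place
```

Because the config is a frozen dataclass living in the same file as the algorithm, a reported score can always be traced back to one concrete, reproducible setting.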
Here, for each algorithm, we report:
- the codebase we referred to (also listed in the README);
- a published paper that uses the codebase for baseline experiments (if one exists);
- the performance reported by that paper (if none exists, an accepted report that used a different codebase).
We could run the referenced codebases ourselves, but that takes time. Furthermore, for those who would like to use jax-corl as a baseline in their own research, results from published papers are a more reliable certification.
| ver | halfcheetah-m | halfcheetah-me | hopper-m | hopper-me | walker2d-m | walker2d-me |
|---|---|---|---|---|---|---|
| - | - | - | - | - | - | - |
| - | - | - | - | - | - | - |
- Codebase: min-decision-transformer
- Paper using the codebase: None
- Results