* initial refactor of docs
* add placeholder for reproducing experiments tutorial page
* updated draft of introduction/overview.md
* updated getting started page
* getting_started.md: refactor #2
* small change
* small change
* html boxes for core features
* changing some wording
* small changes
* adding datasets ipython notebook
* testing new warning dropdown
* update style
* run policy tutorial notebook
* importing roboturk system features content; wip
* added some warning blocks and reference in docs for ipython dataset notebook
* update intro WIP
* add backwards compatibility functionality for loading configs from checkpoints
* finish update of installation doc
* better feature box
* run policy notebook works now
* shortened overview
* update the datasets pages
* update theme and table
* added core features diagram
* update core features
* updated getting started page
* fixed warning box in getting_started
* fix folder for core_features.png img
* update dataset page
* update year in sphinx conf
* add train-val split info
* notebook and updated doc for pretrained model tutorial
* update core_features image; update links to next steps in getting_started page
* updated modules overview
* fix path to image in modules/overview.md
* add link to creating custom env wrapper from dataset overview page
* minor updates
* fixed several broken links, and style changes
* Add missing dependency in requirements-doc.txt
* minor reordering in installation docs page
* fix datasets doc
* fix datasets doc
* Fix dataset overview table not centered issue
* Remove redundant period
* make hdf5 structure dropdown more prominent, remove unused tutorial
* restructure a few tutorials with placeholders
* fix numbering on installation page, trim down verbosity on robosuite dataset page, move info around
* (docs) remove first sentence from contributing.md
* (docs) update next links in getting_started.md
* (docs) add note on --debug flag in getting_started.md
* (docs) added implemented_algorithms.md page in intro section
* (docs) initial work on configuring logging in viewing_results.md
* (docs) added details on viewing training results in viewing_results.md
* replace configs tutorial with ways to launch runs
* update next steps and rename dataset tutorial
* cleanup dataset contents tutorial and add reproduce experiments tutorial content
* slight refactor of algo tutorial
* (docs) refined algorithms.md and viewing_results.md
* (docs) hyperparam_scan.md less verbose (wip)
* fix missing links, modify warning
* fix missing links, modify warning
* reorder tutorials
* (docs) updated logging and viewing results page - json comments, viewing tensorboard results
* (docs) updated hyperparam_scan page
* (docs) fix dataset sturcture
* condense code blocks and make obs modalities into bullet points
* Warning blocks in config.md
* bump version, add tutorial links
* try to fix readme gifs
* try one more time
* put core features image back in readme
* fix readme links
* remove some prints

Co-authored-by: snasiriany <[email protected]>
Co-authored-by: josiah_wong <[email protected]>
Co-authored-by: Danfei Xu <[email protected]>
Co-authored-by: j96w <[email protected]>
Co-authored-by: Yifeng Zhu <[email protected]>
1 parent 4a39b1d · commit 5f328b9
Showing 42 changed files with 3,516 additions and 1,124 deletions.
@@ -0,0 +1,55 @@
# D4RL

## Overview
The [D4RL](https://arxiv.org/abs/2004.07219) benchmark provides a set of locomotion tasks and demonstration datasets.

## Downloading

Use `convert_d4rl.py` in the `scripts/conversion` folder to automatically download and postprocess a D4RL dataset in a single step. For example:
```sh
# by default, download to robomimic/datasets
$ python convert_d4rl.py --env walker2d-medium-expert-v0

# download to a specific folder
$ python convert_d4rl.py --env walker2d-medium-expert-v0 --folder /path/to/output/folder/
```

- `--env` specifies the dataset to download
- `--folder` specifies where to download the dataset. If no folder is provided, the `datasets` folder at the top level of the repository is used.

The script downloads the raw hdf5 dataset to `--folder` and writes the converted version, which is compatible with this repository, to the `converted` subfolder.
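
If you want a quick sanity check of the converted file, a minimal `h5py` sketch is below (this is not part of the conversion script). The path is a placeholder, since the exact filename depends on the `--env` you chose.

```python
# Minimal sketch: inspect a converted D4RL hdf5 produced by convert_d4rl.py.
# The path below is a placeholder -- point it at the file written to <folder>/converted/.
import h5py

path = "datasets/converted/walker2d-medium-expert-v0.hdf5"  # hypothetical filename

with h5py.File(path, "r") as f:
    # robomimic-style datasets group trajectories under "data"
    demos = list(f["data"].keys())
    print(f"found {len(demos)} trajectories")
    first = f["data"][demos[0]]
    print("keys in first trajectory:", list(first.keys()))
    print("actions shape:", first["actions"].shape)
```
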
## Postprocessing

No postprocessing is required, assuming the above script is run!
## D4RL Results

Below, we provide a table of results on common D4RL datasets using the algorithms included in the released codebase. We follow the convention in the TD3-BC paper and average results over the final 10 rollout evaluations, but we use 50 rollouts per evaluation instead of 10. Apart from a small handful of the halfcheetah results, the results align with those presented in the [TD3_BC paper](https://arxiv.org/abs/2106.06860). We suspect the halfcheetah results differ because our evaluations used `mujoco-py` version `2.0.2.13` rather than `1.5`, in order to stay consistent with the version we used for the robosuite datasets. The results below were generated with `gym` version `0.17.3` and this `d4rl` [commit](https://github.com/rail-berkeley/d4rl/tree/9b68f31bab6a8546edfb28ff0bd9d5916c62fd1f).
| **Dataset**                   | **BCQ**       | **CQL**       | **TD3-BC**    |
| ----------------------------- | ------------- | ------------- | ------------- |
| **HalfCheetah-Medium**        | 40.8% (4791)  | 38.5% (4497)  | 41.7% (4902)  |
| **Hopper-Medium**             | 36.9% (1181)  | 30.7% (980)   | 97.9% (3167)  |
| **Walker2d-Medium**           | 66.4% (3050)  | 65.2% (2996)  | 77.0% (3537)  |
| **HalfCheetah-Medium-Expert** | 74.9% (9016)  | 21.5% (2389)  | 79.4% (9578)  |
| **Hopper-Medium-Expert**      | 83.8% (2708)  | 111.7% (3614) | 112.2% (3631) |
| **Walker2d-Medium-Expert**    | 70.2% (3224)  | 77.4% (3554)  | 102.0% (4683) |
| **HalfCheetah-Expert**        | 94.3% (11427) | 29.2% (3342)  | 95.4% (11569) |
| **Hopper-Expert**             | 104.7% (3389) | 111.8% (3619) | 112.2% (3633) |
| **Walker2d-Expert**           | 80.5% (3699)  | 108.0% (4958) | 105.3% (4837) |
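
The percentages in the table are normalized scores in the style of D4RL, with the corresponding average raw return in parentheses. As a rough illustration (not code from this repository), the sketch below shows how such a percentage is computed from raw returns; the reference returns used here are placeholders, not the official D4RL values.

```python
# Hedged sketch of D4RL-style score normalization.
# The reference returns are placeholders -- the official per-task values ship with d4rl.

def normalized_score(avg_return, random_return, expert_return):
    """Map a raw average return to a percentage of expert performance."""
    return 100.0 * (avg_return - random_return) / (expert_return - random_return)

# Hypothetical usage: average the returns from the final evaluations, then normalize.
final_eval_returns = [3050.0, 3120.0, 2990.0]  # placeholder rollout returns
avg = sum(final_eval_returns) / len(final_eval_returns)
print(f"normalized score: {normalized_score(avg, random_return=0.0, expert_return=4000.0):.1f}%")
```
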

### Reproducing D4RL Results
To reproduce the results above, first make sure that the `generate_paper_configs.py` script has been run, with the `--dataset_dir` argument pointing to the folder where the D4RL datasets were downloaded by the `convert_d4rl.py` script. This is also the first step for reproducing results on the released robot manipulation datasets. The `--config_dir` directory used by the script (`robomimic/exps/paper` by default) will contain a `d4rl.sh` script and a `d4rl` subdirectory with all the json configs. The results in the table above can be reproduced simply by running the training commands in the shell script.
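
For illustration, here is a hedged sketch of that workflow in Python. It assumes the default `robomimic/exps/paper` config directory and launches the generated json configs directly with `train.py` rather than through `d4rl.sh`; paths are placeholders, and the scripts may accept additional arguments (check their `--help`).

```python
# Hedged sketch: regenerate the paper configs, then launch the D4RL training runs.
# Paths are placeholders -- adjust them to your checkout and dataset locations.
import glob
import subprocess

dataset_dir = "datasets"             # placeholder: folder used with convert_d4rl.py
config_dir = "robomimic/exps/paper"  # default --config_dir

# Step 1: generate the paper configs (this also produces the d4rl.sh helper script).
subprocess.run(
    ["python", "robomimic/scripts/generate_paper_configs.py",
     "--config_dir", config_dir,
     "--dataset_dir", dataset_dir],
    check=True,
)

# Step 2: run each D4RL json config (equivalent to the commands listed in d4rl.sh).
for config_path in sorted(glob.glob(f"{config_dir}/d4rl/**/*.json", recursive=True)):
    subprocess.run(["python", "robomimic/scripts/train.py", "--config", config_path], check=True)
```
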
## Citation
```bibtex
@article{fu2020d4rl,
  title={D4rl: Datasets for deep data-driven reinforcement learning},
  author={Fu, Justin and Kumar, Aviral and Nachum, Ofir and Tucker, George and Levine, Sergey},
  journal={arXiv preprint arXiv:2004.07219},
  year={2020}
}
```
@@ -0,0 +1,57 @@
# MOMART Datasets and Experiments

## Overview
The [Mobile Manipulation RoboTurk (MOMART)](https://sites.google.com/view/il-for-mm/home) datasets are a collection of demonstrations collected on 5 long-horizon mobile manipulation tasks in a realistic simulated kitchen.
<p align="center">
  <img width="19.0%" src="../images/momart_table_setup_from_dishwasher_overview.png">
  <img width="19.0%" src="../images/momart_table_setup_from_dresser_overview.png">
  <img width="19.0%" src="../images/momart_table_cleanup_to_dishwasher_overview.png">
  <img width="19.0%" src="../images/momart_table_cleanup_to_sink_overview.png">
  <img width="19.0%" src="../images/momart_unload_dishwasher_to_dresser_overview.png">
  <img width="19.0%" src="../images/momart_bowl_in_sink.png">
  <img width="19.0%" src="../images/momart_dump_trash.png">
  <img width="19.0%" src="../images/momart_grab_bowl.png">
  <img width="19.0%" src="../images/momart_open_dishwasher.png">
  <img width="19.0%" src="../images/momart_open_dresser.png">
</p>
## Downloading

<div class="admonition warning">
<p class="admonition-title">Warning!</p>

When working with these datasets, please make sure that you have installed [iGibson](http://svl.stanford.edu/igibson/) from source and are on the `momart` branch. Exact installation steps can be found [HERE](https://sites.google.com/view/il-for-mm/datasets#h.qw0vufk0hknk).

</div>
We provide two ways of downloading the MOMART datasets:

### Method 1: Using `download_momart_datasets.py` (Recommended)
`download_momart_datasets.py` is a python script that provides a programmatic way of downloading all of the datasets. This is the preferred method, because the script also sets up a directory structure for the datasets that works out of the box with the examples for reproducing the [MOMART paper's](https://arxiv.org/abs/2112.05251) results.
```sh
# Use the --help flag to view the available download options (<ARGS>)
python <ROBOMIMIC_DIR>/robomimic/scripts/download_momart_datasets.py <ARGS>
```

### Method 2: Using Direct Download Links

For each type of dataset, we also provide direct download links for the raw HDF5 files [HERE](https://sites.google.com/view/il-for-mm/datasets#h.ko0ilbky4y5u).
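
As a minimal sketch (assuming you have copied a real link from the page above), the snippet below downloads one file and checks that it opens; the URL and filename are placeholders.

```python
# Hedged sketch: fetch one MOMART hdf5 from a direct link and sanity-check that it opens.
# The URL is a placeholder -- copy the real link for the task you want from the page above.
import urllib.request

import h5py

url = "https://<direct-download-link>/momart_table_setup_from_dishwasher.hdf5"  # placeholder
out_path = "momart_table_setup_from_dishwasher.hdf5"

urllib.request.urlretrieve(url, out_path)

with h5py.File(out_path, "r") as f:
    print("top-level keys:", list(f.keys()))
```
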
## Postprocessing

No postprocessing is needed for these datasets!
## Citation
```bibtex
@inproceedings{wong2022error,
  title={Error-Aware Imitation Learning from Teleoperation Data for Mobile Manipulation},
  author={Wong, Josiah and Tung, Albert and Kurenkov, Andrey and Mandlekar, Ajay and Fei-Fei, Li and Savarese, Silvio and Mart{\'\i}n-Mart{\'\i}n, Roberto},
  booktitle={Conference on Robot Learning},
  pages={1367--1378},
  year={2022},
  organization={PMLR}
}
```