From 577d3b799f92e1909704efeb6246aed8e9f41f8f Mon Sep 17 00:00:00 2001
From: Shantanu
Date: Wed, 31 Jul 2024 15:35:48 -0500
Subject: [PATCH] Resolving PR review

---
 docs/operation/hugging_face.rst | 60 ++++++++++++------------
 docs/operation/training.rst     | 81 +++++++++++++++++----------------
 2 files changed, 74 insertions(+), 67 deletions(-)

diff --git a/docs/operation/hugging_face.rst b/docs/operation/hugging_face.rst
index 68d5689..1f444ec 100644
--- a/docs/operation/hugging_face.rst
+++ b/docs/operation/hugging_face.rst
@@ -15,6 +15,7 @@ Creating a New Dataset Repository
 
 Web Interface
 ^^^^^^^^^^^^^
+
 #. Navigate to the `Hugging Face website `_.
 #. Log in to your account.
 #. Click on your profile picture in the top-right corner and select "New dataset."
@@ -22,8 +23,9 @@ Web Interface
 
 Command Line Interface (CLI)
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-    #. Ensure you have the `huggingface_hub `_ library installed.
-    #. Use the following Python script to create a new repository:
+
+#. Ensure you have the `huggingface_hub `_ library installed.
+#. Use the following Python script to create a new repository:
 
 .. code-block:: python
 
@@ -52,29 +54,29 @@ Python API
 
 You can use the following Python script to upload your dataset:
 
-    .. code-block:: python
+.. code-block:: python
 
-        from huggingface_hub import HfApi
-        api = HfApi()
+    from huggingface_hub import HfApi
+    api = HfApi()
 
-        api.upload_folder(
-            folder_path="path/to/dataset",
-            repo_id="username/repository_name",
-            repo_type="dataset",
-        )
+    api.upload_folder(
+        folder_path="path/to/dataset",
+        repo_id="username/repository_name",
+        repo_type="dataset",
+    )
 
 **Example**:
 
-    .. code-block:: python
+.. code-block:: python
 
-        from huggingface_hub import HfApi
-        api = HfApi()
+    from huggingface_hub import HfApi
+    api = HfApi()
 
-        api.upload_folder(
-            folder_path="~/aloha_data/aloha_stationary_block_pickup",
-            repo_id="TrossenRoboticsCommunity/aloha_static_datasets",
-            repo_type="dataset",
-        )
+    api.upload_folder(
+        folder_path="~/aloha_data/aloha_stationary_block_pickup",
+        repo_id="TrossenRoboticsCommunity/aloha_static_datasets",
+        repo_type="dataset",
+    )
 
 For more information on uploading datasets, refer to the `Hugging Face Uploading `_.
 
@@ -88,26 +90,26 @@ Cloning the Repository
 
 To clone the repository, use the following command:
 
-    .. code-block:: bash
+.. code-block:: bash
 
-        $ git clone https://huggingface.co/datasets/username/repository_name
+    $ git clone https://huggingface.co/datasets/username/repository_name
 
 Using the Hugging Face CLI
 ^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 You can also use the Hugging Face CLI to download datasets with the following Python script:
 
-    .. code-block:: python
+.. code-block:: python
 
-        from huggingface_hub import snapshot_download
+    from huggingface_hub import snapshot_download
 
-        # Download the dataset
-        snapshot_download(
-            repo_id="username/repository_name",
-            repo_type="dataset",
-            local_dir="path/to/local/directory",
-            allow_patterns="*.hdf5"
-        )
+    # Download the dataset
+    snapshot_download(
+        repo_id="username/repository_name",
+        repo_type="dataset",
+        local_dir="path/to/local/directory",
+        allow_patterns="*.hdf5"
+    )
 
 .. note::
 
diff --git a/docs/operation/training.rst b/docs/operation/training.rst
index 002a631..ae28279 100644
--- a/docs/operation/training.rst
+++ b/docs/operation/training.rst
@@ -7,7 +7,8 @@ Training and Evaluation
 Virtual Environment Setup
 =========================
 
-Effective containerization is important when it comes to running machine learning models as there can be conflicting dependencies. You can either use a Virtual Environment or Conda.
+Effective containerization is important when it comes to running machine learning models as there can be conflicting dependencies.
+You can either use a Virtual Environment or Conda.
 
 Virtual Environment Installation and Setup
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -52,20 +53,20 @@ Install the necessary dependencies inside your containerized environment:
 
 .. code-block:: bash
 
-        $ pip install dm_control==1.0.14
-        $ pip install einops
-        $ pip install h5py
-        $ pip install ipython
-        $ pip install matplotlib
-        $ pip install mujoco==2.3.7
-        $ pip install opencv-python
-        $ pip install packaging
-        $ pip install pexpect
-        $ pip install pyquaternion
-        $ pip install pyyaml
-        $ pip install rospkg
-        $ pip install torch
-        $ pip install torchvision
+    $ pip install dm_control==1.0.14
+    $ pip install einops
+    $ pip install h5py
+    $ pip install ipython
+    $ pip install matplotlib
+    $ pip install mujoco==2.3.7
+    $ pip install opencv-python
+    $ pip install packaging
+    $ pip install pexpect
+    $ pip install pyquaternion
+    $ pip install pyyaml
+    $ pip install rospkg
+    $ pip install torch
+    $ pip install torchvision
 
 Clone Repository
 ================
@@ -74,6 +75,7 @@ Clone ACT if using Aloha Stationary
 
 .. code-block:: bash
 
+    $ cd ~
     $ git clone https://github.com/Interbotix/act.git act_training_evaluation
 
 
@@ -81,9 +83,9 @@ Clone ACT++ if using Aloha Mobile
 
 .. code-block:: bash
 
+    $ cd ~
     $ git clone https://github.com/Interbotix/act_plus_plus.git act_training_evaluation
-
 
 Build and Install ACT Models
 ============================
 
@@ -155,7 +157,7 @@ To start the training, follow the steps below:
         --lr 1e-5 \
         --seed 0
 
-.. tip::
+.. note::
 
     - ``task_name`` argument should match one of the task names in the ``TASK_CONFIGS``, as configured in the :ref:`operation/data_collection:Task Creation` section.
     - ``ckpt_dir``: The relative location where the checkpoints and best policy will be stored.
@@ -166,27 +168,30 @@ To start the training, follow the steps below:
     - ``num_epochs``: Too many epochs lead to overfitting; too few epochs may not allow the model to learn.
     - ``lr``: Higher learning rate can lead to faster convergence but may overshoot the optima, while lower learning rate might lead to slower but stable optimization.
 
-We recommend the following parameters:
-
-.. list-table::
-    :align: center
-    :widths: 25 75
-    :header-rows: 1
-
-    * - Parameter
-      - Value
-    * - Policy Class
-      - ACT
-    * - KL Weight
-      - 10
-    * - Chunk Size
-      - 100
-    * - Batch Size
-      - 2
-    * - Num of Epochs
-      - 3000
-    * - Learning Rate
-      - 1e-5
+
+.. tip::
+
+    We recommend the following parameters:
+
+    .. list-table::
+        :align: center
+        :widths: 25 75
+        :header-rows: 1
+
+        * - Parameter
+          - Value
+        * - Policy Class
+          - ACT
+        * - KL Weight
+          - 10
+        * - Chunk Size
+          - 100
+        * - Batch Size
+          - 2
+        * - Num of Epochs
+          - 3000
+        * - Learning Rate
+          - 1e-5
 
 Evaluation
 ==========