Commit 577d3b7: Resolving PR review

shantanuparab-tr committed Jul 31, 2024
1 parent 51bf62d
Showing 2 changed files with 74 additions and 67 deletions.

docs/operation/hugging_face.rst (31 additions, 29 deletions)

Creating a New Dataset Repository
=================================

Web Interface
^^^^^^^^^^^^^

#. Navigate to the `Hugging Face website <https://huggingface.co>`_.
#. Log in to your account.
#. Click on your profile picture in the top-right corner and select "New dataset."
#. Follow the on-screen instructions to create a new dataset repository.

Command Line Interface (CLI)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

#. Ensure you have the `huggingface_hub <https://huggingface.co/docs/huggingface_hub/index>`_ library installed.
#. Use the following Python script to create a new repository:

.. code-block:: python
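
    # Minimal sketch of repository creation with the huggingface_hub API.
    # "username/repository_name" is a placeholder; replace it with your own
    # namespace and repository name.
    from huggingface_hub import HfApi

    api = HfApi()
    api.create_repo(
        repo_id="username/repository_name",
        repo_type="dataset",
        private=False,
    )
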
Python API
^^^^^^^^^^

You can use the following Python script to upload your dataset:

.. code-block:: python

    from huggingface_hub import HfApi

    api = HfApi()
    api.upload_folder(
        folder_path="path/to/dataset",
        repo_id="username/repository_name",
        repo_type="dataset",
    )

**Example**:

.. code-block:: python

    from huggingface_hub import HfApi

    api = HfApi()
    api.upload_folder(
        folder_path="~/aloha_data/aloha_stationary_block_pickup",
        repo_id="TrossenRoboticsCommunity/aloha_static_datasets",
        repo_type="dataset",
    )

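If you only need to add or update a single file, ``upload_file`` avoids re-uploading the whole folder. The following is a minimal sketch; the local path, file name, and repository name are placeholders.

.. code-block:: python

    from huggingface_hub import HfApi

    api = HfApi()
    # Upload one episode file; all paths and names below are placeholders.
    api.upload_file(
        path_or_fileobj="path/to/dataset/episode_0.hdf5",
        path_in_repo="episode_0.hdf5",
        repo_id="username/repository_name",
        repo_type="dataset",
    )
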
For more information on uploading datasets, refer to the `Hugging Face uploading guide <https://huggingface.co/docs/hub/upload>`_.

Cloning the Repository
^^^^^^^^^^^^^^^^^^^^^^

To clone the repository, use the following command:

.. code-block:: bash

    $ git clone https://huggingface.co/datasets/username/repository_name

Using the Hugging Face CLI
^^^^^^^^^^^^^^^^^^^^^^^^^^

You can also download datasets from the Hugging Face Hub with the following Python script:

.. code-block:: python

    from huggingface_hub import snapshot_download

    # Download the dataset
    snapshot_download(
        repo_id="username/repository_name",
        repo_type="dataset",
        local_dir="path/to/local/directory",
        allow_patterns="*.hdf5"
    )

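If you only need a specific file rather than the full dataset, ``hf_hub_download`` fetches a single file. The following is a minimal sketch; the repository name, file name, and local directory are placeholders.

.. code-block:: python

    from huggingface_hub import hf_hub_download

    # Download a single episode file; all names below are placeholders.
    local_path = hf_hub_download(
        repo_id="username/repository_name",
        repo_type="dataset",
        filename="episode_0.hdf5",
        local_dir="path/to/local/directory",
    )
    print(local_path)
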
.. note::

docs/operation/training.rst: Training and Evaluation (43 additions, 38 deletions)

Virtual Environment Setup
=========================

Isolating your environment is important when running machine learning models, as different projects can have conflicting dependencies.
You can use either a virtual environment or Conda.

Virtual Environment Installation and Setup
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Install the necessary dependencies inside your environment:

.. code-block:: bash

    $ pip install dm_control==1.0.14
    $ pip install einops
    $ pip install h5py
    $ pip install ipython
    $ pip install matplotlib
    $ pip install mujoco==2.3.7
    $ pip install opencv-python
    $ pip install packaging
    $ pip install pexpect
    $ pip install pyquaternion
    $ pip install pyyaml
    $ pip install rospkg
    $ pip install torch
    $ pip install torchvision

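To confirm the setup, you can optionally run a quick import check. This is a minimal sketch, not part of the official setup; it only verifies that the key packages above import cleanly and reports a few versions.

.. code-block:: python

    # Optional sanity check: verify the core training dependencies import cleanly.
    import cv2  # installed via the opencv-python package
    import dm_control
    import einops
    import h5py
    import mujoco
    import torch
    import torchvision

    print("torch:", torch.__version__)
    print("mujoco:", mujoco.__version__)
    print("h5py:", h5py.__version__)
    print("CUDA available:", torch.cuda.is_available())
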
Clone Repository
================

Clone ACT if using Aloha Stationary

.. code-block:: bash

    $ cd ~
    $ git clone https://github.com/Interbotix/act.git act_training_evaluation

Clone ACT++ if using Aloha Mobile

.. code-block:: bash

    $ cd ~
    $ git clone https://github.com/Interbotix/act_plus_plus.git act_training_evaluation

Build and Install ACT Models
============================

To start the training, follow the steps below:

.. code-block:: bash

    --lr 1e-5 \
    --seed 0

.. note::

    - ``task_name``: Should match one of the task names in ``TASK_CONFIGS``, as configured in the :ref:`operation/data_collection:Task Creation` section.
    - ``ckpt_dir``: The relative location where the checkpoints and best policy will be stored.
    - ``num_epochs``: Too many epochs lead to overfitting; too few may not allow the model to learn enough.
    - ``lr``: A higher learning rate can converge faster but may overshoot the optimum; a lower learning rate is slower but more stable.

.. tip::

    We recommend the following parameters:

    .. list-table::
        :align: center
        :widths: 25 75
        :header-rows: 1

        * - Parameter
          - Value
        * - Policy Class
          - ACT
        * - KL Weight
          - 10
        * - Chunk Size
          - 100
        * - Batch Size
          - 2
        * - Num of Epochs
          - 3000
        * - Learning Rate
          - 1e-5
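
Before moving on to evaluation, you can optionally confirm that training produced a best-policy checkpoint. This is a minimal sketch; the checkpoint file name (``policy_best.ckpt``) is assumed from the ACT repository's conventions, and ``ckpt_dir`` is a hypothetical path.

.. code-block:: python

    from pathlib import Path

    import torch

    # Hypothetical checkpoint directory; use the ckpt_dir you passed to training.
    ckpt_dir = Path("act_training_evaluation/checkpoints")
    best_ckpt = ckpt_dir / "policy_best.ckpt"

    if best_ckpt.exists():
        # Load on CPU just to confirm the file is readable.
        checkpoint = torch.load(best_ckpt, map_location="cpu")
        print(f"Loaded {best_ckpt.name} ({type(checkpoint).__name__})")
    else:
        print(f"No best-policy checkpoint found in {ckpt_dir}")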

Evaluation
==========
