PR Review Resolution

shantanuparab-tr committed Jul 31, 2024
1 parent ed13fd6 commit 4b22dae
Showing 2 changed files with 121 additions and 115 deletions.
102 changes: 55 additions & 47 deletions docs/operation/hugging_face.rst
@@ -1,6 +1,6 @@
-========================
+==================
 Hugging Face Guide
-========================
+==================
 
 Uploading and Downloading Datasets on Hugging Face
 ==================================================
@@ -13,22 +13,24 @@ If you don't already have an account, sign up for a new account on the `Hugging
 Creating a New Dataset Repository
 ---------------------------------
 
-1. **Web Interface**:
-   - Navigate to the `Hugging Face website <https://huggingface.co>`_.
-   - Log in to your account.
-   - Click on your profile picture in the top-right corner and select "New dataset."
-   - Follow the on-screen instructions to create a new dataset repository.
+Web Interface
+^^^^^^^^^^^^^
+
+#. Navigate to the `Hugging Face website <https://huggingface.co>`_.
+#. Log in to your account.
+#. Click on your profile picture in the top-right corner and select "New dataset."
+#. Follow the on-screen instructions to create a new dataset repository.
 
-2. **Command Line Interface (CLI)**:
-   - Ensure you have the ``huggingface_hub`` library installed.
-   - Use the following Python script to create a new repository:
+Command Line Interface (CLI)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+#. Ensure you have the `huggingface_hub <https://huggingface.co/docs/huggingface_hub/index>`_ library installed.
+#. Use the following Python script to create a new repository:
 
    .. code-block:: python
 
       from huggingface_hub import HfApi
      api = HfApi()
 
      api.create_repo(repo_id="username/repository_name", repo_type="dataset")
 
 For more information on creating repositories, refer to the `Hugging Face Repositories <https://huggingface.co/docs/hub/repositories>`_.
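An aside on the snippet above: ``create_repo`` (and the upload calls later in this file) only work from an authenticated session, which the guide does not show. A minimal sketch of logging in first, assuming a current ``huggingface_hub`` installation; the token itself comes from your account settings on the Hugging Face website:

.. code-block:: bash

   # One-time login; paste a token with write access when prompted.
   $ huggingface-cli login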

@@ -37,40 +39,42 @@ Uploading Your Dataset
 
 You have two primary methods to upload datasets: through the web interface or using the Python API.
 
-1. **Web Interface**
+Web Interface
+^^^^^^^^^^^^^
 
-   i. Navigate to your dataset repository on the Hugging Face website.
-   ii. Click on the "Files and versions" tab.
-   iii. Drag and drop your dataset files into the files section.
-   iv. Click "Commit changes" to save the files in the repository.
+#. Navigate to your dataset repository on the Hugging Face website.
+#. Click on the "Files and versions" tab.
+#. Drag and drop your dataset files into the files section.
+#. Click "Commit changes" to save the files in the repository.
 
-2. **Python API**
+Python API
+^^^^^^^^^^
 
 You can use the following Python script to upload your dataset:
 
 .. code-block:: python
 
    from huggingface_hub import HfApi
    api = HfApi()
 
    api.upload_folder(
        folder_path="path/to/dataset",
        repo_id="username/repository_name",
        repo_type="dataset",
    )
 
 **Example**:
 
 .. code-block:: python
 
    from huggingface_hub import HfApi
    api = HfApi()
 
    api.upload_folder(
        folder_path="~/aloha_data/aloha_stationary_block_pickup",
        repo_id="TrossenRoboticsCommunity/aloha_static_datasets",
        repo_type="dataset",
    )
 
 For more information on uploading datasets, refer to the `Hugging Face Uploading <https://huggingface.co/docs/hub/upload>`_.
 
@@ -79,27 +83,31 @@ Downloading Datasets
 
 You can download datasets either by cloning the repository or using the Hugging Face CLI.
 
-1. **Cloning the Repository**
+Cloning the Repository
+^^^^^^^^^^^^^^^^^^^^^^
 
 To clone the repository, use the following command:
 
 .. code-block:: bash
 
-   git clone https://huggingface.co/datasets/username/repository_name
+   $ git clone https://huggingface.co/datasets/username/repository_name
 
-2. **Using the Hugging Face CLI**
+Using the Hugging Face CLI
+^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 You can also use the ``huggingface_hub`` library to download datasets with the following Python script:
 
 .. code-block:: python
 
    from huggingface_hub import snapshot_download
 
-   # Download the dataset
-   snapshot_download(repo_id="username/repository_name",
-                     repo_type="dataset",
-                     local_dir="path/to/local/directory",
-                     allow_patterns="*.hdf5")
+   # Download the dataset
+   snapshot_download(
+       repo_id="username/repository_name",
+       repo_type="dataset",
+       local_dir="path/to/local/directory",
+       allow_patterns="*.hdf5"
+   )
 
 .. note::
 
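A side note on the "Using the Hugging Face CLI" heading above: the snippet under it calls the ``huggingface_hub`` Python API rather than the CLI itself. A command-line equivalent is sketched below, assuming a ``huggingface_hub`` release recent enough to ship the ``huggingface-cli download`` subcommand; the repository name and paths are placeholders:

.. code-block:: bash

   # Download only the .hdf5 files from the dataset repo into a local directory.
   $ huggingface-cli download username/repository_name \
        --repo-type dataset \
        --local-dir path/to/local/directory \
        --include "*.hdf5"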
134 changes: 66 additions & 68 deletions docs/operation/training.rst
@@ -1,6 +1,6 @@
-========================
+=======================
 Training and Evaluation
-========================
+=======================
 
 
 
@@ -10,82 +10,82 @@ Virtual Environment Setup
 Isolating your Python environment is important when running machine learning models, as dependencies can conflict. You can use either a virtual environment or Conda.
 
 Virtual Environment Installation and Setup
-----------------------------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 #. Install the virtual environment package:
 
    .. code-block:: bash
 
-      sudo apt-get install python3-venv
+      $ sudo apt-get install python3-venv
 
 #. Create a virtual environment:
 
    .. code-block:: bash
 
-      python3 -m venv ~/act # Creates a venv "act" in the home directory; it can be created anywhere
+      $ python3 -m venv ~/act # Creates a venv "act" in the home directory; it can be created anywhere
 
 #. Activate the virtual environment:
 
    .. code-block:: bash
 
-      source ~/act/bin/activate
+      $ source ~/act/bin/activate
 
 Conda Setup
-----------------------------------------------
+^^^^^^^^^^^
 
 #. Create a virtual environment:
 
    .. code-block:: bash
 
-      conda create -n aloha python=3.8.10
+      $ conda create -n aloha python=3.8.10
 
 #. Activate the virtual environment:
 
    .. code-block:: bash
 
-      conda activate aloha
+      $ conda activate aloha
 
 Install Dependencies
-===============================================
+^^^^^^^^^^^^^^^^^^^^
 
 Install the necessary dependencies inside your activated environment:
 
 .. code-block:: bash
 
-   pip install dm_control==1.0.14
-   pip install einops
-   pip install h5py
-   pip install ipython
-   pip install matplotlib
-   pip install mujoco==2.3.7
-   pip install opencv-python
-   pip install packaging
-   pip install pexpect
-   pip install pyquaternion
-   pip install pyyaml
-   pip install rospkg
-   pip install torch
-   pip install torchvision
+   $ pip install dm_control==1.0.14
+   $ pip install einops
+   $ pip install h5py
+   $ pip install ipython
+   $ pip install matplotlib
+   $ pip install mujoco==2.3.7
+   $ pip install opencv-python
+   $ pip install packaging
+   $ pip install pexpect
+   $ pip install pyquaternion
+   $ pip install pyyaml
+   $ pip install rospkg
+   $ pip install torch
+   $ pip install torchvision
 
 Clone Repository
-=========================
+================
 
 Clone ACT if using Aloha Stationary:
 
 .. code-block:: bash
 
-   git clone https://github.com/shantanuparab-tr/act.git act_training_evaluation
+   $ git clone https://github.com/Interbotix/act.git act_training_evaluation
 
 Clone ACT++ if using Aloha Mobile:
 
 .. code-block:: bash
 
-   git clone https://github.com/shantanuparab-tr/act_plus_plus.git act_training_evaluation
+   $ git clone https://github.com/Interbotix/act_plus_plus.git act_training_evaluation
 
 Build and Install ACT Models
-===================================
+============================
 
 .. code-block:: bash
    :emphasize-lines: 4
@@ -113,49 +113,49 @@ Navigate to the ``detr`` directory inside the repository and install the detr mo
 
 .. code-block:: bash
 
-   cd /path/to/act/detr && pip install -e .
+   $ cd /path/to/act/detr && pip install -e .
 
 Training
-=============
+========
 
 To start the training, follow the steps below:
 
-1. **Sanity Check**:
+#. Sanity Check:
 
    Ensure you have all the hdf5 episodes located in the correct folder after following the data collection steps in :ref:`operation/data_collection:Task Creation`.
 
-2. **Source ROS Environment**:
+#. Source ROS Environment:
 
    .. code-block:: bash
 
-      source /opt/ros/humble/setup.bash
-      source ~/interbotix_ws/install/setup.bash
+      $ source /opt/ros/humble/setup.bash
+      $ source ~/interbotix_ws/install/setup.bash
 
-3. **Activate Virtual Environment**:
+#. Activate Virtual Environment:
 
    .. code-block:: bash
 
-      source ~/act/bin/activate
+      $ source ~/act/bin/activate
 
-4. **Start Training**:
+#. Start Training:
 
    .. code-block:: bash
 
-      cd repo/act/
-      python3 imitate_episodes.py \
-          --task_name aloha_stationary_dummy \
-          --ckpt_dir <ckpt dir> \
-          --policy_class ACT \
-          --kl_weight 10 \
-          --chunk_size 100 \
-          --hidden_dim 512 \
-          --batch_size 8 \
-          --dim_feedforward 3200 \
-          --num_epochs 2000 \
-          --lr 1e-5 \
-          --seed 0
+      $ cd /path/to/act/repository/
+      $ python3 imitate_episodes.py \
+          --task_name aloha_stationary_dummy \
+          --ckpt_dir <ckpt dir> \
+          --policy_class ACT \
+          --kl_weight 10 \
+          --chunk_size 100 \
+          --hidden_dim 512 \
+          --batch_size 8 \
+          --dim_feedforward 3200 \
+          --num_epochs 2000 \
+          --lr 1e-5 \
+          --seed 0
 
-.. note::
+.. tip::
 
    - The ``task_name`` argument should match one of the task names in the ``TASK_CONFIGS``, as configured in the :ref:`operation/data_collection:Task Creation` section.
    - ``ckpt_dir``: The relative location where the checkpoints and best policy will be stored.
@@ -189,27 +189,25 @@ We recommend the following parameters:
    - 1e-5
 
 Evaluation
-=====================
+==========
 
 To evaluate a trained model, follow the steps below:
 
-1. **Bring up the ALOHA control stack** according to your platform:
+#. Bring up the ALOHA control stack according to your platform:
 
    - Stationary: :ref:`operation/stationary:Running ALOHA Bringup`
    - Mobile: :ref:`operation/mobile:Running ALOHA Bringup`
 
-2. **Configure the environment**:
+#. Configure the environment:
 
    .. code-block:: bash
 
-      source /opt/ros/humble/setup.bash # Configure ROS system install environment
-      source ~/interbotix_ws/install/setup.bash # Configure ROS workspace environment
-      source /<path_to_aloha_venv>/bin/activate # Configure ALOHA Python environment
-      cd ~/<act_repository>/act/
+      $ source /opt/ros/humble/setup.bash # Configure ROS system install environment
+      $ source ~/interbotix_ws/install/setup.bash # Configure ROS workspace environment
+      $ source /<path_to_aloha_venv>/bin/activate # Configure ALOHA Python environment
+      $ cd ~/<act_repository>/act/
 
-3. **Run the evaluation script**
+#. Run the evaluation script:
 
    .. code-block:: bash
       :emphasize-lines: 13-14
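One last aside on the training steps above: the "Sanity Check" can be made concrete with a quick shell check before launching ``imitate_episodes.py``. This is a minimal sketch; the dataset path and task name are assumptions modeled on the upload example in the Hugging Face guide:

.. code-block:: bash

   # Hypothetical pre-training check: count the recorded episodes for the task.
   $ ls ~/aloha_data/aloha_stationary_dummy/*.hdf5 | wc -l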
