LeRobot X Aloha User Guide

Virtual Environment Setup

Isolating your Python dependencies is important when running machine learning models, as it prevents dependency conflicts. You can use either a virtual environment (venv) or Conda for this purpose.

Using Virtual Environment (venv)

  1. Install the virtual environment package:

    $ sudo apt-get install python3-venv

  2. Create a virtual environment:

    $ python3 -m venv ~/lerobot  # Creates a venv "lerobot" in the home directory

  3. Activate the virtual environment:

    $ source ~/lerobot/bin/activate

Using Conda

  1. Create a virtual environment:

    $ conda create -n lerobot python=3.10

  2. Activate the virtual environment:

    $ conda activate lerobot

Note

Use either venv or Conda based on your preference, but do not mix them to avoid dependency issues.
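
Before installing anything, you can confirm which environment (if any) is active by checking the marker variables each tool sets (VIRTUAL_ENV for venv, CONDA_DEFAULT_ENV for Conda). This is a general shell sketch, not part of the LeRobot tooling:

```shell
# Show the Python interpreter currently on PATH
command -v python3

# venv sets VIRTUAL_ENV; Conda sets CONDA_DEFAULT_ENV.
# If both print a real value, the two environments are mixed.
echo "venv:  ${VIRTUAL_ENV:-none}"
echo "conda: ${CONDA_DEFAULT_ENV:-none}"
```

If the wrong environment shows up, run deactivate (venv) or conda deactivate before activating the one you intend to use.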

Clone Repository

For users of Aloha Stationary:

  1. Clone the LeRobot repository:

    $ cd ~
    $ git clone https://github.com/huggingface/lerobot.git

Build and Install LeRobot Models

  1. Build and install the LeRobot models from source:

    $ cd lerobot && pip install -e .

Teleoperation

To teleoperate your robot, follow these steps:

  1. Find the serial numbers of your robot’s arms and cameras as described in the hardware setup documentation.

  2. Update the serial numbers in the configuration file: lerobot/configs/robot/aloha.yaml

  3. Run the teleoperation script:

    $ python lerobot/scripts/control_robot.py teleoperate \
        --robot-path lerobot/configs/robot/aloha.yaml

    You will see logs that include information such as delta time (dt), frequency, and read/write times for the robot arms.

  4. You can control the teleoperation frequency using the --fps argument. For example, to set it to 30 FPS:

    $ python lerobot/scripts/control_robot.py teleoperate \
        --robot-path lerobot/configs/robot/aloha.yaml --fps 30
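
On Linux, one common way to locate the arms' USB serial adapters and the cameras is to list the stable device paths that udev creates. This is a general sketch (the exact device names depend on your hardware), not a LeRobot command:

```shell
# Stable symlinks for USB serial adapters (the robot arms appear here)
ls /dev/serial/by-id/ 2>/dev/null || echo "no serial devices attached"

# Stable symlinks for video devices (the cameras appear here)
ls /dev/v4l/by-id/ 2>/dev/null || echo "no video devices attached"
```

Unplugging and replugging a device while watching this listing makes it easy to tell which entry belongs to which arm or camera.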

Customizing Teleoperation with Hydra

You can override the default YAML configurations dynamically using Hydra syntax. For example, to change the USB ports of the leader and follower arms:

$ python lerobot/scripts/control_robot.py teleoperate \
    --robot-path lerobot/configs/robot/aloha.yaml \
    --robot-overrides \
       leader_arms.main.port=/dev/tty.usbmodem575E0031751 \
       follower_arms.main.port=/dev/tty.usbmodem575E0032081

If you don’t have any cameras connected, you can exclude them using Hydra’s syntax:

$ python lerobot/scripts/control_robot.py teleoperate \
    --robot-path lerobot/configs/robot/aloha.yaml \
    --robot-overrides '~cameras'

Recording Data Episodes

The system supports episode-based data collection, where episodes are time-bounded sequences of robot actions.

  1. Control the recording flow with these arguments:

    • --warmup-time-s: Number of seconds for device warmup (default: 10s)
    • --episode-time-s: Number of seconds per episode (default: 60s)
    • --reset-time-s: Time for resetting after each episode (default: 60s)
    • --num-episodes: Number of episodes to record (default: 50)

    Example:

    $ python lerobot/scripts/control_robot.py record \
        --robot-path lerobot/configs/robot/aloha.yaml \
        --fps 30 \
        --root data \
        --repo-id ${HF_USER}/aloha_test \
        --tags tutorial \
        --warmup-time-s 5 \
        --episode-time-s 30 \
        --reset-time-s 30 \
        --num-episodes 2

Note

  1. The --num-episodes argument defines the total number of episodes to collect. The script checks the output directory for previously recorded episodes and resumes recording from the last one.
  2. The recorded data is pushed to the Hugging Face Hub by default; you can disable this with --push_to_hub 0.

Note

  1. To push your dataset to Hugging Face’s Hub, log in with a write-access token:

    $ huggingface-cli login --token ${HUGGINGFACE_TOKEN} --add-to-git-credential

  2. Set your Hugging Face username as a variable for ease:

    $ HF_USER=$(huggingface-cli whoami | head -n 1)

Visualizing Datasets

To visualize all the episodes recorded in your dataset, run:

$ python lerobot/scripts/visualize_dataset_html.py \
    --root data \
    --repo-id ${HF_USER}/aloha_test

To visualize a single dataset episode from the Hugging Face Hub:

$ python lerobot/scripts/visualize_dataset.py \
    --repo-id ${HF_USER}/aloha_static_block_pickup \
    --episode-index 0

To visualize a single dataset episode stored locally:

$ DATA_DIR='./my_local_data_dir' python lerobot/scripts/visualize_dataset.py \
    --repo-id TrossenRoboticsCommunity/aloha_static_block_pickup \
    --episode-index 0

Replay Recorded Episodes

Replaying episodes allows you to test the repeatability of the robot’s actions. To replay the first episode of your recorded dataset:

$ python lerobot/scripts/control_robot.py replay \
    --robot-path lerobot/configs/robot/aloha.yaml \
    --fps 30 \
    --root data \
    --repo-id ${HF_USER}/aloha_test \
    --episode 0

Tip

Use different --fps values to adjust the frequency of the robot actions.

Training

To train a policy for controlling your robot, use the following command:

$ DATA_DIR=data python lerobot/scripts/train.py \
    dataset_repo_id=${HF_USER}/aloha_test \
    policy=act_aloha_real \
    env=aloha_real \
    hydra.run.dir=outputs/train/act_aloha_test \
    hydra.job.name=act_aloha_test \
    device=cuda \
    wandb.enable=false

Note

The arguments are explained below:

  1. We provided the dataset with dataset_repo_id=${HF_USER}/aloha_test.
  2. The policy is specified with policy=act_aloha_real. This configuration is loaded from lerobot/configs/policy/act_aloha_real.yaml.
  3. The environment is set with env=aloha_real. This configuration is loaded from lerobot/configs/env/aloha_real.yaml.
  4. The device is set to cuda to utilize an NVIDIA GPU for training.
  5. Setting wandb.enable=true enables visualization of training plots via Weights and Biases (https://docs.wandb.ai/quickstart); the example above disables it. Ensure you are logged in by running wandb login before enabling it.

Upload Policy Checkpoints

Once training is complete, upload the latest checkpoint with:

$ huggingface-cli upload ${HF_USER}/act_aloha_test \
    outputs/train/act_aloha_test/checkpoints/last/pretrained_model

To upload intermediate checkpoints:

$ CKPT=010000
$ huggingface-cli upload ${HF_USER}/act_aloha_test_${CKPT} \
    outputs/train/act_aloha_test/checkpoints/${CKPT}/pretrained_model

Evaluation

To control your robot with the trained policy and record evaluation episodes:

$ python lerobot/scripts/control_robot.py record \
    --robot-path lerobot/configs/robot/aloha.yaml \
    --fps 30 \
    --root data \
    --repo-id ${HF_USER}/eval_aloha_test \
    --tags tutorial eval \
    --warmup-time-s 5 \
    --episode-time-s 30 \
    --reset-time-s 30 \
    --num-episodes 10 \
    -p outputs/train/act_aloha_test/checkpoints/last/pretrained_model

This command is similar to the one used for recording training datasets, with a couple of key changes:

  1. The -p argument is now included, which specifies the path to your policy checkpoint (e.g., -p outputs/train/act_aloha_test/checkpoints/last/pretrained_model). You can also refer to the model repository on Hugging Face if you have uploaded a model checkpoint there (e.g., -p ${HF_USER}/act_aloha_test).
  2. The dataset name begins with eval, reflecting that you are running inference (e.g., --repo-id ${HF_USER}/eval_aloha_test).

You can visualize the evaluation dataset afterward using:

$ python lerobot/scripts/visualize_dataset.py \
    --root data \
    --repo-id ${HF_USER}/eval_aloha_test

Trossen Robotics Community

Pretrained Models

You can download pretrained models from the Trossen Robotics Community on Hugging Face and use them for evaluation purposes. To run evaluation on the pretrained models, use the following command:

$ python lerobot/scripts/control_robot.py record \
    --robot-path lerobot/configs/robot/aloha.yaml \
    --fps 30 \
    --root data \
    --repo-id ${HF_USER}/eval_aloha_test \
    --tags tutorial eval \
    --warmup-time-s 5 \
    --episode-time-s 30 \
    --reset-time-s 30 \
    --num-episodes 10 \
    -p ${HF_USER}/act_aloha_test

Datasets for Training and Augmentation

Datasets can also be downloaded from the Trossen Robotics Community on Hugging Face for further training or data augmentation. These datasets can be used with your preferred network architectures. Instructions for downloading and using these datasets can be found at the following link:

Dataset Download and Upload Instructions

Trossen Robotics Community

Troubleshooting

Warning

If you encounter issues, follow these troubleshooting steps:

  1. OpenCV Installation Issues (Linux)

    If you encounter OpenCV installation issues, uninstall it via pip and reinstall using Conda:

    $ pip uninstall opencv-python
    $ conda install -c conda-forge opencv=4.10.0

  2. FFmpeg Encoding Error (`unknown encoder libsvtav1`)

    Install FFmpeg with libsvtav1 support via Conda-Forge or Homebrew:

    $ conda install -c conda-forge ffmpeg

    Or:

    $ brew install ffmpeg

  3. Arrow Keys Not Working During Data Recording (Linux)

    Ensure that the $DISPLAY environment variable is set correctly.

  4. Check out the LeRobot documentation for further help and details:

    LeRobot Github
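
For the $DISPLAY issue, a quick check can be done from the terminal. Note that :0 is only a common default for a local desktop session, so verify the correct value for your setup:

```shell
# Print the current value; "unset" means keyboard capture will likely fail
echo "DISPLAY=${DISPLAY:-unset}"

# On a local desktop session, :0 is a typical value (assumption; adjust to your session):
# export DISPLAY=:0
```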