
Habitat-Lab

Habitat-Lab is a modular high-level library for end-to-end development in embodied AI -- defining embodied AI tasks (e.g. navigation, rearrangement, instruction following, question answering), configuring embodied agents (physical form, sensors, capabilities), training these agents (via imitation or reinforcement learning, or no learning at all as in SensePlanAct pipelines), and benchmarking their performance on the defined tasks using standard metrics.

Habitat-Lab uses Habitat-Sim as the core simulator. For documentation, see the Documentation section below.

Habitat Demo


Table of contents

  1. Citing Habitat
  2. Installation
  3. Testing
  4. Documentation
  5. Docker Setup
  6. Datasets
  7. Baselines
  8. License

Citing Habitat

If you use the Habitat platform in your research, please cite the Habitat 1.0 and Habitat 2.0 papers:

@inproceedings{szot2021habitat,
  title     =     {Habitat 2.0: Training Home Assistants to Rearrange their Habitat},
  author    =     {Andrew Szot and Alex Clegg and Eric Undersander and Erik Wijmans and Yili Zhao and John Turner and Noah Maestre and Mustafa Mukadam and Devendra Chaplot and Oleksandr Maksymets and Aaron Gokaslan and Vladimir Vondrus and Sameer Dharur and Franziska Meier and Wojciech Galuba and Angel Chang and Zsolt Kira and Vladlen Koltun and Jitendra Malik and Manolis Savva and Dhruv Batra},
  booktitle =     {Advances in Neural Information Processing Systems (NeurIPS)},
  year      =     {2021}
}

@inproceedings{habitat19iccv,
  title     =     {Habitat: {A} {P}latform for {E}mbodied {AI} {R}esearch},
  author    =     {Manolis Savva and Abhishek Kadian and Oleksandr Maksymets and Yili Zhao and Erik Wijmans and Bhavana Jain and Julian Straub and Jia Liu and Vladlen Koltun and Jitendra Malik and Devi Parikh and Dhruv Batra},
  booktitle =     {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year      =     {2019}
}

Installation

  1. Preparing conda env

    Assuming you have conda installed, let's prepare a conda env:

    # We require python>=3.9 and cmake>=3.14
    conda create -n habitat python=3.9 cmake=3.14.0
    conda activate habitat
  2. conda install habitat-sim

    • To install habitat-sim with bullet physics
      conda install habitat-sim withbullet -c conda-forge -c aihabitat
      
      See Habitat-Sim's installation instructions for more details.
  3. pip install habitat-lab (stable version).

    git clone --branch stable https://github.com/facebookresearch/habitat-lab.git
    cd habitat-lab
    pip install -e habitat-lab  # install habitat_lab
  4. Install habitat-baselines.

    The command above installs only the core of Habitat-Lab. To include habitat_baselines along with all additional requirements, run the command below after installing habitat-lab:

    pip install -e habitat-baselines  # install habitat_baselines
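
To sanity-check the installation, you can try importing the packages from the steps above (a quick check, assuming every step completed without errors):

    python -c "import habitat, habitat_sim, habitat_baselines"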

Testing

  1. Let's download some 3D assets using Habitat-Sim's python data download utility:

    • Download (testing) 3D scenes:

      python -m habitat_sim.utils.datasets_download --uids habitat_test_scenes --data-path data/

      Note that these testing scenes do not provide semantic annotations.

    • Download point-goal navigation episodes for the test scenes:

      python -m habitat_sim.utils.datasets_download --uids habitat_test_pointnav_dataset --data-path data/
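
    To quickly verify the downloads, list what landed under data/ (a quick check; the exact directory layout may vary across versions):

      find data -maxdepth 3 -type d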
  2. Non-interactive testing: test the Pick task by running the example pick-task script

    python examples/example.py

    which uses habitat-lab/habitat/config/benchmark/rearrange/pick.yaml to configure the task and agent. The script roughly does the following:

    import gym
    import habitat.gym
    
    # Load embodied AI task (RearrangePick) and a pre-specified virtual robot
    env = gym.make("HabitatRenderPick-v0")
    observations = env.reset()
    
    terminal = False
    
    # Step through environment with random actions
    while not terminal:
        observations, reward, terminal, info = env.step(env.action_space.sample())

    To modify some of the environment's configuration, you can also use the habitat.gym.make_gym_from_config method, which creates a Habitat environment from a configuration object.

    config = habitat.get_config(
      "benchmark/rearrange/pick.yaml",
      overrides=["habitat.environment.max_episode_steps=20"]
    )
    env = habitat.gym.make_gym_from_config(config)

    If you want to know more about what the different configuration key overrides do, you can use this reference.

    See examples/register_new_sensors_and_measures.py for an example of how to extend habitat-lab from outside the source code.
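
    Putting the two snippets together, a minimal end-to-end sketch (it uses only the APIs shown above; the override caps each episode at 20 steps):

    import habitat
    import habitat.gym

    # Build a config with the episode-length override, then wrap it in a gym env
    config = habitat.get_config(
        "benchmark/rearrange/pick.yaml",
        overrides=["habitat.environment.max_episode_steps=20"],
    )
    env = habitat.gym.make_gym_from_config(config)

    # Step with random actions; terminal flips after at most 20 steps
    observations = env.reset()
    terminal = False
    while not terminal:
        observations, reward, terminal, info = env.step(env.action_space.sample())
    env.close()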

  3. Interactive testing: use your keyboard and mouse to control a Fetch robot in a ReplicaCAD environment:

    # Pygame for interactive visualization, pybullet for inverse kinematics
    pip install pygame==2.0.1 pybullet==3.0.4
    
    # Interactive play script
    python examples/interactive_play.py --never-end

    Use the I/J/K/L keys to move the robot base forward/left/backward/right, W/A/S/D to move the arm end-effector forward/left/backward/right, and E/Q to move the arm up/down. The arm can be difficult to control via end-effector control; see the documentation for more details. Try to move the base and the arm to touch the red bowl on the table. Have fun!

    Note: Interactive testing currently fails on Ubuntu 20.04 with the error X Error of failed request: BadAccess (attempt to access private resource denied). We are working on a fix and will update these instructions once it lands. The script runs without errors on macOS.

Debugging an environment issue

Our vectorized environments are very fast, but they are not very verbose. When using VectorEnv, some errors may be silenced, resulting in hung processes or multiprocessing errors that are hard to interpret. We recommend setting the environment variable HABITAT_ENV_DEBUG to 1 when debugging (export HABITAT_ENV_DEBUG=1), as this switches to the slower but more verbose ThreadedVectorEnv class. Do not forget to unset HABITAT_ENV_DEBUG (unset HABITAT_ENV_DEBUG) when you are done debugging, since VectorEnv is much faster than ThreadedVectorEnv.
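
For example, a typical debugging session looks like this (my_training_script.py is a hypothetical placeholder for whatever entry point you are debugging):

    # Switch to the slower but more verbose ThreadedVectorEnv
    export HABITAT_ENV_DEBUG=1
    python my_training_script.py

    # Restore the fast VectorEnv once you are done
    unset HABITAT_ENV_DEBUG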

Documentation

Browse the online Habitat-Lab documentation and the extensive tutorial on how to train your agents with Habitat. For Habitat 2.0, use this quickstart guide.

Docker Setup

We provide Docker containers for Habitat, updated approximately once per year for the Habitat Challenge. This works on machines with an NVIDIA GPU and requires users to install nvidia-docker. To set up the Habitat stack using Docker, follow the steps below:

  1. Pull the habitat docker image: docker pull fairembodied/habitat-challenge:testing_2022_habitat_base_docker

  2. Start an interactive bash session inside the habitat docker: docker run --runtime=nvidia -it fairembodied/habitat-challenge:testing_2022_habitat_base_docker

  3. Activate the habitat conda environment: conda init; source ~/.bashrc; source activate habitat

  4. Run the testing scripts as above: cd habitat-lab; python examples/example.py. This should print output like:

    Agent acting inside environment.
    Episode finished after 200 steps.
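
Steps 2-4 can also be collapsed into a single non-interactive command (an untested sketch that assumes conda is already initialized in the image's ~/.bashrc):

    docker run --runtime=nvidia -it fairembodied/habitat-challenge:testing_2022_habitat_base_docker \
        bash -c "source ~/.bashrc && source activate habitat && cd habitat-lab && python examples/example.py"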

Questions?

Can't find the answer to your question? Try asking the developers and community on our Discussions forum.

Datasets

Common task and episode datasets used with Habitat-Lab.

Baselines

Habitat-Lab includes reinforcement learning (via PPO) baselines. For running PPO training on sample data and more details, refer to habitat_baselines/README.md.
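
A typical PPO training invocation looks like the following (the config name is an example; check habitat_baselines/README.md for the configs shipped with your release):

    # Launch PPO training with one of the example configs
    python -u -m habitat_baselines.run \
        --config-name=pointnav/ppo_pointnav_example.yaml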

ROS-X-Habitat

ROS-X-Habitat (https://github.com/ericchen321/ros_x_habitat) is a framework that bridges the AI Habitat platform (Habitat Lab + Habitat Sim) with other robotics resources via ROS. Compared with Habitat-PyRobot, ROS-X-Habitat emphasizes 1) leveraging Habitat Sim v2's physics-based simulation capability and 2) allowing roboticists to access simulation assets from ROS. The work is also described in a paper.

Note that ROS-X-Habitat was developed and is maintained by the Lab for Computational Intelligence at UBC; it is not officially supported by the Habitat Lab team. Please refer to the framework's repository for docs and discussions.

License

Habitat-Lab is MIT licensed. See the LICENSE file for details.

The trained models and the task datasets are considered data derived from the corresponding scene datasets.
