Merge pull request #16 from opendilab/gl-dev
update 0.3.3
RobinC94 authored Jun 6, 2022
2 parents 33a3ba0 + 230022b commit 6f8632e
Showing 16 changed files with 1,504 additions and 85 deletions.
10 changes: 8 additions & 2 deletions .gitignore
Original file line number Diff line number Diff line change
Expand Up @@ -1402,10 +1402,16 @@ htmlcov
*.lock
.coverage*

project_test.py
## DI-engine
*total_config.py
openaigym*

## Carla
*.csv
*.avi

.vscode
project_test.py
*episode_metainfo.json
*measurements.lmdb
*index.txt
openaigym*
6 changes: 6 additions & 0 deletions CHANGELOG
Original file line number Diff line number Diff line change
@@ -1,3 +1,9 @@
## v0.3.3 (2022.6.5)
- Update readme, reorganize info
- Add MetaDriveTrajEnv and doc
- Add utils for MetaDrive
- Modify utils for macro env

## v0.3.2 (2022.4.24)
- Update banner logo
- Update to DI-engine 0.3, modify env properties
Expand Down
167 changes: 140 additions & 27 deletions README.md
Original file line number Diff line number Diff line change
Expand Up @@ -2,44 +2,73 @@

<img src="./docs/figs/di-drive_banner.png" alt="icon"/>

Updated on 2022.4.16 DI-drive-v0.3.2 (beta)

DI-drive - Decision Intelligence Platform for Autonomous Driving simulation.

DI-drive is application platform under [OpenDILab](http://opendilab.org/)

![icon](./docs/figs/big_cam_auto.png)

## Introduction

**DI-drive** is an open-source application platform under **OpenDILab**. DI-drive applies different simulator/datasets/cases in **Decision Intelligence** Training & Testing for **Autonomous Driving** Policy.
**DI-drive** is an open-source Decision Intelligence Platform for Autonomous Driving simulation. DI-drive applies various simulators/datasets/cases in **Decision Intelligence** Training & Testing for **Autonomous Driving** policies.
It aims to

- run Imitation Learning, Reinforcement Learning, GAIL etc. in a single platform and simple unified entry
- apply Decision Intelligence in any parts of driving simulation
- apply Decision Intelligence in any part of the driving simulation
- suit most of the driving simulators input & output
- run designed driving cases and scenarios

and most importantly, to **put these all together!**

**DI-drive** uses [DI-engine](https://github.com/opendilab/DI-engine), a Reinforcement Learning
platform to build most of the running modules and demos. **DI-drive** currently supports [Carla](http://carla.org),
an open-source Autonomous Drining simulator to operate driving simualtion, and [MetaDrive](https://decisionforce.github.io/metadrive/),
a diverse driving scenarios for Generalizable Reinforcement Learning. Users can specify any of them to run in global config under `core`.
an open-source Autonomous Driving simulator to operate driving simulation, and [MetaDrive](https://decisionforce.github.io/metadrive/),
a simulator offering diverse driving scenarios for Generalizable Reinforcement Learning. DI-drive is an application platform under [OpenDILab](http://opendilab.org/).

![icon](./docs/figs/big_cam_auto.png)
<p align="center"> Visualization of Carla driving in DI-drive </p>

## Outline

* [Installation](#installation)
* [Quick Start](#quick-start)
* [Model Zoo](#model-zoo)
* [Casezoo](#di-drive-casezoo)
* [File Structure](#file-structure)
* [Contributing](#contributing)
* [License](#license)
* [Citation](#citation)

## Installation

**DI-drive** needs to have the following modules installed:
**DI-drive** runs with **Python >= 3.5** and **DI-engine >= 0.3.1** (Pytorch is needed in DI-engine). You can install DI-drive from the source code:

```bash
git clone https://gitlab.bj.sensetime.com/open-XLab/cell/xad.git
cd xad
pip install -e .
```

DI-engine and Pytorch will be installed automatically.

In addition, at least one of the two simulators, Carla or MetaDrive, needs to be runnable in DI-drive. [MetaDrive](https://decisionforce.github.io/metadrive/) can be easily installed via `pip`.
If a [Carla](http://carla.org) server is used for simulation, users also need to install the 'Carla Python API'. You can use either one of them or both. Make sure to modify the activated simulators in `core/__init__.py` to avoid import errors.
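
The exact contents of `core/__init__.py` vary between versions; the following is only a hypothetical sketch of what the simulator switch might look like, based on the `if 'metadrive' in SIMULATORS:` check visible in this diff:

```python
# core/__init__.py (hypothetical sketch -- the real file may differ).
# List only the simulators you actually have installed; the environment
# registration code checks this list before importing simulator-specific
# modules, so leaving out an uninstalled backend avoids import errors.
SIMULATORS = ['metadrive']  # add 'carla' only if the Carla Python API is installed

if 'metadrive' in SIMULATORS:
    # MetaDrive-backed environments would be imported and registered here.
    pass
```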

- Pytorch
- DI-engine
Please refer to the [installation guide](https://opendilab.github.io/DI-drive/installation/index.html) for details about the installation of **DI-drive**.

[MetaDrive](https://decisionforce.github.io/metadrive/) can be easily installed via `pip`.
If [Carla](http://carla.org) server is used for simulation, users need to install 'Carla Python API' in addition.
Please refer to the [documentation](https://opendilab.github.io/DI-drive/) for details about installation and user guide of **DI-drive**.
We provide IL and RL tutorials, and full guidance for quick run existing policy for beginners.
## Quick Start

Please refer to [FAQ](https://opendilab.github.io/DI-drive/faq/index.html) for frequently asked questions.
### Carla

Users can check the installation of Carla and watch the visualization by running an 'auto' policy in a provided town map. You need to start a Carla server first and change the Carla host and port in `auto_run.py` to yours. Then run:

```bash
cd demo/auto_run
python auto_run.py
```

### MetaDrive

After installation of MetaDrive, you can start an RL training in MetaDrive Macro Environment by running the following code:

```bash
cd demo/metadrive
python macro_env_dqn_train.py
```

We provide detailed guidance for IL and RL experiments in all simulators, as well as quick runs of existing policies for beginners, in our [documentation](https://opendilab.github.io/DI-drive/). Please refer to it if you have further questions.

## Model Zoo

Expand All @@ -58,15 +87,99 @@ Please refer to [FAQ](https://opendilab.github.io/DI-drive/faq/index.html) for f

## DI-drive Casezoo

**DI-drive Casezoo** is a scenario set for training and testing of Autonomous Driving policy in simulator.
**Casezoo** combines data collected by real vehicles and Shanghai Lingang road license test Scenarios.
**Casezoo** supports both evaluating and training, whick makes the simulation closer to real driving.
**DI-drive Casezoo** is a scenario set for training and testing Autonomous Driving policies in simulators.
**Casezoo** combines data collected from actual vehicles with Shanghai Lingang road license test scenarios.
**Casezoo** supports both evaluating and training, which makes the simulation closer to real driving.

Please see [casezoo instruction](docs/casezoo_instruction.md) for details about **Casezoo**.

## Contributing
## File Structure

```
DI-drive
|-- .gitignore
|-- .style.yapf
|-- CHANGELOG
|-- LICENSE
|-- README.md
|-- format.sh
|-- setup.py
|-- core
| |-- data
| | |-- base_collector.py
| | |-- benchmark_dataset_saver.py
| | |-- bev_vae_dataset.py
| | |-- carla_benchmark_collector.py
| | |-- cict_dataset.py
| | |-- cilrs_dataset.py
| | |-- lbc_dataset.py
| | |-- benchmark
| | |-- casezoo
| | |-- srunner
| |-- envs
| | |-- base_drive_env.py
| | |-- drive_env_wrapper.py
| | |-- md_macro_env.py
| | |-- md_traj_env.py
| | |-- scenario_carla_env.py
| | |-- simple_carla_env.py
| |-- eval
| | |-- base_evaluator.py
| | |-- carla_benchmark_evaluator.py
| | |-- serial_evaluator.py
| | |-- single_carla_evaluator.py
| |-- models
| | |-- bev_speed_model.py
| | |-- cilrs_model.py
| | |-- common_model.py
| | |-- lbc_model.py
| | |-- model_wrappers.py
| | |-- mpc_controller.py
| | |-- pid_controller.py
| | |-- vae_model.py
| | |-- vehicle_controller.py
| |-- policy
| | |-- auto_policy.py
| | |-- base_carla_policy.py
| | |-- cilrs_policy.py
| | |-- lbc_policy.py
| |-- simulators
| | |-- base_simulator.py
| | |-- carla_data_provider.py
| | |-- carla_scenario_simulator.py
| | |-- carla_simulator.py
| | |-- fake_simulator.py
| | |-- srunner
| |-- utils
| |-- data_utils
| |-- env_utils
| |-- learner_utils
| |-- model_utils
| |-- others
| |-- planner
| |-- simulator_utils
|-- demo
| |-- auto_run
| |-- cict
| |-- cilrs
| |-- implicit
| |-- latent_rl
| |-- lbc
| |-- metadrive
| |-- simple_rl
|-- docs
| |-- casezoo_instruction.md
| |-- figs
| |-- source
```

## Join and Contribute

We appreciate all contributions to improve DI-drive, both algorithms and system designs. Welcome to the OpenDILab community! Scan the QR code and add us on WeChat:

<div align=center><img width="250" height="250" src="./docs/figs/qr.png" alt="qr"/></div>

We appreciate all contributions to improve DI-drive, both algorithms and system designs.
Or you can contact us via [slack](https://opendilab.slack.com/join/shared_invite/zt-v9tmv4fp-nUBAQEH1_Kuyu_q4plBssQ#/shared-invite/email) or email ([email protected]).

## License

Expand Down
2 changes: 1 addition & 1 deletion core/__init__.py
Original file line number Diff line number Diff line change
@@ -1,5 +1,5 @@
__TITLE__ = "DI-drive"
__VERSION__ = "0.3.2"
__VERSION__ = "0.3.3"
__DESCRIPTION__ = "Decision AI Auto-Driving Platform"
__AUTHOR__ = "OpenDILab Contributors"
__AUTHOR_EMAIL__ = "[email protected]"
Expand Down
2 changes: 2 additions & 0 deletions core/envs/__init__.py
Original file line number Diff line number Diff line change
Expand Up @@ -22,8 +22,10 @@

if 'metadrive' in SIMULATORS:
from .md_macro_env import MetaDriveMacroEnv
from .md_traj_env import MetaDriveTrajEnv
env_map.update({
"Macro-v1": 'core.envs.md_macro_env:MetaDriveMacroEnv',
"Traj-v1": 'core.envs.md_traj_env:MetaDriveTrajEnv'
})

for k, v in env_map.items():
Expand Down
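
The `env_map` entries added in the hunk above follow a `"module.path:ClassName"` entry-point convention, registered in a loop. A self-contained sketch of that pattern, using a plain dict registry in place of the real registration call (which this diff elides):

```python
# Sketch of the env_map registration pattern from core/envs/__init__.py.
# The real loop presumably registers each entry point with a framework
# registry; a plain dict stands in here so the sketch runs on its own.
env_map = {
    "Macro-v1": 'core.envs.md_macro_env:MetaDriveMacroEnv',
    "Traj-v1": 'core.envs.md_traj_env:MetaDriveTrajEnv',
}

registry = {}
for env_id, entry_point in env_map.items():
    # Split "module.path:ClassName" into its two components.
    module_path, class_name = entry_point.split(':')
    registry[env_id] = (module_path, class_name)
```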
13 changes: 6 additions & 7 deletions core/envs/md_macro_env.py
Original file line number Diff line number Diff line change
Expand Up @@ -6,14 +6,12 @@
from gym import spaces
from collections import defaultdict
from typing import Union, Dict, AnyStr, Tuple, Optional
from gym.envs.registration import register
import logging

from ding.utils import ENV_REGISTRY
from core.utils.simulator_utils.md_utils.discrete_policy import DiscreteMetaAction
from core.utils.simulator_utils.md_utils.agent_manager_utils import MacroAgentManager
from core.utils.simulator_utils.md_utils.engine_utils import initialize_engine, close_engine, \
engine_initialized, set_global_random_seed, MacroBaseEngine
from core.utils.simulator_utils.md_utils.engine_utils import MacroEngine
from core.utils.simulator_utils.md_utils.traffic_manager_utils import TrafficMode

from metadrive.envs.base_env import BaseEnv
Expand All @@ -22,7 +20,7 @@
# from metadrive.manager.traffic_manager import TrafficMode
from metadrive.component.pgblock.first_block import FirstPGBlock
from metadrive.constants import DEFAULT_AGENT, TerminationState
from metadrive.component.vehicle.base_vehicle import BaseVehicle
from metadrive.engine.base_engine import BaseEngine
from metadrive.utils import Config, merge_dicts, get_np_random, clip

from metadrive.envs.base_env import BASE_DEFAULT_CONFIG
Expand Down Expand Up @@ -142,7 +140,7 @@ def __init__(self, config: dict = None) -> None:
#self.action_space = self.action_type.space()

# lazy initialization, create the main vehicle in the lazy_init() func
self.engine: Optional[MacroBaseEngine] = None
self.engine: Optional[BaseEngine] = None
self._top_down_renderer = None
self.episode_steps = 0
# self.current_seed = None
Expand Down Expand Up @@ -446,9 +444,10 @@ def lazy_init(self):
:return: None
"""
# It is the true init() func to create the main vehicle and its module, to avoid incompatible with ray
if engine_initialized():
if MacroEngine.singleton is not None:
return
self.engine = initialize_engine(self.config)
MacroEngine.singleton = MacroEngine(self.config)
self.engine = MacroEngine.singleton
# engine setup
self.setup_engine()
# other optional initialization
Expand Down
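
The change in `lazy_init` above replaces free functions (`engine_initialized`, `initialize_engine`) with a class-held singleton on `MacroEngine`. A minimal sketch of that lazy-singleton pattern, with simplified hypothetical classes (the real `MacroEngine` carries far more state):

```python
class MacroEngine:
    """Toy stand-in for the engine class; only the singleton slot and the
    stored config mirror the diff."""
    singleton = None

    def __init__(self, config):
        self.config = config


def lazy_init(config):
    # Mirrors the diff: create the engine only on first call, then
    # return the same instance on every later call.
    if MacroEngine.singleton is not None:
        return MacroEngine.singleton
    MacroEngine.singleton = MacroEngine(config)
    return MacroEngine.singleton
```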
