Commit `90bc1b9` — 0 parents, showing 132 changed files with 22,003 additions and 0 deletions.
**`.gitignore`**

```
*.tar
*.pyc
*.pt
*.ply
**/*.egg-info/
*.egg
.DS_Store
.TimeRecord
.vscode
*.pth
build
dist

*.swp
*.swo
*.swn
*.r3d
*.npy

*.TimeRecord

*.pkl
*.mp4
data
checkpoints

evaluation/output
data_example
*.zip
```
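The ignore rules above can be sanity-checked with `git check-ignore`. A minimal sketch, assuming git is installed; it uses a throwaway repository and only a subset of the patterns, and the file names (`model.pth`, `notes.md`) are made up for illustration:

```shell
# Sanity-check a few of the ignore rules in a throwaway repository.
# check-ignore lists the given paths that match a rule; the paths need not exist.
tmp=$(mktemp -d)
git init -q "$tmp"
printf '%s\n' '*.pt' '*.pth' 'data' 'checkpoints' > "$tmp/.gitignore"
ignored=$(git -C "$tmp" check-ignore model.pth checkpoints/ckpt.pt notes.md || true)
echo "$ignored"
```

Note that `checkpoints/ckpt.pt` is matched through its parent directory: the bare `checkpoints` pattern ignores the whole tree beneath it.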
**`.gitmodules`**

```
[submodule "third_party/segment-anything-2"]
    path = third_party/segment-anything-2
    url = https://github.com/facebookresearch/segment-anything-2.git
[submodule "third_party/GroundingDINO"]
    path = third_party/GroundingDINO
    url = https://github.com/IDEA-Research/GroundingDINO.git
[submodule "third_party/recognize-anything"]
    path = third_party/recognize-anything
    url = https://github.com/xinyu1205/recognize-anything.git
[submodule "third_party/LightGlue"]
    path = third_party/LightGlue
    url = https://github.com/cvg/LightGlue.git
[submodule "third_party/pytorch3d"]
    path = third_party/pytorch3d
    url = https://github.com/facebookresearch/pytorch3d.git
[submodule "third_party/DROID-SLAM"]
    path = third_party/DROID-SLAM
    url = https://github.com/princeton-vl/DROID-SLAM.git
```
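Because these six dependencies come in as submodules, a plain clone leaves `third_party/` empty; they are fetched with `git clone --recursive <repo-url>` or, in an existing checkout, `git submodule update --init --recursive`. The declared submodules can also be enumerated straight from `.gitmodules` — a small sketch (assumes git is installed; it writes a self-contained two-entry sample of the file above rather than touching a real checkout):

```shell
# List declared submodule paths from a .gitmodules file via `git config -f`.
# Demonstrated on a two-entry sample so the snippet is self-contained.
tmp=$(mktemp -d)
cat > "$tmp/.gitmodules" <<'EOF'
[submodule "third_party/GroundingDINO"]
    path = third_party/GroundingDINO
    url = https://github.com/IDEA-Research/GroundingDINO.git
[submodule "third_party/DROID-SLAM"]
    path = third_party/DROID-SLAM
    url = https://github.com/princeton-vl/DROID-SLAM.git
EOF
# --get-regexp prints "submodule.<name>.path <value>"; keep the value column.
paths=$(git config -f "$tmp/.gitmodules" --get-regexp '^submodule\..*\.path$' | awk '{print $2}')
echo "$paths"
```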
**`README.md`**

# DovSG
## Dynamic Open-Vocabulary 3D Scene Graphs for Long-term Language-Guided Mobile Manipulation

<!-- ### 📢 News
- 🎉 **2024-10-01**: Accepted by **T-RO '24**!
- 🚀 **2024-07-02**: Conditionally accepted. -->

## 1 Introduction

**DovSG** constructs a dynamic 3D scene graph and leverages task decomposition with large language models, enabling localized updates of the scene graph during interactive exploration. This helps mobile robots accurately execute long-term tasks, even in scenarios where humans have modified the environment.
**Contributors**: [Zhijie Yan](https://bjhyzj.github.io), [Shufei Li](https://scholar.google.com/citations?user=CpCQmkwAAAAJ&hl=en), [Zuoxu Wang](https://scholar.google.com/citations?user=kja7k5MAAAAJ&hl=en), [Lixiu Wu](https://scholar.google.com/citations?user=ziAzfCoAAAAJ&hl=en), Han Wang, Jun Zhu, Lijiang Chen, Jihong Liu

<div align="center">
  <img src="docs/img/framework.png" width="100%">
</div>
### 1.1 Our paper

Our paper is now available on **arXiv**: [Dynamic Open-Vocabulary 3D Scene Graphs for Long-term Language-Guided Mobile Manipulation](https://arxiv.org/pdf/2410.11989).

If our code is used in your project, please cite our paper using the BibTeX entry below:

```
@article{yan2024dynamic,
  title={Dynamic Open-Vocabulary 3D Scene Graphs for Long-term Language-Guided Mobile Manipulation},
  author={Yan, Zhijie and Li, Shufei and Wang, Zuoxu and Wu, Lixiu and Wang, Han and Zhu, Jun and Chen, Lijiang and Liu, Jihong},
  journal={arXiv preprint arXiv:2410.11989},
  year={2024}
}
```
### 1.2 Our demo

Our accompanying demo is now available on [**YouTube**](https://www.youtube.com/watch?v=xmUCHzE6EYc) and on our [**Project Page**](https://bjhyzj.github.io/dovsg-web/).

<div align="center">
  <a href="https://www.youtube.com/watch?v=xmUCHzE6EYc" target="_blank">Demo video</a>
</div>
## 2 Prerequisites
- We set up all the necessary environments on a Lenovo Y9000K laptop running Ubuntu 20.04, equipped with an NVIDIA RTX 4090 GPU with 16GB of VRAM.

- We used a real-world setup with a <a href="https://www.cn.ufactory.cc/xarm">UFACTORY xArm6</a> robotic arm on an <a href="https://www.agilex.ai/chassis/6">AgileX Ranger Mini 3</a> mobile base, equipped with a <a href="https://www.intelrealsense.com/depth-camera-d455/">RealSense D455</a> camera for perception and a basket for item transport.
### 2.1 Ubuntu and ROS
Ubuntu 20.04. [ROS Installation](http://wiki.ros.org/ROS/Installation).

### 2.2 Environment Setup
- Install the **DROID-SLAM** environment, used for scanning the room: [install_droidslam.md](docs/install_droidslam.md).

- Install the **DovSG** environment: [install_dovsg.md](docs/install_dovsg.md).

- Download the checkpoints: [down_checkpoints.md](docs/down_checkpoints.md).
## 3 Run DovSG

### 3.1 Run our demo
You can directly download the pre-recorded scenes we provide from <a href="https://drive.google.com/drive/folders/13v5QOrqjxye__kJwDIuD7kTdeSSNfR5x?usp=sharing">Google Drive</a>. Place them in the project's root directory, specifically in `DovSG/data_example`, and set the tag to `your_name_of_scene`, such as `room1`.

```bash
python demo.py --tags room1 --preprocess --debug --task_scene_change_level "Minor Adjustment" --task_description "Please move the red pepper to the plate, then move the green pepper to plate."
```
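Before launching, it helps to confirm that the downloaded scene actually sits where the command expects it. A hypothetical helper — the `check_scene` name is ours, and the `data_example/<tag>` layout is inferred from the paths above, not taken from the project:

```shell
# Hypothetical pre-flight helper: confirm the scene folder exists under
# <root>/data_example/<tag> before launching demo.py. The function name and
# the assumed layout are inferred from the paths above, not from the project.
check_scene() {
  local root="$1" tag="$2"
  if [ -d "$root/data_example/$tag" ]; then
    echo "ok"
  else
    echo "missing: $root/data_example/$tag"
  fi
}
```

For example, `check_scene . room1` from the repository root should print `ok` once the scene is in place.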
### 3.2 Run on a real-world workstation
To configure the AgileX Ranger Mini, refer to <a href="https://github.com/agilexrobotics/ranger_ros">ranger_ros</a>.

3.2.1 Scan the room to build the memory:
```bash
python demo.py --tags your_name_of_scene --scanning_room --preprocess --task_scene_change_level your_task_scene_change_level --task_description your_task_description
```
3.2.2 In one terminal, run the hardware code (`hardcode`):
```bash
cd hardcode
source ~/agilex_ws/devel/setup.bash
rosrun ranger_bringup bringup_can2usb.bash
roslaunch ranger_bringup ranger_mini_v2.launch

# You need to replace the port with your own.
python server.py
```
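The CAN bringup step assumes the mobile base's CAN adapter is already visible to the kernel. A small pre-flight sketch; the interface name `can0`, the `can_present` helper name, and the use of iproute2's `ip` are our assumptions, not part of the project:

```shell
# Hypothetical pre-flight check before bringup_can2usb.bash: report whether a
# CAN network interface is visible to the kernel. The interface name (can0)
# and the use of iproute2's `ip` are assumptions, not from the project.
can_present() {
  if ip link show "$1" > /dev/null 2>&1; then
    echo "present"
  else
    echo "absent"
  fi
}
can_present can0
```

If this prints `absent`, re-seat the CAN-to-USB adapter before continuing with the bringup scripts.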
3.2.3 In another terminal, run the navigation and manipulation module:
```bash
python demo.py --tags your_name_of_scene --preprocess --task_scene_change_level your_task_scene_change_level --task_description your_task_description
```
## Reference
- OK-Robot: [https://ok-robot.github.io/](https://ok-robot.github.io/)
- RoboEXP: [https://github.com/Jianghanxiao/RoboEXP](https://github.com/Jianghanxiao/RoboEXP)
- ConceptGraphs: [https://github.com/concept-graphs/concept-graphs](https://github.com/concept-graphs/concept-graphs)
**`.gitignore`** (second ignore file in the commit)

```
__pycache__/
.idea/

# Dataset files.
datasets/7scenes_*
datasets/12scenes_*
datasets/Cambridge_*
datasets/wayspots_*
datasets/pgt_*
datasets/visloc_pseudo_gt_limitations

# Outputs.
output/
```