Particle-based Instance-aware Semantic Occupancy Mapping in Dynamic Environments
Gang Chen, Zhaoying Wang, Wei Dong, Javier Alonso-Mora
This repository contains the code for the paper "Particle-based Instance-aware Semantic Occupancy Mapping in Dynamic Environments". The paper is currently under review. The map is an instance-aware, ego-centric semantic occupancy map for dynamic environments containing objects such as vehicles and pedestrians. It also works in static environments.
The C++ source code for mapping is in the include/ folder, and a ROS1 node example that uses the map is given in src/mapping.cpp.
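For orientation, a ROS1 node that feeds the map has roughly the following shape. This is a hypothetical sketch with assumed topic names and callbacks, not the actual contents of src/mapping.cpp:

```cpp
// Hypothetical skeleton of a ROS1 node feeding the map. Topic names and
// callbacks are illustrative assumptions; see src/mapping.cpp for the real node.
#include <ros/ros.h>
#include <sensor_msgs/Image.h>
#include <geometry_msgs/PoseStamped.h>

void depthCallback(const sensor_msgs::ImageConstPtr& msg) {
    // Insert the depth image into the occupancy map here.
}

void poseCallback(const geometry_msgs::PoseStampedConstPtr& msg) {
    // Update the ego-centric map origin with the latest camera pose here.
}

int main(int argc, char** argv) {
    ros::init(argc, argv, "semantic_dsp_map_node");
    ros::NodeHandle nh;
    // A third subscriber would receive the mask_kpts_msgs message that carries
    // segmentation and per-object transformation estimates (type omitted here).
    ros::Subscriber depth_sub = nh.subscribe("/camera/depth", 1, depthCallback);
    ros::Subscriber pose_sub  = nh.subscribe("/camera/pose", 1, poseCallback);
    ros::spin();
    return 0;
}
```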
The following GIFs show mapping results using the Virtual Kitti 2 dataset (left) and a ZED2 camera in the real world (right).
Tested Environment: Ubuntu 20.04 + ROS Noetic
Our code uses yaml-cpp. Follow the instructions in the yaml-cpp repository to install it.
Then download and compile the mapping code:
mkdir -p semantic_map_ws/src
cd semantic_map_ws/src
git clone [email protected]:g-ch/mask_kpts_msgs.git
git clone --recursive [email protected]:tud-amr/semantic_dsp_map.git
cd ..
catkin build
Download a ROS data bag. The bag contains downsampled depth images, camera poses, and a message in mask_kpts_msgs form containing segmentation and transformation estimation results. The data was collected in Delft with a ZED2 camera.
Source the workspace setup file (devel/setup.bash) and launch the test:
roslaunch semantic_dsp_map zed2.launch
rosbag play zed2_clip.bag # Add -r 2 to accelerate playing
Download a ROS data bag. The bag contains depth images, RGB images, and camera poses from the Virtual Kitti 2 dataset, along with a message in mask_kpts_msgs form containing segmentation and transformation estimation results.
NOTE: To use the camera intrinsics and other parameters of the Virtual Kitti 2 dataset, modify include/settings/settings.h by changing #define SETTING 3 to #define SETTING 2, and recompile with catkin build.
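For reference, the relevant line of settings.h looks roughly like the sketch below. This is illustrative only; the assumption that 3 corresponds to the ZED2 configuration used above is ours:

```cpp
// Sketch of the relevant line in include/settings/settings.h (not the full file).
// 2 selects the Virtual Kitti 2 intrinsics; 3 (the default) is presumably the
// ZED2 configuration used in the demo above.
#define SETTING 2   // change back to 3 for the ZED2 demo, then run catkin build
```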
Launch the test:
roslaunch semantic_dsp_map virtual_kitti2.launch
rosbag play clip1.bag
Backup ROS bags: Baidu Yun download link (code: 86if).
To visualize the input segmentation and instance color image and the particle images, and to print cout messages, modify include/settings/settings.h by changing VERBOSE_MODE from 0 to 1, then recompile with catkin build. The image below shows, from left to right, the segmentation and instance image, the particle weight image, and the particle number image.
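The sketch below shows how a compile-time flag like VERBOSE_MODE typically gates debug output. Only the flag name comes from settings.h; the surrounding code is an assumption:

```cpp
// Illustrative only: compile-time gating of debug output. Only the flag name
// VERBOSE_MODE comes from settings.h; everything else here is assumed.
#include <iostream>

#define VERBOSE_MODE 1  // set to 1 (default 0), then recompile with catkin build

int main() {
#if VERBOSE_MODE
    std::cout << "debug images and cout messages enabled" << std::endl;
#endif
    return 0;
}
```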
Our map handles noise in the semantic and instance (panoptic) segmentation images, depth images, poses, and object transformation estimates to build an instance-aware semantic map. Three modes are available: ZED2 Mode (recommended), Superpoint Mode, and Static Mode. See below for the differences and how to use each of them.
The ZED2 camera's SDK is used to obtain depth images, 3D bounding boxes, instance masks, and camera poses, enabling real-time mapping. An additional semantic segmentation network can also be used to assign semantic labels to all objects (a minimal, hypothetical SDK sketch is shown below).
Check ZED2 Mode Instructions for details.
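As a rough illustration (not the repository's actual code), the ZED SDK's documented C++ API (3.x-style) can provide these inputs along the following lines; the parameter choices here are assumptions:

```cpp
// Illustrative sketch of pulling the map's inputs from a ZED2 camera with the
// ZED SDK (3.x-style API). Not the repository's actual code.
#include <sl/Camera.hpp>

int main() {
    sl::Camera zed;
    sl::InitParameters init;
    init.coordinate_units = sl::UNIT::METER;
    if (zed.open(init) != sl::ERROR_CODE::SUCCESS) return 1;

    zed.enablePositionalTracking(sl::PositionalTrackingParameters());
    sl::ObjectDetectionParameters od;
    od.enable_tracking = true;       // keep per-instance ids across frames
    od.enable_mask_output = true;    // request 2D instance masks
    zed.enableObjectDetection(od);

    while (zed.grab() == sl::ERROR_CODE::SUCCESS) {
        sl::Mat depth;
        zed.retrieveMeasure(depth, sl::MEASURE::DEPTH);     // depth image

        sl::Pose pose;
        zed.getPosition(pose, sl::REFERENCE_FRAME::WORLD);  // camera pose

        sl::Objects objects;
        zed.retrieveObjects(objects, sl::ObjectDetectionRuntimeParameters());
        // each sl::ObjectData carries a 3D bounding box, instance id, and mask
    }
    zed.close();
    return 0;
}
```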
This mode uses SuperPoint and SuperGlue to match feature points between adjacent images. The matched feature points on each object (e.g., cars) and the depth image are used to estimate a rough transformation matrix instead of relying on 3D bounding boxes (a rough sketch of this step is shown below).
Check Superpoint Mode Instructions for details.
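A minimal sketch of that estimation step, assuming matched feature points have already been back-projected to 3D with the depth image: Eigen's documented umeyama() estimator recovers a least-squares rigid transform between the two point sets. This illustrates the idea only and is not the repository's exact code:

```cpp
// Sketch: recovering a rough rigid transform of one object from matched 3D
// points in two frames, using Eigen's umeyama() estimator. The point data is
// made up for illustration.
#include <Eigen/Dense>
#include <Eigen/Geometry>
#include <iostream>

int main() {
    // Hypothetical data: N matched 3D points on one object in frames t-1 and t.
    const int N = 4;
    Eigen::Matrix<double, 3, Eigen::Dynamic> prev(3, N), curr(3, N);
    prev << 0, 1, 0, 1,
            0, 0, 1, 1,
            5, 5, 5, 6;
    curr = prev;
    curr.row(0).array() += 0.5;  // the object moved 0.5 m along x

    // Least-squares rotation + translation; scale estimation disabled.
    Eigen::Matrix4d T = Eigen::umeyama(prev, curr, /*with_scaling=*/false);
    std::cout << "estimated object transform:\n" << T << std::endl;
    return 0;
}
```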
This mode does not consider instances or dynamic objects. It takes depth images, semantic segmentation images, and poses to build local and global semantic maps.
Check Static Mode Instructions for details.
If you find our work useful, please cite the following:
@misc{chen2024particlebasedinstanceawaresemanticoccupancy,
title={Particle-based Instance-aware Semantic Occupancy Mapping in Dynamic Environments},
author={Gang Chen and Zhaoying Wang and Wei Dong and Javier Alonso-Mora},
year={2024},
eprint={2409.11975},
archivePrefix={arXiv},
primaryClass={cs.RO},
url={https://arxiv.org/abs/2409.11975},
}
@article{chen2023continuous,
title={Continuous occupancy mapping in dynamic environments using particles},
author={Chen, Gang and Dong, Wei and Peng, Peng and Alonso-Mora, Javier and Zhu, Xiangyang},
journal={IEEE Transactions on Robotics},
year={2023},
publisher={IEEE}
}
Check here
Apache-2.0