Notice: we are not actively documenting more of our pre-processing pipeline for external users. We suggest instead using our provided data, which has already been pre-processed.
This outlines how to collect and process data for a single scene. See here for how the dataset is organized. The steps are split across two repos:

- `spartan` handles the raw data collection and TSDF fusion.
- `pdc` handles change detection and rendering.
The quick version of raw data collection currently is:

- Start the Kuka and run position control: `kip` (shortcut for Kuka Iiwa Procman), then in procman:
  - Start the ROS script (check that the openni driver looks happy)
  - Run the Kuka drivers
- Check that the pointcloud / sensor data in RViz looks OK
- In a new terminal, prepare to collect logs by navigating to the fusion server scripts:

  ```
  use_ros && use_spartan
  cd ~/spartan/src/catkin_projects/fusion_server/scripts
  ```

- Collect many raw logs. For each:
  - Move objects to the desired position
  - Run `./capture_scene_client.py`
  - This creates a new folder named with the current date (e.g. `2018-04-07-20-23-56`) containing the `raw/fusion.bag` file, as in the folder structure above. (A quick sanity check is sketched after this list.)
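After collecting several logs, it can be worth verifying that every scene folder actually contains its `raw/fusion.bag` before moving on to fusion. This is only a minimal sketch: the `~/spartan_logs` root below is an assumption, so point it at wherever `capture_scene_client.py` writes its date-stamped folders.

```python
import os

# Assumption: scene folders are collected under this root; adjust to wherever
# capture_scene_client.py writes its date-stamped log folders.
LOG_ROOT = os.path.expanduser("~/spartan_logs")

for scene in sorted(os.listdir(LOG_ROOT)):
    bag_path = os.path.join(LOG_ROOT, scene, "raw", "fusion.bag")
    status = "ok" if os.path.isfile(bag_path) else "MISSING raw/fusion.bag"
    print("{}: {}".format(scene, status))
```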
Alternatively, raw data collection can be automated:

- Start the Kuka and run position control: `kip` (shortcut for Kuka Iiwa Procman), then in procman:
  - Start the ROS script (check that the openni driver looks happy)
  - Run the Kuka drivers
- Check that the pointcloud / sensor data in RViz looks OK
- Run Director
- In the Director terminal (F8), enter:

  ```
  graspSupervisor.testInteractionLoop()
  ```
Extraction and TSDF fusion are done in `spartan`. Navigate to `spartan/src/catkin_projects/fusion_server/scripts`. With `log_dir` set to the directory of your log (i.e. the full path to `2018-04-07-20-23-56`), run:

```
./extract_and_fuse_single_scene.py <full_path_to_log_folder>
```
This will:

- Extract all the RGB and depth images into `processed/images`
- Produce `processed/images/camera_info.yaml`, which contains the camera intrinsics
- Produce `processed/images/pose_data.yaml`, which contains the camera pose corresponding to each image
- Run TSDF fusion
- Convert the TSDF fusion output to a mesh and save it as `processed/fusion_mesh.ply`
- Downsample the images in `processed/images`, keeping only those with poses that are sufficiently different (a sketch of this idea follows below)
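The exact schema of `pose_data.yaml` and the thresholds used by the script are not documented here, so the following is only an illustrative sketch of the downsampling idea: read a camera-to-world pose per image (assumed here to be stored as a quaternion plus translation) and keep a frame only if it has translated or rotated enough relative to the last kept frame. The field names and thresholds are assumptions; check your generated `pose_data.yaml` for the real layout.

```python
import math
import yaml

def quat_angle(q1, q2):
    """Angle (radians) between two unit quaternions given as (w, x, y, z)."""
    dot = abs(sum(a * b for a, b in zip(q1, q2)))
    return 2.0 * math.acos(min(1.0, dot))

def translation_dist(t1, t2):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(t1, t2)))

# Assumed schema: {frame_idx: {"camera_to_world": {"quaternion": {...},
#                                                  "translation": {...}}}}
with open("processed/images/pose_data.yaml") as f:
    pose_data = yaml.safe_load(f)

kept = []
last_q, last_t = None, None
for idx in sorted(pose_data):
    pose = pose_data[idx]["camera_to_world"]
    q = tuple(pose["quaternion"][k] for k in ("w", "x", "y", "z"))
    t = tuple(pose["translation"][k] for k in ("x", "y", "z"))
    # Illustrative thresholds: 10 cm of translation or ~20 degrees of rotation.
    if (last_q is None
            or translation_dist(t, last_t) > 0.10
            or quat_angle(q, last_q) > math.radians(20)):
        kept.append(idx)
        last_q, last_t = q, t

print("kept {} of {} frames".format(len(kept), len(pose_data)))
```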
Change detection and rendering are done in `pytorch-dense-correspondence`. In `pdc`, run:

```
use_pytorch_dense_correspondence
use_director
run_change_detection --data_dir <full_path_to_log_folder>/processed
```
- This will run change detection and render new depth images for both the full scene and the cropped scene. It produces:

  ```
  processed/rendered_images/000000_depth_cropped.png
  processed/image_masks/000000_mask.png
  processed/image_masks/000000_mask_visible.png
  ```
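As a quick check that change detection produced sensible masks, you can load one and see how much of the image it covers. A minimal sketch, assuming the masks are single-channel PNGs with nonzero pixels marking the object (verify against your own output):

```python
import numpy as np
from PIL import Image

mask = np.array(Image.open("processed/image_masks/000000_mask.png"))
visible = np.array(Image.open("processed/image_masks/000000_mask_visible.png"))

# Nonzero pixels are assumed to mark the object; the "visible" mask should be
# a subset of the full mask (object pixels not occluded in this view).
print("mask coverage: {:.1%}".format(np.count_nonzero(mask) / mask.size))
print("visible coverage: {:.1%}".format(np.count_nonzero(visible) / visible.size))
```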
Then run:

```
render_depth_images.py --data_dir <full_path_to_log_folder>/processed
```

- This will render depth images against the full TSDF reconstruction, not the cropped one.
- It produces `processed/rendered_images/000000_depth.png`.
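To consume a rendered depth image, note that depth PNGs like this are commonly stored as 16-bit integers in millimeters; that convention is an assumption here, so verify the scale against your own data. A minimal sketch for loading one and converting to meters:

```python
import numpy as np
from PIL import Image

# Assumption: 16-bit PNG with depth in millimeters, 0 = no return.
depth_mm = np.array(Image.open("processed/rendered_images/000000_depth.png"))
depth_m = depth_mm.astype(np.float32) / 1000.0

valid = depth_mm > 0
print("valid pixels: {:.1%}".format(valid.mean()))
print("depth range: {:.3f} m to {:.3f} m".format(depth_m[valid].min(),
                                                 depth_m[valid].max()))
```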