Unusable stack, plenty of errors, and guide for deployment. #193

Open
ghost opened this issue Oct 30, 2023 · 2 comments

ghost commented Oct 30, 2023

Hi all,

Firstly, thanks to the authors for providing access to your work. Generally speaking, after some hours of work I managed to get it running in custom Gazebo simulations and on real robots. That said, I would like to record some observations in the issues as a guide for future work. I have been working in robotics for quite a while and eventually found Kimera as a VIO solution, among other things. Especially considering that the authors published their papers and won awards for it, I decided to test the stack.

General Thoughts and Directions

I was quite frustrated by the gap between the reach this project has and the overall quality of the solution. My general thoughts about the library are:

  • The architecture of the system is not as transparent and clean as in other ROS packages, especially considering that they provide a server that receives data from a data-provider structure. This architecture looks more like a use case of their own than a clean, understandable way for the wider ROS robotics community to use the system. The issues in this abandoned repository and my own deployment experience reinforce that impression.
  • It is not going to work as an out-of-the-box solution if you are on the wrong OS/ROS distro, and it does not behave as a 'library', but rather as a black box coupled with several things that you might not need.
  • In my adventures getting it to work, I faced several issues, like everyone posting here, and my general conclusion is that the stack can serve as an example and possibly as a baseline.

If you are having trouble understanding it, it can be broken down into the following systems:

  1. A coupled mesh reconstruction that does not have much utility for most use cases.
  2. A visual-odometry system, tightly coupled with a pose-graph optimization, which is essentially a wrapper around several calls to the GTSAM library from Georgia Tech, https://gtsam.org/.
  3. If you use semantics from the other repository, you get a 3D reconstruction system based on the Voxblox library from ETH Zurich, https://github.com/ethz-asl/voxblox.

Assuming that you know ROS and work with robotics, to get a cleaner deployment for your stack I would consider:

  1. Deploy a minimal working simulation or stack that provides an odometry source. You can use any VIO and hardware for that.
  2. Deploy your own pose-graph optimization based on GTSAM that takes features from the environment and provides corrections based on them. You can use anything as the features in its factors, and there are several tutorials on how to do it (see the sketch after this list)!
  3. Add loop closure to your GTSAM integration if you need it! You can find several tutorials on that too!
  4. Build a map, if needed, in a system separate from your pose graph, using the measurements you stored.
  5. Share GTSAM structures among robots if you need to, when robots are in communication range; this is a common use case and shouldn't be a significant problem for academic purposes!
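To make step 2 concrete, here is a minimal 2D pose-graph sketch in the spirit of GTSAM's own Pose2 SLAM tutorial. The poses, noise values, and the loop-closure edge are purely illustrative, not taken from Kimera:

```cpp
#include <cmath>
#include <gtsam/geometry/Pose2.h>
#include <gtsam/nonlinear/LevenbergMarquardtOptimizer.h>
#include <gtsam/nonlinear/NonlinearFactorGraph.h>
#include <gtsam/nonlinear/Values.h>
#include <gtsam/slam/BetweenFactor.h>
#include <gtsam/slam/PriorFactor.h>

int main() {
  using namespace gtsam;

  NonlinearFactorGraph graph;

  // Anchor the first pose so the graph is fully constrained.
  auto prior_noise = noiseModel::Diagonal::Sigmas(Vector3(0.1, 0.1, 0.05));
  graph.add(PriorFactor<Pose2>(1, Pose2(0.0, 0.0, 0.0), prior_noise));

  // Odometry edges, e.g. fed from your VIO output.
  auto odom_noise = noiseModel::Diagonal::Sigmas(Vector3(0.2, 0.2, 0.1));
  graph.add(BetweenFactor<Pose2>(1, 2, Pose2(2.0, 0.0, 0.0), odom_noise));
  graph.add(BetweenFactor<Pose2>(2, 3, Pose2(2.0, 0.0, M_PI_2), odom_noise));

  // A loop closure is just another BetweenFactor between non-consecutive poses.
  graph.add(BetweenFactor<Pose2>(3, 1, Pose2(0.0, 4.0, -M_PI_2), odom_noise));

  // Initial estimates, deliberately perturbed so the optimizer has work to do.
  Values initial;
  initial.insert(1, Pose2(0.5, 0.0, 0.2));
  initial.insert(2, Pose2(2.3, 0.1, -0.2));
  initial.insert(3, Pose2(4.1, 0.1, M_PI_2));

  Values result = LevenbergMarquardtOptimizer(graph, initial).optimize();
  result.print("Optimized poses:\n");
  return 0;
}
```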

This gives you an architecture composed of three ROS nodes: CleanVIO (odometry), PoseCorrectionGTSam (pose-estimate correction), and MapBuilder (a clean representation for planning). A hypothetical launch file wiring them together is sketched below.
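Every package and executable name here is a placeholder you would replace with your own:

```xml
<!-- Hypothetical wiring of the three-node architecture above;
     pkg/type names are placeholders, not real packages. -->
<launch>
  <!-- Publishes nav_msgs/Odometry from your VIO of choice. -->
  <node pkg="clean_vio" type="clean_vio_node" name="CleanVIO" output="screen"/>
  <!-- Subscribes to the odometry, runs the GTSAM pose graph, publishes corrections. -->
  <node pkg="pose_correction_gtsam" type="pose_correction_node" name="PoseCorrectionGTSam"/>
  <!-- Builds a planning map from the corrected poses and stored measurements. -->
  <node pkg="map_builder" type="map_builder_node" name="MapBuilder"/>
</launch>
```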

I found that all the other machinery the software provides is not very usable for academic purposes, as some of you just want a VIO solution that integrates easily with robots in ROS and lets you build maps for planning.

Make it Work Out-of-the-box

To make it work in any stack or deployment, without the headache of trying to port it to ROS 2 or Noetic, consider the following:

  1. Install Docker.
  2. Create an image based on Ubuntu 18.04.
  3. Create a container from it and install ROS Melodic.
  4. Clone the Kimera_VIO_ROS repository in the container and install it along with its dependencies. This step is not going to work with the Dockerfile they provide, because the dependencies are pinned to the wrong versions, so you need to fix that yourself (search around for it). A rough sketch is given after this list.
  5. When running this container you need to expose its network to the host OS: run it with the flag "--network host", and also expose your USB devices as needed for cameras etc. with a flag like "-v /dev/bus/usb:/dev/bus/usb" (again, search for the details).
  6. Let this container run inside your ROS stack.
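A rough sketch of the container side, assuming the official ROS Melodic install steps and the MIT-SPARK/Kimera-VIO-ROS repository; the dependency version fixes are deliberately left as a comment, since they depend on what breaks for you:

```dockerfile
# Sketch only: Ubuntu 18.04 base + ROS Melodic + Kimera-VIO-ROS checkout.
FROM ubuntu:18.04

# Official ROS Melodic installation (see wiki.ros.org/melodic/Installation/Ubuntu).
RUN apt-get update && apt-get install -y curl gnupg2 lsb-release git && \
    sh -c 'echo "deb http://packages.ros.org/ros/ubuntu bionic main" > /etc/apt/sources.list.d/ros-latest.list' && \
    curl -s https://raw.githubusercontent.com/ros/rosdistro/master/ros.asc | apt-key add - && \
    apt-get update && apt-get install -y ros-melodic-desktop-full python-catkin-tools

# Clone Kimera-VIO-ROS into a catkin workspace.
RUN mkdir -p /catkin_ws/src && cd /catkin_ws/src && \
    git clone https://github.com/MIT-SPARK/Kimera-VIO-ROS.git
# ...install/pin the dependencies here (wstool/rosdep); this is the part
# where the provided Dockerfile breaks and you have to fix versions yourself.
```

And for step 5, running it with the host network and USB devices exposed (the image name is whatever you tagged it as):

```sh
docker run -it --network host -v /dev/bus/usb:/dev/bus/usb kimera-melodic
```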

In your host OS:

  1. Install any ROS distro you want (to be safe, use a distro newer than the one in your Ubuntu 18.04 container; in my case I use Noetic).
  2. Create a launch file for your robot that specifies a minimal deployment and provides a transformation tree for your camera or robot; it can be for a real robot or a robot in Gazebo (see the sketch after this list).
  3. Ensure that you have proper stereo images without IR patterns.
  4. On the host machine, run a stack for your robot (make sure the TFs are correct! This is an important step!).
  5. In the running container, run the realsense_IR launch file they provide (if you are using a RealSense camera) or the one you created for your own camera.
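As an example of step 2, a minimal host-side launch file that publishes one static transform for the camera; the frame names and offsets are placeholders for your robot:

```xml
<!-- Minimal sketch: publish base_link -> camera_link so the TF tree is complete.
     Args are x y z yaw pitch roll parent_frame child_frame. -->
<launch>
  <node pkg="tf2_ros" type="static_transform_publisher"
        name="base_to_camera"
        args="0.1 0 0.2 0 0 0 base_link camera_link"/>
</launch>
```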

Sensor:

  1. Make sure you have "enable_gyro" and "enable_accel" enabled for RealSense cameras.
  2. Make sure you have "unite_imu_method" set to "copy" or "linear_interpolation" if you are using RealSense cameras (see the example after this list).
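Assuming the stock realsense2_camera ROS wrapper, these settings map to launch arguments like this (disabling the IR emitter itself, for step 3 above, is done separately, e.g. via dynamic_reconfigure on the stereo module):

```sh
roslaunch realsense2_camera rs_camera.launch \
    enable_gyro:=true enable_accel:=true \
    unite_imu_method:=copy \
    enable_infra1:=true enable_infra2:=true
```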

These directions should work reliably, because you are eliminating the error factors introduced by library changes in distributions different from the one this was developed against, provided you also have the right transformation trees in the minimal stack on your host machine. The stack in the Docker container plus the host machine should then output some topics and transforms, such as odometry, etc.

I found that the output odometry is actually the raw VIO (if you compare it with the odometry edges from the factor graph) and that the "optimized_odometry" topic is not being published. To work around that:

  • Simply take the last pose from the "/kimera_vio_ros/optimized_trajectory" topic and use that in your transform (a minimal relay node is sketched below).
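A sketch of that workaround, assuming the trajectory topic publishes a nav_msgs/Path (check with rostopic info first) and that "world"/"base_link" match your frame names:

```cpp
// Relay the last pose of the optimized trajectory as a TF transform.
#include <geometry_msgs/TransformStamped.h>
#include <nav_msgs/Path.h>
#include <ros/ros.h>
#include <tf2_ros/transform_broadcaster.h>

void pathCallback(const nav_msgs::Path::ConstPtr& msg) {
  if (msg->poses.empty()) return;
  static tf2_ros::TransformBroadcaster br;

  // Take the most recent optimized pose from the trajectory.
  const auto& last = msg->poses.back();

  geometry_msgs::TransformStamped tf;
  tf.header.stamp = ros::Time::now();
  tf.header.frame_id = "world";      // adjust to your fixed frame
  tf.child_frame_id = "base_link";   // adjust to your robot frame
  tf.transform.translation.x = last.pose.position.x;
  tf.transform.translation.y = last.pose.position.y;
  tf.transform.translation.z = last.pose.position.z;
  tf.transform.rotation = last.pose.orientation;
  br.sendTransform(tf);
}

int main(int argc, char** argv) {
  ros::init(argc, argv, "optimized_pose_relay");
  ros::NodeHandle nh;
  ros::Subscriber sub =
      nh.subscribe("/kimera_vio_ros/optimized_trajectory", 1, pathCallback);
  ros::spin();
  return 0;
}
```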

Now, if you manage to get this working by following these steps, you are still going to face:

  • Several errors, warnings, and maybe system freezes during runtime, probably because of improper error handling in the code base and because features you don't need, such as the mesh reconstruction, still run in the stack.
  • The stack gets slower as measurements accumulate; I had to run it in parallel mode to avoid that.

However, for most scenarios it should let you perform experiments such as measuring drift errors and obtaining estimates to serve as a baseline.

Here is a deployment example: youtube
Here is a deployment with 3D reconstruction in the custom stack: youtube

ksvbka commented Nov 7, 2023

Hi @ribeiro-silva-alysson,
Nice work. Could you please share your implementation to get the results shown in your YouTube videos?

@tianyilim

Yes, does anyone have a working Dockerfile?
