estimated motion not presenting in display #11

Open
chris-wei-17 opened this issue Aug 10, 2020 · 3 comments

chris-wei-17 commented Aug 10, 2020

I am working on a project focused on feature selection, but I would like to use your stack as a foundation, as it appears to be very robust for VO. My application is an RC-sized car using a ZED camera, and I have made modifications to adapt your software to my platform.

An issue I am having is that the estimated-motion red line does not "draw" when the car moves (I am not using the KITTI set or ground truth). My scale is very small, ~0.001, so frame_pose changes only by very small values, which I think is why there is no noticeable change in the position of the red dot in the display. Would the small scale value be related to the fact that the motion of an RC car is much smaller than that of a full-size car in the KITTI data?

Additionally, in points3D, each line follows the format `-#.#####, -#.#####, #.#####`. I am assuming this is X, Y, Z in 3D space, but it is unclear where the origin of this space is (left camera frame, baseline center, etc.). The Z distance appears to be forward-positive (I have not evaluated accuracy), but X and Y are always negative. Since the features are distributed across the image frame, I would have expected a mostly even split of X/Y signs among ++, +-, -+, and -- for the four frame quadrants. I'm not sure if this also contributes to the scale I am seeing.

Do you have any thoughts on what may be causing these issues? I plan to cite your work in my paper and will be glad to share the results with you when it is complete. Thank you!

EDIT: I should add that I am using the master branch.

@ZhenghaoFei (Owner)

Hi chris-wei-17,

The plotting feature in the current code is very primitive, so it might not fit your needs. I would recommend saving the data and plotting it with a tool you are familiar with (Python/MATLAB), so that you don't have to worry about the pixel scale of the display.
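For example, a minimal sketch of what I mean, assuming frame_pose is the accumulated 4x4 CV_64F pose matrix from visualOdometry.cpp (the function name and CSV layout are just placeholders):

```cpp
#include <fstream>
#include <string>

#include <opencv2/core.hpp>

// Append the current camera position (the translation column of the
// accumulated 4x4 pose) to a CSV file, one row per frame.
void logPose(const cv::Mat& frame_pose, const std::string& path)
{
    std::ofstream out(path, std::ios::app);
    out << frame_pose.at<double>(0, 3) << ","
        << frame_pose.at<double>(1, 3) << ","
        << frame_pose.at<double>(2, 3) << "\n";
}
```

You can then load the CSV in Python or MATLAB and plot X against Z at whatever scale you like.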

The origin of the space is the initial left camera center, and you are right that Z is the forward direction. https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html
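On the sign pattern you mention: with the pinhole model used by OpenCV, the triangulated X and Y follow the pixel position relative to the principal point, so features spread over the image should indeed give all four sign combinations. A quick way to check (a sketch, assuming rectified intrinsics fx, fy, cx, cy and a depth Z already recovered from disparity):

```cpp
#include <opencv2/core.hpp>

// Back-project a pixel (u, v) at depth Z with the pinhole model.
// Points left of / above the principal point get negative X / Y;
// points right of / below it get positive X / Y.
cv::Point3d backProject(double u, double v, double Z,
                        double fx, double fy, double cx, double cy)
{
    return { (u - cx) * Z / fx,   // X
             (v - cy) * Z / fy,   // Y
             Z };                 // Z, positive in front of the camera
}
```

If every triangulated point comes out with negative X and Y, that often means cx and cy no longer match the (possibly rescaled) images you are triangulating from.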

Before directly relying on this code's results on a new dataset, I would strongly recommend that you "evaluate" it first, at least roughly (e.g., check that the scale and rough shape of the trajectory are correct). This will 1) help you make sure your camera parameters are correct, and 2) let you tune for the best parameters, such as the feature bucket size and the number of features per bucket. You are using it on an RC car, so I suppose you have a short baseline and fast-moving features (indoor); that would be more challenging than the KITTI data.
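For the scale check, one rough sanity test is to compare the per-frame translation against what the platform physically does. A sketch (the 1 m/s and 20 fps numbers are made-up examples, and relative_pose is whatever 4x4 relative transform your pipeline produces between consecutive frames):

```cpp
#include <cmath>

#include <opencv2/core.hpp>

// Norm of the translation column of a 4x4 relative pose between
// consecutive frames. Example expectation: at ~1 m/s and ~20 fps the
// camera moves ~0.05 m per frame, so this should be roughly 0.05.
double perFrameTranslation(const cv::Mat& relative_pose)
{
    const double dx = relative_pose.at<double>(0, 3);
    const double dy = relative_pose.at<double>(1, 3);
    const double dz = relative_pose.at<double>(2, 3);
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}
```

If this consistently comes out around 0.001 while the car is clearly moving faster than that, the calibration (focal length or baseline) is the first suspect.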

Good luck!

chris-wei-17 commented Aug 11, 2020

Hi ZhenghaoFei,

I agree; I have been attempting to validate the parameters being used. I suspect the camera parameters may be incorrect, as they are significantly different from your zed.yaml; however, I am getting them through the ZED API, so they should be correct. Since the camera auto-calibrates, I have made them "real-time" variables that are retrieved every time the program runs, instead of using a static calibration file.

You are correct, the scenario is tricky. Could you tell me which part of the program produces the camera video showing the feature tracks shared in the README? This link shows the output of displayTracking that I get when simply rotating the robot: https://www.youtube.com/watch?v=tofjfaw8fIA. It almost seems like I am somehow using the same set of points for t0 and t1, as they appear to be overlaid and don't look like they track for even a few frames. I guess this could also be a projection issue. The ZED is running at HD720p with the L/R images scaled to half size before being processed for odometry, and it runs between 10 and 30 fps depending on the bucketing settings.

If you do not mind, please leave this issue open; I would like to share the solution for future users. The ZED seems to be a popular option for a stereo camera, but open-source use of the hardware outside of its high-level functions does not have much support.

Thanks!

@ZhenghaoFei (Owner)

Hi Chris,
As for the calibration:
The zed calibration file in the source code was specific to a lower resolution (it might have been 720p, I forget). Make sure that the projection matrix you use matches your resolution; if you scale the image, the projection matrix also changes. You should always use the one published by the ZED camera. I am not familiar with the ZED API, but you can try to grab the calibration info from ZED ROS, topic: **/camera_info.
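To make the scaling point concrete, here is a sketch of how the rectified projection matrices would be rebuilt for scaled images; fx, fy, cx, cy (pixels) and baseline b (meters) are assumed to come from the camera, and the names are placeholders:

```cpp
#include <opencv2/core.hpp>

// Build left/right projection matrices for a rectified stereo pair.
// If the images are scaled by s (e.g. 0.5 for half size), fx, fy,
// cx, cy must be scaled by s as well; the baseline b stays in meters.
void buildProjections(double fx, double fy, double cx, double cy,
                      double b, double s,
                      cv::Mat& P0, cv::Mat& P1)
{
    fx *= s; fy *= s; cx *= s; cy *= s;
    P0 = (cv::Mat_<double>(3, 4) << fx, 0, cx, 0,
                                    0, fy, cy, 0,
                                    0,  0,  1, 0);
    // Right camera: shifted along -X by the baseline, i.e. Tx = -fx * b.
    P1 = (cv::Mat_<double>(3, 4) << fx, 0, cx, -fx * b,
                                    0, fy, cy, 0,
                                    0,  0,  1, 0);
}
```

Mixing intrinsics from one resolution with images at another will bias both the triangulated depths and the recovered translation scale.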

As for the feature checking:
I saw your video. Yes, it looks strange, and it seems the features are not tracking at all.
The display code is in https://github.com/ZhenghaoFei/visual_odom/blob/master/src/visualOdometry.cpp#L194
I would suggest you first check the feature matching (left to right, and t0 to t1).
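A quick way to eyeball that, independent of the repo's circular-matching code (a sketch using plain LK tracking between the two left frames; the function and variable names are mine):

```cpp
#include <vector>

#include <opencv2/imgproc.hpp>
#include <opencv2/video/tracking.hpp>

// Track points from the t0 left image to the t1 left image (both
// grayscale) and draw each successfully tracked displacement. While
// the robot rotates, the tracks should be clear lines, not dots.
void drawTracks(const cv::Mat& img_t0, const cv::Mat& img_t1,
                const std::vector<cv::Point2f>& pts_t0, cv::Mat& vis)
{
    std::vector<cv::Point2f> pts_t1;
    std::vector<uchar> status;
    std::vector<float> err;
    cv::calcOpticalFlowPyrLK(img_t0, img_t1, pts_t0, pts_t1, status, err);

    cv::cvtColor(img_t1, vis, cv::COLOR_GRAY2BGR);
    for (size_t i = 0; i < pts_t0.size(); ++i) {
        if (!status[i]) continue;
        cv::line(vis, pts_t0[i], pts_t1[i], cv::Scalar(0, 255, 0), 1);
        cv::circle(vis, pts_t1[i], 2, cv::Scalar(0, 0, 255), -1);
    }
}
```

If the drawn tracks are zero-length dots even while the robot rotates, the same point set is probably being reused for t0 and t1 somewhere upstream of displayTracking.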

Best,
Zhenghao
