Description
This is a stub issue for the cone detection feature. Feel free to add details.
Cone position tracking is a key aspect of our stack. By knowing the relative position of each cone to the kart, we can plot the cone positions in a global map and use them to generate track boundaries. We will see the same cone over multiple frames as the kart moves, and each observation will carry slight individual noise. To estimate the true cone position from that scatter of observations, anything from simple averaging up to an extended Kalman filter (EKF) can be considered.
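As a rough illustration of the simple-averaging option, here is a minimal sketch in Python (the class name, gating radius, and 2D state are illustrative assumptions, not settled design): each observation in the global frame is matched to the nearest tracked cone and folded into a running mean, so per-frame noise averages out as the kart observes the cone repeatedly.

```python
import numpy as np

class ConeTracker:
    """Estimate cone positions by averaging repeated noisy observations.

    Hypothetical sketch: gate_radius and the 2D (x, y) state are
    placeholder choices, not part of any agreed interface.
    """

    def __init__(self, gate_radius=0.5):
        self.gate_radius = gate_radius  # max match distance (m) to a known cone
        self.means = []                 # running mean position per cone
        self.counts = []                # observations folded into each mean

    def update(self, obs):
        """obs: np.array([x, y]) of one cone observation in the global frame."""
        if self.means:
            dists = [np.linalg.norm(m - obs) for m in self.means]
            i = int(np.argmin(dists))
            if dists[i] < self.gate_radius:
                # Incremental mean: new_mean = old + (obs - old) / n
                self.counts[i] += 1
                self.means[i] += (obs - self.means[i]) / self.counts[i]
                return i
        # No tracked cone nearby: start a new one.
        self.means.append(np.asarray(obs, dtype=float).copy())
        self.counts.append(1)
        return len(self.means) - 1
```

An EKF would replace the running mean with a state-and-covariance update, which additionally gives a per-cone uncertainty that downstream boundary generation could use.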
There are two main sources of raw data for sensing cones: LiDAR pointclouds and camera images. A Beijing FSAE team has released their LiDAR cone detection code, which seems promising as a base for our first prototypes. For camera images, there is a promising open source YOLO model (thanks hayagreev). The final goal will likely be fusing measurements from both sensors.
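To connect a camera detection to the position tracking above: a single bounding box plus the known physical cone height already gives a coarse relative position via the pinhole model. This is only a sketch under assumed camera intrinsics; the function name and the 0.325 m cone height are placeholders (LiDAR would give this range directly).

```python
def cone_position_from_bbox(bbox, fx, fy, cx, cone_height=0.325):
    """Coarse relative cone position from one camera detection.

    Assumes a pinhole camera with focal lengths (fx, fy) and principal
    point x-coordinate cx, all in pixels. cone_height is the real cone
    height in metres (0.325 m is a placeholder value).

    bbox: (x_min, y_min, x_max, y_max) in pixels, e.g. from a YOLO model.
    Returns (forward, lateral) distance in metres in the camera frame.
    """
    x_min, y_min, x_max, y_max = bbox
    pixel_height = y_max - y_min
    # Similar triangles: pixel_height / fy == cone_height / depth
    depth = cone_height * fy / pixel_height
    # Horizontal offset of the box centre from the principal point
    u_center = 0.5 * (x_min + x_max)
    lateral = (u_center - cx) * depth / fx
    return depth, lateral
```

Fusion could then be as simple as feeding both the camera-derived and LiDAR-derived observations of the same cone into the tracker sketched above.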
Sub-tasks
As an early prototype, simply getting a stock algorithm running on the stack is sufficient; a skeleton sketch follows this list.
Then we can think about integration with the rest of the stack.
Feel free to add steps as they come up.
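For the stock-algorithm step, the integration surface can stay very thin: one node that publishes per-frame cone detections for the mapping side to consume. The skeleton below assumes a ROS 1 stack and invents the /cones topic, message type, and frame name purely for illustration.

```python
#!/usr/bin/env python
# Hypothetical integration skeleton. Assumes ROS 1; the /cones topic,
# PoseArray message choice, and frame name are guesses, not decisions.
import rospy
from geometry_msgs.msg import PoseArray

def main():
    rospy.init_node("cone_detector")
    pub = rospy.Publisher("/cones", PoseArray, queue_size=1)
    rate = rospy.Rate(10)  # placeholder detection rate (Hz)
    while not rospy.is_shutdown():
        msg = PoseArray()
        msg.header.stamp = rospy.Time.now()
        msg.header.frame_id = "base_link"  # kart body frame, name assumed
        # A stock detector would fill msg.poses here, one Pose per cone
        # with its relative (x, y) in the kart frame.
        pub.publish(msg)
        rate.sleep()

if __name__ == "__main__":
    main()
```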
Resources
Check links above...