echown edited this page Jul 20, 2011 · 2 revisions

The Ball is in many ways the most antiquated part of our vision system. It is a remnant of the days when we did everything by run-length encoding, and it still carries a lot of code from the Aibo days. It can be found in Ball.cpp. One project for 2011/12 would be to rewrite the Ball code from scratch.

In principle, the algorithm for finding balls is simple. We scan down from the field horizon looking for runs of orange color and collect those runs in a data structure. When we are done, we perform run-length encoding to build blobs of orange; these blobs represent candidate balls. We then sort the blobs by size and examine them starting with the biggest, stopping when we either find an acceptable ball or run out of candidates.
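The run-merging and sorting steps above can be sketched roughly as follows. This is a simplified illustration, not the actual Ball.cpp code; the `Run` and `Blob` structs and `buildCandidates` function are hypothetical names, and the merge rule (a run joins a blob when it sits on the row just below it and overlaps it horizontally) is one plausible reading of "run-length encoding to build blobs."

```cpp
#include <algorithm>
#include <vector>

// One horizontal run of orange pixels found while scanning below the horizon.
struct Run { int x, y, len; };

// A blob built by merging vertically adjacent, overlapping runs.
struct Blob { int minX, minY, maxX, maxY, area; };

static bool touches(const Blob& b, const Run& r) {
    // The run sits on the row just below the blob and overlaps it horizontally.
    return r.y == b.maxY + 1 && r.x <= b.maxX && r.x + r.len - 1 >= b.minX;
}

// Merge runs (assumed sorted top-to-bottom) into blobs, then sort the
// blobs by area so the biggest candidate ball is examined first.
std::vector<Blob> buildCandidates(const std::vector<Run>& runs) {
    std::vector<Blob> blobs;
    for (const Run& r : runs) {
        bool merged = false;
        for (Blob& b : blobs) {
            if (touches(b, r)) {
                b.minX = std::min(b.minX, r.x);
                b.maxX = std::max(b.maxX, r.x + r.len - 1);
                b.maxY = r.y;
                b.area += r.len;
                merged = true;
                break;
            }
        }
        if (!merged)
            blobs.push_back({r.x, r.y, r.x + r.len - 1, r.y, r.len});
    }
    std::sort(blobs.begin(), blobs.end(),
              [](const Blob& a, const Blob& b) { return a.area > b.area; });
    return blobs;
}
```

The real system merges runs more carefully, but the output is the same shape: a size-ordered list of candidate blobs to walk through.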

Essentially, we run each candidate through a large series of sanity checks:

  • Is the blob big enough?
  • Is it orange enough?
  • Is it square enough (the blob, that is)?
  • Is it round enough?
  • Does the distance estimate based on size match the one from pose?
  • Is there green around? Or pink (which could indicate a uniform)?
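A candidate filter along these lines might look like the sketch below. All of the struct fields and threshold values here are illustrative assumptions, not the checks or constants actually used in Ball.cpp; the point is only the shape of the gauntlet each blob must survive.

```cpp
// Hypothetical candidate record; thresholds below are made up for
// illustration and are NOT taken from Ball.cpp.
struct Candidate {
    int width, height;       // blob bounding box, in pixels
    double orangePercent;    // fraction of bounding-box pixels that are orange
    double roundness;        // how well the blob fills an inscribed circle (0..1)
    double sizeDistance;     // distance estimated from apparent size
    double poseDistance;     // distance estimated from robot pose
    bool greenNearby;        // field green adjacent to the blob?
    bool pinkNearby;         // pink nearby may mean a uniform, not a ball
};

bool passesSanityChecks(const Candidate& c) {
    if (c.width * c.height < 4)  return false;         // big enough?
    if (c.orangePercent < 0.5)   return false;         // orange enough?
    double aspect = double(c.width) / c.height;
    if (aspect < 0.5 || aspect > 2.0) return false;    // square enough?
    if (c.roundness < 0.6)       return false;         // round enough?
    double ratio = c.sizeDistance / c.poseDistance;
    if (ratio < 0.5 || ratio > 2.0)  return false;     // size vs. pose agree?
    if (!c.greenNearby || c.pinkNearby) return false;  // on the field, not a uniform
    return true;
}
```

The checks are cheap and ordered roughly from cheapest to most expensive, so bad candidates fail fast before the more involved distance comparison.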

We are in the midst of changing all of this, moving from a run-length-encoding system to one based on edge detection. Further, we may move to a completely different system, more akin to how we find goal posts. The idea would be to scan horizontally, starting from the bottom of the screen. If we find an orange swatch, we immediately check whether it is part of a ball (using edge detection to find its outline). The beauty of this system is that once the ball is found, we needn't be so careful about the rest of our scanning on the field.
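The proposed bottom-up scan could be sketched as below. This is only a shape of the idea under stated assumptions: `isOrange` stands in for the color table and `looksLikeBall` for the edge-based outline check, neither of which is shown, and `findBall` is a hypothetical name rather than anything in the current code.

```cpp
#include <functional>

struct Point { int x, y; };

// Scan rows from the bottom of the image upward; at the first orange
// pixel, hand off to the edge-based outline check. Returning early means
// later field scans can be less careful, as described above.
bool findBall(int width, int height,
              const std::function<bool(int, int)>& isOrange,
              const std::function<bool(int, int)>& looksLikeBall,
              Point& out) {
    for (int y = height - 1; y >= 0; --y) {       // bottom row first
        for (int x = 0; x < width; ++x) {
            if (isOrange(x, y) && looksLikeBall(x, y)) {
                out = {x, y};
                return true;                      // found: stop scanning
            }
        }
    }
    return false;
}
```

Scanning from the bottom exploits the fact that the ball, when visible, is usually the closest orange object and therefore lowest in the frame.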
