Nao segmentation
Overview
We need some criterion for detecting real-world objects, and the simplest way to distinguish them is by colour. Since many objects on the football field are colour-coded, colour is a reasonable choice, and it is also computationally cheap. In practice this means that we classify every pixel into a colour class and then group neighbouring pixels of the same class into blobs.
Pixel classification
We use a lookup table to classify pixels into colour classes. We have a separate tool that lets us easily construct new lookup tables, and this approach has worked well enough. The lookup table format is described here on pages 15-16.
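As a rough illustration of the idea, the sketch below classifies a YUV pixel by indexing a table with the top bits of each channel. The table layout, class names and bit depths here are assumptions for the example; our actual format is the one described in the report above.

#include <cstdint>
#include <vector>

enum ColourClass : std::uint8_t { NONE = 0, GREEN, WHITE, ORANGE, BLUE, YELLOW };

class ColourTable {
public:
    // 64 x 64 x 64 table: each channel is reduced from 8 to 6 bits.
    ColourTable() : table_(64 * 64 * 64, NONE) {}

    void set(std::uint8_t y, std::uint8_t u, std::uint8_t v, ColourClass c) {
        table_[index(y, u, v)] = c;
    }

    ColourClass classify(std::uint8_t y, std::uint8_t u, std::uint8_t v) const {
        return static_cast<ColourClass>(table_[index(y, u, v)]);
    }

private:
    // Drop the two lowest bits of each channel and pack the rest into one index.
    static std::size_t index(std::uint8_t y, std::uint8_t u, std::uint8_t v) {
        return (static_cast<std::size_t>(y >> 2) << 12) |
               (static_cast<std::size_t>(u >> 2) << 6) |
                static_cast<std::size_t>(v >> 2);
    }
    std::vector<std::uint8_t> table_;
};

Classifying a whole frame is then a single table read per pixel, which is what makes the method so cheap.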
When using lookup tables, it is crucial to stop the cameras from being too "smart". Features such as automatic white balance and auto-exposure may make the picture look more natural, but they also make the lookup tables harder to use, because the same object can end up with different colour values from frame to frame. That is why we turned them off with ALVideoDeviceProxy::setCameraParameter, setting kCameraAutoWhiteBalanceID and kCameraAutoExpositionID to zero.
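A hedged sketch of that call sequence is shown below. Only the two parameter IDs come from the text above; the client name, port, resolution and colour-space constants are placeholders for the example.

#include <string>
#include <alproxies/alvideodeviceproxy.h>
#include <alvision/alvisiondefinitions.h>

void disableAutoCameraFeatures(const std::string& robotIp) {
    AL::ALVideoDeviceProxy camera(robotIp, 9559);

    // Subscribe a client so we have a handle to pass to setCameraParameter.
    const std::string client = camera.subscribe(
        "segmentation", AL::kQVGA, AL::kYUV422ColorSpace, 30);

    // Fixed white balance and exposure keep colours stable between frames,
    // which is what the lookup table relies on.
    camera.setCameraParameter(client, AL::kCameraAutoWhiteBalanceID, 0);
    camera.setCameraParameter(client, AL::kCameraAutoExpositionID, 0);
}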
Blob formation
After the pixels have been classified, blobs are formed from connected pixels of the same colour class. Blob formation uses the CMVision algorithms with slight modifications so that they work better with our own data structures. The algorithms are described in detail here on pages 16-19.
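The sketch below shows the core idea behind this kind of blob formation in simplified form: run-length encode each row of classified pixels, then merge vertically overlapping runs of the same colour class with a union-find. The structures and names are illustrative, not our actual data structures, and the real CMVision algorithm is the one described in the report above.

#include <cstdint>
#include <vector>

struct Run {
    int row, xStart, xEnd;   // inclusive pixel range on one image row
    std::uint8_t colour;     // colour class of every pixel in the run
    int parent;              // union-find link used while merging
};

static int findRoot(std::vector<Run>& runs, int i) {
    while (runs[i].parent != i) i = runs[i].parent;
    return i;
}

// classified: row-major image of colour-class indices, 0 = unclassified.
std::vector<Run> formRuns(const std::vector<std::uint8_t>& classified,
                          int width, int height) {
    std::vector<Run> runs;
    for (int y = 0; y < height; ++y) {
        int x = 0;
        while (x < width) {
            std::uint8_t c = classified[y * width + x];
            int start = x;
            while (x < width && classified[y * width + x] == c) ++x;
            if (c != 0)
                runs.push_back({y, start, x - 1, c, static_cast<int>(runs.size())});
        }
    }
    return runs;
}

// Merge runs on adjacent rows that overlap horizontally and share a colour;
// after this, all runs with the same root belong to one blob.
void mergeRuns(std::vector<Run>& runs) {
    for (std::size_t i = 1; i < runs.size(); ++i)
        for (std::size_t j = i; j-- > 0; ) {
            if (runs[j].row < runs[i].row - 1) break;      // rows are ordered, stop
            if (runs[j].row != runs[i].row - 1) continue;  // skip runs on the same row
            if (runs[j].colour == runs[i].colour &&
                runs[j].xStart <= runs[i].xEnd &&
                runs[i].xStart <= runs[j].xEnd)
                runs[findRoot(runs, static_cast<int>(i))].parent =
                    findRoot(runs, static_cast<int>(j));
        }
}

A final pass over the merged runs can then accumulate each blob's bounding box and pixel count, which is the information the later steps need.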
End result
Applying these techniques to a frame gives us an array of blob lists, one list per colour class. Within each list the blobs are sorted by area, so it is easy to find the largest blobs of a specific colour.
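For illustration, consuming that result might look like the sketch below; the Blob structure and field names are assumptions, not our real types.

#include <vector>

struct Blob {
    int xMin, xMax, yMin, yMax;  // bounding box in image coordinates
    int area;                    // number of pixels in the blob
};

// blobsByClass[c] holds all blobs of colour class c, largest area first,
// so the front entry (if any) is the best candidate of that colour.
const Blob* largestBlobOfClass(
        const std::vector<std::vector<Blob>>& blobsByClass, int colourClass) {
    const std::vector<Blob>& list = blobsByClass[colourClass];
    return list.empty() ? nullptr : &list.front();
}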