Nao line detection

== Overview ==
We need to detect the white lines on the field, mainly because they help the robot localize itself on the field. Line detection, however, is a computationally complex task, so we have tried a few different approaches. There are implementations of the [http://en.wikipedia.org/wiki/Hough_transform Hough Transform] in the OpenCV library, but they tend to be too general-purpose and slow for our needs. We have also tried [http://en.wikipedia.org/wiki/RANSAC RANSAC], but it became too slow as the number of line points increased. We knew beforehand that it was computationally complex, but thought that we could effectively limit the number of points given to it as input; in practice this turned out to be quite tricky, especially near the central circle. Now we are using a Randomized Hough Transform that we implemented following the description in <i>L. Xu, E. Oja, P. Kultanen, "A new curve detection method: Randomized Hough transform"</i>.


== Our implementation ==
The line detection algorithm takes a vector of point coordinates as input. We produce these points during the segmentation process in the following way: we scan every n-th row and every n-th column (n being something like 5, 10 or 20) for places where a green pixel and a white pixel are next to each other. We only scan part of the image because we do not need many points to form the lines; the choice of n is a compromise between processing speed and detection accuracy.<br><br>
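A simplified sketch of this scan is shown below. It is not the actual robot code: the segmented image representation, the colour labels and the function names are placeholders chosen for illustration.

<syntaxhighlight lang="cpp">
#include <utility>
#include <vector>

enum Color { GREEN, WHITE, OTHER };

// Placeholder for the segmented camera image: a row-major grid of colour classes.
struct SegmentedImage {
    int width, height;
    std::vector<Color> data;
    Color at(int x, int y) const { return data[y * width + x]; }
};

// Collect coordinates where a green pixel is immediately followed by a white one,
// scanning only every n-th row and every n-th column of the segmented image.
std::vector<std::pair<int, int>> collectLinePoints(const SegmentedImage& img, int n)
{
    std::vector<std::pair<int, int>> points;

    for (int y = 0; y < img.height; y += n)            // horizontal scan of every n-th row
        for (int x = 0; x + 1 < img.width; ++x)
            if (img.at(x, y) == GREEN && img.at(x + 1, y) == WHITE)
                points.push_back({x + 1, y});

    for (int x = 0; x < img.width; x += n)             // vertical scan of every n-th column
        for (int y = 0; y + 1 < img.height; ++y)
            if (img.at(x, y) == GREEN && img.at(x, y + 1) == WHITE)
                points.push_back({x, y + 1});

    return points;
}
</syntaxhighlight>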
First the line detection algorithm shuffles the point vector, so that we get a uniform distribution. The algorithm then takes points out of the vector by looping through it; because the order is random, consecutive points form random pairs. For every pair of points we compute the parameters of the line through them and add them to an accumulator list if they are unique, or increase their score by one if equal parameters are already in the list. When a pair of line parameters reaches a score of 3, we remove all the points from the vector that lie close to that line. If the number of such points is large enough, it is accepted as a new detected line. The algorithm proceeds until no line has been found for the last few cycles.<br><br>
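The core loop can be sketched roughly as follows. The normal-form line parameterisation, the tolerances and the thresholds below are illustrative assumptions rather than the values used on the robot, and some bookkeeping (for example how the accumulator is pruned) is simplified.

<syntaxhighlight lang="cpp">
#include <algorithm>
#include <cmath>
#include <random>
#include <vector>

struct Point { double x, y; };

// Line in normal form: x*cos(angle) + y*sin(angle) = dist.
struct LineParams { double angle, dist; int score; };

static LineParams lineThrough(const Point& a, const Point& b)
{
    const double PI = 3.14159265358979323846;
    double angle = std::atan2(b.x - a.x, a.y - b.y);            // direction of the line normal
    double dist  = a.x * std::cos(angle) + a.y * std::sin(angle);
    if (dist < 0) {                                             // normalise so (angle, dist) is unique
        dist = -dist;
        angle += (angle > 0 ? -PI : PI);
    }
    return {angle, dist, 1};
}

static double distanceToLine(const Point& p, const LineParams& l)
{
    return std::fabs(p.x * std::cos(l.angle) + p.y * std::sin(l.angle) - l.dist);
}

std::vector<LineParams> randomizedHough(std::vector<Point> points)
{
    const double angleTol = 0.05, distTol = 3.0;    // "same parameters" / inlier tolerances (assumed)
    const int scoreToTest = 3, minInliers = 15, maxIdleCycles = 200;

    std::mt19937 rng(std::random_device{}());
    std::vector<LineParams> accumulator, lines;
    size_t cursor = points.size();                  // forces an initial shuffle
    int idle = 0;

    while (points.size() >= 2 && idle++ < maxIdleCycles) {
        if (cursor + 1 >= points.size()) {          // wrapped around: reshuffle and restart
            std::shuffle(points.begin(), points.end(), rng);
            cursor = 0;
        }
        LineParams cand = lineThrough(points[cursor], points[cursor + 1]);
        cursor += 2;

        // Look for equal parameters in the accumulator; add the candidate if it is new.
        size_t m = 0;
        while (m < accumulator.size() &&
               !(std::fabs(accumulator[m].angle - cand.angle) < angleTol &&
                 std::fabs(accumulator[m].dist  - cand.dist)  < distTol))
            ++m;
        if (m == accumulator.size()) { accumulator.push_back(cand); continue; }
        if (++accumulator[m].score < scoreToTest) continue;

        // The same parameters came up three times: remove all points close to that line.
        LineParams line = accumulator[m];
        accumulator.erase(accumulator.begin() + m);
        auto close = [&](const Point& p) { return distanceToLine(p, line) < distTol; };
        size_t before = points.size();
        points.erase(std::remove_if(points.begin(), points.end(), close), points.end());

        if (before - points.size() >= (size_t)minInliers) {     // enough support: accept the line
            lines.push_back(line);
            idle = 0;
        }
        cursor = points.size();                     // point set changed: reshuffle next iteration
    }
    return lines;
}
</syntaxhighlight>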
The algorithm also calculates the starting and ending points of each line. This is done by hashing all the points belonging to the line into a small array (currently of size 40) and then choosing the longest consecutive sequence of occupied cells. The points hashed into the first and last cells of that sequence give the starting and ending points of the line.
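A rough sketch of this endpoint calculation is given below. Only the array size of 40 is taken from the description above; the projection of each point onto the line direction and the per-cell bookkeeping are illustrative and simplified.

<syntaxhighlight lang="cpp">
#include <algorithm>
#include <array>
#include <cmath>
#include <limits>
#include <utility>
#include <vector>

struct Point { double x, y; };

// Estimate the start and end points of a line segment from its inlier points.
// 'lineAngle' is the direction of the line normal, as in the detection step.
std::pair<Point, Point> segmentEndpoints(const std::vector<Point>& inliers, double lineAngle)
{
    constexpr int BINS = 40;                                 // size of the hash array
    if (inliers.empty()) return {};

    // Unit vector along the line (perpendicular to the normal).
    double dx = -std::sin(lineAngle), dy = std::cos(lineAngle);

    // Project every point onto the line direction and find the projection range.
    std::vector<double> t(inliers.size());
    double tMin = std::numeric_limits<double>::max(), tMax = -tMin;
    for (size_t i = 0; i < inliers.size(); ++i) {
        t[i] = inliers[i].x * dx + inliers[i].y * dy;
        tMin = std::min(tMin, t[i]);
        tMax = std::max(tMax, t[i]);
    }

    // Hash the projections into BINS cells, remembering one point per cell.
    std::array<bool, BINS> occupied{};
    std::array<Point, BINS> sample{};
    double range = std::max(tMax - tMin, 1e-9);
    for (size_t i = 0; i < inliers.size(); ++i) {
        int cell = std::min(BINS - 1, (int)((t[i] - tMin) / range * BINS));
        occupied[cell] = true;
        sample[cell] = inliers[i];
    }

    // Find the longest consecutive run of occupied cells.
    int bestStart = 0, bestLen = 0, curStart = 0, curLen = 0;
    for (int c = 0; c < BINS; ++c) {
        if (occupied[c]) { if (curLen++ == 0) curStart = c; }
        else curLen = 0;
        if (curLen > bestLen) { bestLen = curLen; bestStart = curStart; }
    }

    // The points hashed into the first and last cells of that run are the endpoints.
    return {sample[bestStart], sample[bestStart + bestLen - 1]};
}
</syntaxhighlight>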
