Embedded Robotics (Thomas Bräunl, 2nd ed., 2006)

Robot Experiments

Figure 16.6: Experiment 1 photograph, measured map, and generated map

Figure 16.7: Experiment 2 photograph, measured map, and generated map


16 Map Generation

Figure 16.8: Experiment 3 photograph, measured map, and generated map

The main source of error in our map generation algorithm can be traced back to the infrared sensors. These sensors only give reliable measurements within the range of 50 to 300 millimeters. However, individual readings can deviate by as much as 10 millimeters. Consequently, these errors must be considered and included in our simulation environment.
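Since these deviations must be reproduced in simulation, a sensor model can perturb each ideal distance reading accordingly. The following is a minimal sketch of such a model: the noise amplitude and range limits come from the figures above, but the helper names and the uniform noise distribution are our assumptions, not EyeSim code.

```c
/* Infrared PSD sensor model: reliable range 50..300 mm,
   individual readings may deviate by up to +/-10 mm. */
#define PSD_MIN   50.0
#define PSD_MAX  300.0
#define PSD_NOISE 10.0

/* Portable pseudo-random number in [0,1) via a simple LCG */
static double frand(unsigned *seed)
{
    *seed = *seed * 1103515245u + 12345u;
    return ((*seed >> 16) & 0x7fff) / 32768.0;
}

/* Return a simulated reading for the true distance (mm),
   or -1.0 if the target lies outside the reliable range. */
double psd_read(double true_dist, unsigned *seed)
{
    if (true_dist < PSD_MIN || true_dist > PSD_MAX) return -1.0;
    /* uniform noise in [-PSD_NOISE, +PSD_NOISE] */
    return true_dist + (2.0 * frand(seed) - 1.0) * PSD_NOISE;
}
```

Out-of-range targets are rejected rather than perturbed, matching the observation that the sensor only gives reliable measurements between 50 and 300 millimeters.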

The second source of error is the robot positioning using dead reckoning. With every move of the robot, small inaccuracies due to friction and wheel slippage lead to errors in the perceived robot position and orientation. Although these errors are initially small, the accumulation of these inaccuracies can lead to a major problem in robot localization and path planning, which will directly affect the accuracy of the generated map.

For comparing the measured map with the generated map, a relation between the two maps must be found. One of the maps must be translated and rotated so that both maps share the same reference point and orientation. Next, an objective measure for map similarity has to be found. Although it might be possible to compare every single point of every single line of the maps to determine a similarity measure, we believe that such a method would be rather inefficient and not give satisfying results. Instead, we identify key points in the measured map and compare them with points in the generated map. These key points are naturally corners and vertices in each map and can be identified as the end points of line segments. Care has to be taken to eliminate points from the correct map which lie in regions which the robot cannot sense or reach, since processing them would result in an incorrectly low matching level of the generated map. Corner points are matched between the two maps using the smallest Euclidean distance, before their deviation can be measured.



16.7 Results

Table 16.1 summarizes the map accuracy generated for Figure 16.5. Errors are specified as median error per pixel and as median error relative to the robot’s size (approximately 170mm) to provide a more graphic measure.

 

                             Median Error    Median Error Relative
                                             to Robot Size
Figure b (no error)            21.1mm          12.4%
Figure c (20% PSD)             29.5mm          17.4%
Figure d (10% PSD, 5% pos.)    28.4mm          16.7%

Table 16.1: Results of simulation

From these results we can observe that the mapping error stays below 17% of the robot’s size, which is satisfactory for our experiments. An interesting point to note is that the map error is still above 10% in an environment with perfect sensors. This is an inaccuracy of the EyeSim simulator, and mainly due to two factors:

We assume the robot is obtaining data readings continuously. However, this is not the case as the simulator obtains readings at time-discrete intervals. This may result in the robot simulation missing exact corner positions.

We assume the robot performs many of its functions simultaneously, such as moving and obtaining sensor readings. However, this is not the case as programs run sequentially and some time delay is inevitable between actions. This time delay is increased by the large number of computations that have to be performed by the simulator, such as its graphics output and the calculations of updated robot positions.

 

                Median Error    Median Error Relative
                                to Robot Size
Experiment 1      39.4mm          23.2%
Experiment 2      33.7mm          19.8%
Experiment 3      46.0mm          27.1%

Table 16.2: Results of real robot

For the experiments in the physical environment (summarized in Table 16.2), higher error values were measured than in the simulation. However, this was to be expected, since our simulation system cannot model all aspects of the physical environment perfectly. Nevertheless, the median error values are about 23% of the robot’s size, which is a good value for our implementation, taking into consideration the limited sensor accuracy and a noisy environment.

16.8 References

BERNHARDT, R., ALBRIGHT, S. (Eds.) Robot Calibration, Chapman & Hall, London, 1993, pp. 19-29 (11)

BRÄUNL, T., STOLZ, H. Mobile Robot Simulation with Sonar Sensors and Cameras, Simulation, vol. 69, no. 5, Nov. 1997, pp. 277-282 (6)

BRÄUNL, T., TAY, N. Combining Configuration Space and Occupancy Grid for Robot Navigation, Industrial Robot International Journal, vol. 28, no. 3, 2001, pp. 233-241 (9)

CHATILA, R. Task and Path Planning for Mobile Robots, in A. Wong, A. Pugh (Eds.), Machine Intelligence and Knowledge Engineering for Robotic Applications, no. 33, Springer-Verlag, Berlin, 1987, pp. 299-330 (32)

DONALD, B. (Ed.) Error Detection and Recovery in Robotics, Springer-Verlag, Berlin, 1989, pp. 220-233 (14)

FUJIMURA, K. Motion Planning in Dynamic Environments, 1st Ed., Springer-Verlag, Tokyo, 1991, pp. 1-6 (6), pp. 10-17 (8), pp. 127-142 (16)

HONDERD, G., JONGKIND, W., VAN AALST, C. Sensor and Navigation System for a Mobile Robot, in L. Hertzberger, F. Groen (Eds.), Intelligent Autonomous Systems, Elsevier Science Publishers, Amsterdam, 1986, pp. 258-264 (7)

KAMON, I., RIVLIN, E. Sensory-Based Motion Planning with Global Proofs, IEEE Transactions on Robotics and Automation, vol. 13, no. 6, 1997, pp. 814-821 (8)

KAMPMANN, P. Multilevel Motion Planning for Mobile Robots Based on a Topologically Structured World Model, in T. Kanade, F. Groen, L. Hertzberger (Eds.), Intelligent Autonomous Systems 2, Stichting International Congress of Intelligent Autonomous Systems, Amsterdam, 1990, pp. 241-252 (12)

LEONARD, J., DURRANT-WHYTE, H. Directed Sonar Sensing for Mobile Robot Navigation, Rev. Ed., Kluwer Academic Publishers, Boston MA, 1992, pp. 101-111 (11), pp. 127-128 (2), p. 137 (1)

LOZANO-PÉREZ, T. Task Planning, in M. Brady, J. Hollerbach, T. Johnson, T. Lozano-Pérez, M. Mason (Eds.), Robot Motion: Planning and Control, MIT Press, Cambridge MA, 1982, pp. 473-498 (26)


MELIN, C. Exploration of Unknown Environments by a Mobile Robot, in T. Kanade, F. Groen, L. Hertzberger (Eds.), Intelligent Autonomous Systems 2, Stichting International Congress of Intelligent Autonomous Systems, Amsterdam, 1990, pp. 715-725 (11)

PIAGGIO, M., ZACCARIA, R. Learning Navigation Situations Using Roadmaps, IEEE Transactions on Robotics and Automation, vol. 13, no. 6, 1997, pp. 22-27 (6)

SERRADILLA, F., KUMPEL, D. Robot Navigation in a Partially Known Factory Avoiding Unexpected Obstacles, in T. Kanade, F. Groen, L. Hertzberger (Eds.), Intelligent Autonomous Systems 2, Stichting International Congress of Intelligent Autonomous Systems, Amsterdam, 1990, pp. 972-980 (9)

SHEU, P., XUE, Q. (Eds.) Intelligent Robotic Planning Systems, World Scientific Publishing, Singapore, 1993, pp. 111-127 (17), pp. 231-243 (13)


17 Real-Time Image Processing

Every digital consumer camera today can read images from a sensor chip and (optionally) display them in some form on a screen. However, what we want to do is implement an embedded vision system, so reading and maybe displaying image data is only the necessary first step. We want to extract information from an image in order to steer a robot, for example following a colored object. Since both the robot and the object may be moving, we have to be fast. Ideally, we want to achieve a frame rate of 10 fps (frames per second) for the whole perception–action cycle. Of course, given the limited processing power of an embedded controller, this restricts us in the choice of both the image resolution and the complexity of the image processing operations.

In the following, we will look at some basic image processing routines. These will later be reused for more complex robot application programs, like robot soccer in Chapter 18.

For further reading in robot vision see [Klette, Peleg, Sommer 2001] and [Blake, Yuille 1992]. For general image processing textbooks see [Parker 1997], [Gonzales, Woods 2002], [Nalwa 1993], and [Faugeras 1993]. A good practical introduction is [Bässmann, Besslich 1995].

17.1 Camera Interface

Since camera chip development advances so rapidly, we have already had five camera chip generations interfaced to the EyeBot and have implemented the corresponding camera drivers. For a robot application program, however, this is completely transparent. The routines to access image data are:

CAMInit(NORMAL)

Initializes camera, independent of model. Older camera models supported modes different to “normal”.


CAMRelease()

Disable camera.

CAMGetFrame(image *buffer)

Read a single grayscale image from the camera and save it in buffer.

CAMGetColFrame(colimage *buffer, int convert)

Read a single color image from the camera. If “convert” equals 1, the image is immediately converted to 8bit grayscale.

int CAMSet(int para1, int para2, int para3)

Set camera parameters. Parameter meaning depends on camera model (see Appendix B.5.4).

int CAMGet(int *para1, int *para2, int *para3)

Get camera parameters. Parameter meaning depends on camera model (see Appendix B.5.4).

The first important step when using a camera is setting its focus. The EyeCam C2 cameras have an analog grayscale video output, which can be directly plugged into a video monitor to view the camera image. The lens has to be focussed for the desired object distance.

Focussing is also possible with the EyeBot’s black and white display only. For this purpose we place a focus pattern like the so-called “Siemens star” in Figure 17.1 at the desired distance in front of the camera and then change the focus until the image appears sharp.

Figure 17.1: Focus pattern


17.2 Auto-Brightness

The auto-brightness function adapts a camera’s aperture to the continuously changing brightness conditions. If a sensor chip does support aperture settings but does not have an auto-brightness feature, then auto-brightness can be implemented in software. The first idea for implementing the auto-brightness feature for a grayscale image is as follows:

1. Compute the average of all gray values in the image.
2.a If the average is below threshold no. 1: open aperture.
2.b If the average is above threshold no. 2: close aperture.

Figure 17.2: Auto-brightness using only main diagonal

So far, so good, but considering that computing the average over all pixels is quite time consuming, the routine can be improved. Assuming that to a certain degree the gray values are evenly distributed among the image, using just a cross-section of the whole image, for example the main diagonal (Figure 17.2), should be sufficient.

Program 17.1: Auto-brightness

typedef BYTE image   [imagerows][imagecolumns];
typedef BYTE colimage[imagerows][imagecolumns][3];

#define THRES_LO  70
#define THRES_HI 140

void autobrightness(image orig)
{ static int brightness = 100;
  int i, avg = 0;
  for (i=0; i<imagerows; i++) avg += orig[i][i];
  avg = avg / imagerows;

  if (avg < THRES_LO)
  { brightness = MIN(brightness * 1.05, 200);
    CAMSet(brightness, 100, 100);
  }
  else if (avg > THRES_HI)
  { brightness = MAX(brightness / 1.05, 50);
    CAMSet(brightness, 100, 100);
  }
}

245

17 Real-Time Image Processing

Program 17.1 shows the pre-defined data types for grayscale and color images and the implementation of auto-brightness, assuming that the number of rows is less than or equal to the number of columns of an image (in this implementation: 60 and 80). The CAMSet routine adjusts the brightness setting of the camera to the newly calculated value; the two other parameters (here: offset and contrast) are left unchanged. This routine can now be called at regular intervals (for example once every second, for every 10th image, or even for every image) to update the camera’s brightness setting. Note that this program only works for the QuickCam, which allows aperture settings but does not have auto-brightness.

17.3 Edge Detection

One of the most fundamental image processing operations is edge detection. Numerous algorithms have been introduced and are being used in industrial applications; however, for our purposes very basic operators are sufficient. We will present here the Laplace and Sobel edge detectors, two very common and simple edge operators.

The Laplace operator produces a local derivative of a grayscale image by taking four times a pixel value and subtracting its left, right, top, and bottom neighbors (Figure 17.3). This is done for every pixel in the whole image.

 

         –1
    –1    4   –1
         –1

Figure 17.3: Laplace operator

The coding is shown in Program 17.2 with a single loop running over all pixels. Since there are no pixels beyond the outer border of the image, we must avoid access errors caused by reading array elements outside the defined bounds. Therefore, the loop starts at the second row and stops at the last but one row. If required, these two rows could be set to zero. The program also limits the maximum value to white (255), so that any result value remains within the byte data type.

The Sobel operator that is often used for robotics applications is only slightly more complex [Bräunl 2001].

In Figure 17.4 we see the two filter operations the Sobel filter is made of. The Sobel-x only finds discontinuities in the x-direction (vertical lines), while Sobel-y only finds discontinuities in the y-direction (horizontal lines). Combining these two filters is done by the formulas shown in Figure 17.4, right, which give the edge strength (depending on how large the discontinuity is) as well as the edge direction (for example a dark-to-bright transition at 45° from the x-axis).

246

Edge Detection

Program 17.2: Laplace edge operator

void Laplace(BYTE *imageIn, BYTE *imageOut)
{ int i, delta;
  for (i = width; i < (height-1)*width; i++)
  { delta = abs(4 * imageIn[i]
           - imageIn[i-1]     - imageIn[i+1]
           - imageIn[i-width] - imageIn[i+width]);
    if (delta > white) imageOut[i] = white;
    else imageOut[i] = (BYTE)delta;
  }
}
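To see the operator at work, here is a small self-contained check. It repeats the Laplace routine from Program 17.2 so that it compiles on its own; the width, height, and white constants are illustrative stand-ins for the EyeBot's global image settings, not the real values. Applied to a vertical step edge from gray value 0 to 100, the operator responds with 100 on the two columns adjacent to the edge and 0 in the flat regions.

```c
#include <stdlib.h>  /* abs */

typedef unsigned char BYTE;

/* Illustrative stand-ins for the EyeBot's global image settings */
enum { width = 8, height = 4, white = 255 };

/* Laplace operator as in Program 17.2 */
void Laplace(BYTE *imageIn, BYTE *imageOut)
{ int i, delta;
  for (i = width; i < (height-1)*width; i++)
  { delta = abs(4 * imageIn[i]
           - imageIn[i-1]     - imageIn[i+1]
           - imageIn[i-width] - imageIn[i+width]);
    if (delta > white) imageOut[i] = white;
    else imageOut[i] = (BYTE)delta;
  }
}

/* Build a test image with a vertical 0-to-100 step edge at column 4
   and return the Laplace response at row 1, column c. */
BYTE edge_response(int c)
{
    BYTE in[height * width], out[height * width] = {0};
    for (int r = 0; r < height; r++)
        for (int col = 0; col < width; col++)
            in[r * width + col] = (col >= 4) ? 100 : 0;
    Laplace(in, out);
    return out[1 * width + c];
}
```

Only interior pixels are meaningful here: because the image is stored as one linear array, the left neighbor of a first-column pixel wraps around to the previous row, another reason edge results near the border should be treated with care.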

    Sobel-x:            Sobel-y:
     –1   0   1           1   2   1
     –2   0   2           0   0   0
     –1   0   1          –1  –2  –1

    edge strength  b = sqrt(dx^2 + dy^2) ≈ |dx| + |dy|
    edge angle     r = atan(dy/dx)

Figure 17.4: Sobel-x and Sobel-y masks, formulas for strength and angle

For now, we are only interested in the edge strength, and we also want to avoid time consuming functions such as square root and any trigonometric functions. We therefore approximate the square root of the sum of the squares by the sum of the absolute values of dx and dy.

Program 17.3: Sobel edge operator

void Sobel(BYTE *imageIn, BYTE *imageOut)
{ int i, grad, deltaX, deltaY;

  memset(imageOut, 0, width); /* clear first row */
  for (i = width; i < (height-1)*width; i++)
  { deltaX =   2*imageIn[i+1]     + imageIn[i-width+1]
             + imageIn[i+width+1] - 2*imageIn[i-1]
             - imageIn[i-width-1] - imageIn[i+width-1];
    deltaY =   imageIn[i-width-1] + 2*imageIn[i-width]
             + imageIn[i-width+1] - imageIn[i+width-1]
             - 2*imageIn[i+width] - imageIn[i+width+1];

    grad = (abs(deltaX) + abs(deltaY)) / 3;
    if (grad > white) grad = white;
    imageOut[i] = (BYTE)grad;
  }
  memset(imageOut + i, 0, width); /* clear last row */
}
