Embedded Robotics (Thomas Bräunl, 2nd ed., 2006)

18.5 Image Processing

Vision is the most important ability of a human soccer player. In a similar way, vision is the centerpiece of a robot soccer program. We continuously analyze the visual input from the on-board digital color camera in order to detect objects on the soccer field. We use color-based object detection since it is computationally much easier than shape-based object detection and the robot soccer rules define distinct colors for the ball and goals. These color hues are taught to the robot before the game is started.

The lines of the input image are continuously searched for areas with a mean color value within a specified range of the previously trained hue value and of the desired size. This is to try to distinguish the object (ball) from an area similar in color but different in shape (yellow goal). In Figure 18.4 a simplified line of pixels is shown; object pixels of matching color are displayed in gray, others in white. The algorithm initially searches for matching pixels at either end of a line (see region (a): first = 0, last = 18), then the mean color value is calculated. If it is within a threshold of the specified color hue, the object has been found. Otherwise the region will be narrowed down, attempting to find a better match (see region (b): first = 4, last = 18). The algorithm stops as soon as the size of the analyzed region becomes smaller than the desired size of the object. In the line displayed in Figure 18.4, an object with a size of 15 pixels is found after two iterations.
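The shrinking-window search described above can be sketched as follows. The hue representation, the two tolerances, and the exact narrowing rule are assumptions, since the text gives no code for this step:

```python
def find_object_in_line(hues, target_hue, mean_tol, min_size, pixel_tol):
    """Search one image line for an object of the trained hue.

    Start with the widest region bounded by individually matching pixels
    at either end, then narrow the region until its mean hue is within
    mean_tol of the trained hue, or it shrinks below the object size.
    """
    first, last = 0, len(hues) - 1
    # move both ends inward to the outermost individually matching pixels
    while first <= last and abs(hues[first] - target_hue) > pixel_tol:
        first += 1
    while last >= first and abs(hues[last] - target_hue) > pixel_tol:
        last -= 1
    while last - first + 1 >= min_size:
        region = hues[first:last + 1]
        if abs(sum(region) / len(region) - target_hue) <= mean_tol:
            return first, last                  # object found
        # narrow from the end whose pixel deviates more from the target
        if abs(hues[first] - target_hue) >= abs(hues[last] - target_hue):
            first += 1
        else:
            last -= 1
    return None                                 # no object of this size
```

On the example line of Figure 18.4 (a matching pixel at position 0, a non-matching run at 1-3, and matching pixels at 4-18), this sketch also terminates with first = 4, last = 18 after narrowing.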

Figure 18.4: Analyzing a color image line
[figure: one line of 20 pixels (positions 0-19); pixels of matching color shown in gray; search regions (a) first = 0, last = 18 and (b) first = 4, last = 18 marked]

Distance estimation

Once the object has been identified in an image, the next step is to translate local image coordinates (x and y, in pixels) into global world coordinates (x´ and y´ in m) in the robot’s environment. This is done in two steps:

Firstly, the position of the ball as seen from the robot is calculated from the given pixel values, assuming a fixed camera position and orientation. This calculation depends on the height of the object in the image: the higher the object appears in the image, the farther it is from the robot.

Secondly, given the robot’s current position and orientation, the local coordinates are transformed into global position and orientation on the field.

Since the camera is looking down at an angle, it is possible to determine the object distance from the image coordinates. In an experiment (see Figure 18.5), the relation between pixel coordinates and object distance in meters has been determined. Instead of using approximation functions, we decided to use the faster and also more accurate method of lookup tables. This allows us to calculate the exact ball position in meters from the screen coordinates in pixels and the current camera position/orientation.
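A lookup-table translation from image row to object distance might look like this. The table values below are invented placeholders standing in for the per-line calibration measurements; linear interpolation between entries is an assumption:

```python
# Hypothetical lookup table: image row (top to bottom) -> distance in meters.
# Real entries come from the calibration measurement for one camera position.
DIST_TABLE_MIDDLE = [2.00, 1.50, 1.10, 0.80, 0.60, 0.45, 0.35, 0.28, 0.22, 0.18]

def pixel_row_to_distance(row, table=DIST_TABLE_MIDDLE):
    """Translate the object's image row into a distance, interpolating
    linearly between calibrated entries for non-integer rows and
    clamping at the table boundaries."""
    if row <= 0:
        return table[0]
    if row >= len(table) - 1:
        return table[-1]
    lo = int(row)
    frac = row - lo
    return table[lo] + frac * (table[lo + 1] - table[lo])
```

A lookup per frame is a single indexing operation, which is why it beats evaluating an approximation function on this class of controller.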

Figure 18.5: Relation between object height and distance
[figure: measured distance (cm, 20-80) plotted against object height in the image (pixels, 20-80); measurement points and a schematic diagram shown]

The distance values were found through a series of measurements, for each camera position and for each image line. In order to reduce this effort, we only used three different camera positions (up, middle, down for the tilting camera arrangement, or left, middle, right for the panning camera arrangement), which resulted in three different lookup tables.

Depending on the robot’s current camera orientation, the appropriate table is used for distance translation. The resulting relative distances are then translated into global coordinates using polar coordinates.
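The translation of a relative measurement into global field coordinates via polar coordinates can be sketched as follows; the coordinate conventions (local x ahead of the robot, local y to its left, orientation measured from the field's x-axis) are assumptions:

```python
from math import atan2, cos, sin, hypot

def local_to_global(robot_x, robot_y, robot_phi, local_x, local_y):
    """Convert an object position seen from the robot (local_x ahead,
    local_y to the left, in meters) into global field coordinates,
    using the polar form (distance, bearing) of the local measurement."""
    dist = hypot(local_x, local_y)                 # relative distance
    bearing = robot_phi + atan2(local_y, local_x)  # global bearing
    return robot_x + dist * cos(bearing), robot_y + dist * sin(bearing)
```

For example, a robot at (1, 1) facing along the positive y-axis that sees the ball 1 m straight ahead reports the ball at (1, 2).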

An example output picture on the robot LCD can be seen in Figure 18.6. The lines indicate the position of the detected ball in the picture, while its global position on the field is displayed in centimeters on the right-hand side.

Figure 18.6: LCD output after ball detection


This simple image analysis algorithm is very efficient and does not slow down the overall system too much. This is essential, since the same controller that does the image processing also has to handle sensor readings, motor control, and timer interrupts. By using coherence, we achieve a frame rate of 3.3 fps when no ball is in the image and 4.2 fps when the ball was detected in the previous frame. The use of a FIFO buffer for reading images from the camera (not used here) can significantly increase the frame rate.

18.6 Trajectory Planning

Once the ball position has been determined, the robot executes an approach behavior, which should drive it into a position to kick the ball forward or even into the opponent’s goal. For this, a trajectory has to be generated. The robot knows its own position and orientation by dead reckoning; the ball position has been determined either by the robot’s local search behavior or by communicating with other robots in its team.

18.6.1 Driving Straight and Circle Arcs

The start position and orientation of this trajectory are given by the robot's current position, the end position is the ball position, and the end orientation is the line between the ball and the opponent's goal. A convenient way to generate a smooth trajectory for given start and end points with orientations is to use Hermite splines. However, since the robot might have to drive around the ball in order to kick it toward the opponent's goal, we use a case distinction to add "via-points" to the trajectory (see Figure 18.7). These trajectory points guide the robot around the ball, letting it pass not too close, while maintaining a smooth trajectory.

Figure 18.7: Ball approach strategy
[decision tree:
If the robot drove directly to the ball, would it kick the ball toward its own goal?
  yes: drive behind the ball
  no:  If the robot drove directly to the ball, would it kick the ball directly into the opponent's goal?
    yes: drive directly to the ball
    no:  Is the robot in its own half?
      yes: drive directly to the ball
      no:  drive behind the ball]


In this algorithm, driving directly means to approach the ball without via-points on the robot's path. If such a trajectory is not possible (for example, when the ball lies between the robot and its own goal), the algorithm inserts a via-point in order to avoid an own goal. This makes the robot pass the ball on a specified side before approaching it. If the robot is in its own half, it is sufficient to drive to the ball and kick it toward the other team's half. When a player is already in the opposing team's half, however, it is necessary to approach the ball with the correct heading in order to kick it directly toward the opponent's goal.
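The case distinction of Figure 18.7 can be written as a small decision function. Splitting the two "would it kick toward..." questions into boolean predicates computed elsewhere, and placing the own goal at x = 0 and the opponent's goal at x = field_length, are assumptions of this sketch:

```python
def approach_action(robot_x, would_hit_own_goal, would_hit_opponent_goal,
                    field_length):
    """Ball approach strategy (Figure 18.7). The two heading predicates
    are assumed to be derived from the robot->ball ray by the caller."""
    if would_hit_own_goal:
        return "drive behind the ball"
    if would_hit_opponent_goal:
        return "drive directly to the ball"
    if robot_x < field_length / 2:      # robot in its own half
        return "drive directly to the ball"
    return "drive behind the ball"
```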

Figure 18.8: Ball approach cases
[figure: cases (a)-(e) shown in the x-y plane]

The different driving actions are displayed in Figure 18.8. The robot drives either directly to the ball (Figure 18.8 a, c, e) or onto a curve (either linear and circular segments or a spline curve) including via-points to approach the ball from the correct side (Figure 18.8 b, d).

Drive directly to the ball (Figure 18.8 a, b):

With localx and localy being the local coordinates of the ball seen from the robot, the angle to reach the ball can be set directly as:

α = atan(localy / localx)

With l being the distance between the robot and the ball, the distance to drive in a curve is given by:

d = l · α / sin α
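The direct-drive case can be sketched as follows; using atan2 for quadrant safety and the handling of a ball lying straight ahead are implementation details not given in the text:

```python
from math import atan2, hypot, sin

def direct_drive(local_x, local_y):
    """Steering angle and arc length for driving directly to the ball,
    following alpha = atan(local_y / local_x) and d = l * alpha / sin(alpha),
    i.e. a circular arc whose chord is the robot-ball line."""
    alpha = atan2(local_y, local_x)
    l = hypot(local_x, local_y)
    if alpha == 0.0:
        return alpha, l          # ball straight ahead: drive distance l
    return alpha, l * alpha / sin(alpha)
```

The formula follows from chord geometry: a chord of length l subtending half-angle α on a circle of radius R satisfies l = 2R·sin α, while the arc length is 2Rα.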

Drive around the ball (Figure 18.8 c, d, e):

If a robot is looking toward the ball but at the same time facing its own goal, it can drive along a circular path with a fixed radius that goes through the ball. The radius of this circle was chosen arbitrarily and defined to be 5 cm. The circle is placed in such a way that the tangent at the position of the ball also goes through the opponent's goal. The robot turns on the spot until it faces this circle, drives to it in a straight line, and drives behind the ball on the circular path (Figure 18.9).

Compute turning angle γ for turning on the spot:

γ = α + β

Circle angle β between new robot heading and ball:

β = β1 − β2

Angle to be driven on circular path: 2β

Angle β1, goal heading from ball to x-axis:

β1 = atan(bally / (length − ballx))

Angle β2, ball heading from robot to x-axis:

β2 = atan((bally − roboty) / (ballx − robotx))

Angle α from robot orientation to ball heading (φ is robot orientation):

α = φ − β2

Figure 18.9: Calculating a circular path toward the ball
[figure: robot at (robotx, roboty) with orientation φ and ball at (ballx, bally) on a field of length "length"; angles α, β, β1, β2 marked]
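Under the angle conventions of Figure 18.9, the computation can be sketched as follows. The sign conventions and the field-coordinate assumption (own goal near x = 0, opponent's goal center at x = length on the x-axis) are assumptions of this sketch, not guaranteed to match the book's figure exactly:

```python
from math import atan2

def circular_path_angles(robot_x, robot_y, robot_phi, ball_x, ball_y, length):
    """Angles for the circular approach path: beta1 is the goal heading
    from the ball, beta2 the ball heading from the robot, alpha the angle
    from robot orientation to ball heading, beta the circle angle, and
    gamma the initial turn on the spot."""
    beta1 = atan2(ball_y, length - ball_x)              # ball -> goal heading
    beta2 = atan2(ball_y - robot_y, ball_x - robot_x)   # robot -> ball heading
    alpha = robot_phi - beta2
    beta = beta1 - beta2
    gamma = alpha + beta
    return alpha, beta, gamma
```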

18.6.2 Driving Spline Curves

The simplest driving trajectory combines linear segments with circle arc segments. An interesting alternative is the use of splines: they can generate a smooth path and avoid turning on the spot, and therefore produce a faster path.

Given the robot position Pk and its heading DPk as well as the ball position Pk+1 and the robot’s destination heading DPk+1 (facing toward the opponent’s goal from the current ball position), it is possible to calculate a spline which for every fraction u of the way from the current robot position to the ball position describes the desired location of the robot.

The Hermite blending functions H0 .. H3 with parameter u running from 0 to 1 are defined as follows:

H0(u) = 2u³ − 3u² + 1
H1(u) = −2u³ + 3u²
H2(u) = u³ − 2u² + u
H3(u) = u³ − u²

The current robot position is then defined by:

P(u) = Pk·H0(u) + Pk+1·H1(u) + DPk·H2(u) + DPk+1·H3(u)
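Evaluating this spline is a few lines of code. Treating the headings DPk and DPk+1 as 2D tangent vectors is an assumption about their representation:

```python
def hermite_point(u, p_start, p_end, d_start, d_end):
    """Point on a Hermite spline at parameter u in [0, 1].
    p_* are (x, y) positions, d_* are tangent (heading) vectors."""
    h0 = 2*u**3 - 3*u**2 + 1
    h1 = -2*u**3 + 3*u**2
    h2 = u**3 - 2*u**2 + u
    h3 = u**3 - u**2
    return tuple(h0*p0 + h1*p1 + h2*t0 + h3*t1
                 for p0, p1, t0, t1 in zip(p_start, p_end, d_start, d_end))
```

At u = 0 the blending functions select the start position and tangent; at u = 1 they select the end position and tangent, so the curve meets both endpoint constraints by construction.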

Figure 18.10: Spline driving simulation

A PID controller is used to calculate the linear and rotational speed of the robot at every point of its way to the ball, trying to get it as close to the spline curve as possible. The robot’s speed is constantly updated by a background process that is invoked 100 times per second. If the ball can no longer be detected (for example if the robot had to drive around it and lost it out of sight), the robot keeps driving to the end of the original curve. An updated driving command is issued as soon as the search behavior recognizes the (moving) ball at a different global position.
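A minimal PID controller of the kind described, run at a fixed rate (e.g. 100 Hz) on the error between the robot and the spline curve, might look like this; the gains and the exact error definition are placeholders, not the book's values:

```python
class PID:
    """Minimal PID controller for one control channel, updated at a
    fixed interval dt (e.g. 0.01 s for the 100 Hz background process)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

One such controller per channel (linear speed, rotational speed) would be updated by the background process on every tick.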

This strategy was first designed and tested on the EyeSim simulator (see Figure 18.10), before running on the actual robot. Since the spline trajectory computation is rather time consuming, this method has been substituted by simpler drive-and-turn algorithms when participating in robot soccer tournaments.

18.6.3 Ball Kicking

After a player has successfully captured the ball, it can dribble or kick it toward the opponent’s goal. Once a position close enough to the opponent’s goal has been reached or the goal is detected by the vision system, the robot activates its kicker to shoot the ball into the goal.

The driving algorithm for the goalkeeper is rather simple. The robot starts at a position about 10 cm in front of the goal. As soon as the ball is detected, it drives between the ball and the goal on a circular path within the defense area. The robot follows the movement of the ball by tilting its camera up and down. If the robot reaches a corner of its goal, it remains in position and turns on the spot to keep track of the ball. If the ball is not seen for a pre-defined number of images, the robot suspects that the ball has changed position and drives back to the middle of the goal to restart its search for the ball.
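The goalkeeper behavior can be summarized as a simple decision function; the predicate names, the action strings, and the lost-ball threshold are assumptions of this sketch:

```python
def goalie_action(ball_visible, frames_without_ball, max_lost_frames,
                  at_goal_corner, ball_close):
    """One decision step of the goalkeeper behavior described above."""
    if ball_visible:
        if ball_close:
            return "kick"                         # shoot the ball away
        if at_goal_corner:
            return "turn on the spot"             # keep tracking the ball
        return "drive between ball and goal"      # circular path in defense area
    if frames_without_ball > max_lost_frames:
        return "return to goal center"            # restart the ball search
    return "keep position"
```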

Figure 18.11: CIIPS Glory versus Lucky Star (1998)

If the ball is detected in a position very close to the goalie, the robot activates its kicker to shoot the ball away.


Fair play is obstacle avoidance

“Fair Play” has always been considered an important issue in human soccer, so the CIIPS Glory robot soccer team (Figure 18.11) has also stressed its importance. The robots constantly check for obstacles in their way and, if one is detected, try to avoid hitting it. If an obstacle has been touched, the robot drives backward for a certain distance until the obstacle is out of reach. If the robot has been dribbling the ball toward the goal, it turns quickly toward the opponent’s goal to kick the ball away from the obstacle, which could be a wall or an opposing player.


19 Neural Networks

The artificial neural network (ANN), often simply called neural network (NN), is a processing model loosely derived from biological neurons [Gurney 2002]. Neural networks are often used for classification problems or decision making problems that do not have a simple or straightforward algorithmic solution. The beauty of a neural network is its ability to learn an input-to-output mapping from a set of training cases without explicit programming, and then to generalize this mapping to cases not seen previously.

There is a large research community as well as numerous industrial users working on neural network principles and applications [Rumelhart, McClelland 1986], [Zaknich 2003]. In this chapter, we only briefly touch on this subject and concentrate on the topics relevant to mobile robots.

19.1 Neural Network Principles

A neural network is constructed from a number of individual units called neurons that are linked with each other via connections. Each individual neuron has a number of inputs, a processing node, and a single output, while each connection from one neuron to another is associated with a weight. Processing in a neural network takes place in parallel for all neurons. Each neuron constantly (in an endless loop) evaluates (reads) its inputs, calculates its local activation value according to a formula shown below, and produces (writes) an output value.

The activation function of a neuron a(I, W) is the weighted sum of its inputs, i.e. each input is multiplied by the associated weight and all these terms are added. The neuron’s output is determined by the output function o(I, W), for which numerous different models exist.

In the simplest case, just thresholding is used for the output function. For our purposes, however, we use the non-linear “sigmoid” output function defined in Figure 19.1 and shown in Figure 19.2, which has superior characteristics for learning (see Section 19.3). This sigmoid function approximates the Heaviside step function, with parameter ρ controlling the slope of the graph (usually set to 1).

Figure 19.1: Individual artificial neuron
[figure: inputs i1 .. in with weights w1 .. wn feeding a processing node with activation a and output o]

Activation (weighted sum of the inputs):

a(I, W) = Σ (k = 1 .. n) ik · wk

Output (sigmoid):

o(I, W) = 1 / (1 + e^(−ρ · a(I, W)))
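The two neuron formulas translate directly into code; this sketch defaults ρ to 1, as the text suggests:

```python
from math import exp

def activation(inputs, weights):
    """Weighted sum a(I, W): each input multiplied by its weight, summed."""
    return sum(i * w for i, w in zip(inputs, weights))

def output(inputs, weights, rho=1.0):
    """Sigmoid output o(I, W) = 1 / (1 + e^(-rho * a(I, W)))."""
    return 1.0 / (1.0 + exp(-rho * activation(inputs, weights)))
```

A zero activation yields an output of exactly 0.5, the midpoint of the sigmoid; large positive or negative activations saturate toward 1 or 0, approximating the Heaviside step.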

Figure 19.2: Sigmoidal output function
[figure: sigmoid curve rising from 0 to 1 over the input range −5 to 5]

19.2 Feed-Forward Networks

A neural net is constructed from a number of interconnected neurons, which are usually arranged in layers. The outputs of one layer of neurons are connected to the inputs of the following layer. The first layer of neurons is called the “input layer”, since its inputs are connected to external data, for example sensors to the outside world. The last layer of neurons is called the “output layer”, accordingly, since its outputs are the result of the total neural network and are made available to the outside. These could be connected, for example, to robot actuators or external decision units. All neuron layers between the input layer and the output layer are called “hidden layers”, since their actions cannot be observed directly from the outside.
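A layer-by-layer evaluation of such a network can be sketched as follows; the weight layout (one row of the weight matrix per neuron) and the sigmoid output with ρ = 1 are assumptions of this sketch:

```python
from math import exp

def sigmoid(a, rho=1.0):
    return 1.0 / (1.0 + exp(-rho * a))

def layer(inputs, weight_matrix):
    """Evaluate one layer of neurons: each row of weight_matrix holds
    the input weights of one neuron; outputs pass through the sigmoid."""
    return [sigmoid(sum(i * w for i, w in zip(inputs, row)))
            for row in weight_matrix]

def feed_forward(inputs, layers):
    """Propagate inputs through successive layers (hidden, then output)."""
    for weight_matrix in layers:
        inputs = layer(inputs, weight_matrix)
    return inputs
```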

If all connections go from the outputs of one layer to the input of the next layer, and there are no connections within the same layer or connections from a later layer back to an earlier layer, then this type of network is called a “feedforward network”. Feed-forward networks (Figure 19.3) are used for the sim-

278