
Hardware Specification

Port      Pins

Motors    DC motor and encoder connectors (2 times 10 pin)
          Motors are mapped to TPU channels 0..1
          Encoders are mapped to TPU channels 2..5

          Note: Pins are labeled in the following way:

          | 1 | 3 | 5 | 7 | 9 |
          ---------------------
          | 2 | 4 | 6 | 8 | 10|

          1   Motor +
          2   Vcc (unregulated)
          3   Encoder channel A
          4   Encoder channel B
          5   GND
          6   Motor –
          7   --
          8   --
          9   --
          10  --

Servos    Servo connectors (12 times 3 pin)
          Servo signals are mapped to TPU channels 2..13

          Note: If both DC motors are used, TPU channels 0..5 are
          already in use, so servo connectors Servo1 (TPU2) ..
          Servo4 (TPU5) cannot be used.

          1   Signal
          2   Vcc (unregulated)
          3   GND

Table D.8: Pinouts EyeCon Mark 5 (continued)


Port      Pins

Infrared  Infrared connectors (6 times 4 pin)
          Sensor outputs are mapped to digital input 0..3

          1   GND
          2   Vin (pulse)
          3   Vcc (5V regulated)
          4   Sensor output (digital)

Analog    Analog input connector (10 pin)
          Microphone, mapped to analog input 0
          Battery-level gauge, mapped to analog input 1

          1   Vcc (5V regulated)
          2   Vcc (5V regulated)
          3   analog input 2
          4   analog input 3
          5   analog input 4
          6   analog input 5
          7   analog input 6
          8   analog input 7
          9   analog GND
          10  analog GND

Digital   Digital input/output connector (16 pin)
          [Infrared PSDs use digital output 0 and digital input 0..3]

          1-8     digital output 0..7
          9-12    digital input 4..7
          13-14   Vcc (5V)
          15-16   GND

Table D.8: Pinouts EyeCon Mark 5 (continued)


E Laboratories

 

Lab 1 Controller

The first lab uses the controller only and not the robot

EXPERIMENT 1 Etch-a-Sketch

Write a program that implements the “Etch-a-Sketch” children’s game.

Use the four buttons in a consistent way for moving the drawing pen left/right and up/down. Do not erase previous dots, so pressing the buttons will leave a visible trail on the screen.
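The button-handling core can be sketched as a small state update. The RoBIOS calls (reading a key, then LCDSetPixel() to draw) are left as comments so the movement and clamping logic stands alone; the key constants below are placeholders, not the actual RoBIOS key codes.

```c
#include <stdio.h>

#define ROWS 64   /* LCD height */
#define COLS 128  /* LCD width  */

/* Placeholder key codes -- the real RoBIOS constants differ. */
enum { KEY_LEFT = 1, KEY_RIGHT, KEY_UP, KEY_DOWN };

/* Move the pen one step in the direction of the pressed key,
   clamping to the LCD boundaries. Returns 1 if the pen moved. */
int move_pen(int *row, int *col, int key)
{
    int r = *row, c = *col;
    switch (key) {
    case KEY_LEFT:  if (c > 0)        c--; break;
    case KEY_RIGHT: if (c < COLS - 1) c++; break;
    case KEY_UP:    if (r > 0)        r--; break;
    case KEY_DOWN:  if (r < ROWS - 1) r++; break;
    default: return 0;
    }
    int moved = (r != *row) || (c != *col);
    *row = r; *col = c;
    /* In the real program: LCDSetPixel(*row, *col, 1); */
    return moved;
}
```

Because old pixels are never erased, each step simply sets one more pixel and the trail accumulates on screen.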

EXPERIMENT 2 Reaction Test Game

Write a program that implements the reaction game as given by the flow diagram.

To compute a random wait time, isolate the last digit of the current time using OSGetCount() and transform it into a value for OSWait() to wait between 1 and 8 seconds.

Flow diagram:

  START
  use last hex-digit of OS count as random number
  wait for random time interval
  is button pressed?
    YES: print "cheated!"
    NO:  print message "press button"; get current sys. timer (a)
         wait for key press
         get current sys. timer (b)
         print reaction time b–a in decimal form
  STOP
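The random-wait step of the diagram can be derived from the counter like this. This sketch assumes OSWait() takes its delay in 1/100-second ticks (check the RoBIOS manual for the actual unit); the counter value is passed in as a parameter so the arithmetic can be tested without the RoBIOS call.

```c
/* Map an arbitrary OSGetCount() value to a wait between 1 and 8
   seconds, expressed in assumed 1/100-second ticks for OSWait().
   The last hex digit (0..15) serves as the random source. */
int random_wait_ticks(int os_count)
{
    int digit = os_count & 0xF;        /* last hex digit, 0..15 */
    int seconds = (digit % 8) + 1;     /* scaled into 1..8      */
    return seconds * 100;              /* ticks of 1/100 s      */
}
```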


EXPERIMENT 3 Analog Input and Graphics Output

Write a program to plot the amplitude of an analog signal. For this experiment, the analog source will be the microphone. For input, use the following function:

AUCaptureMic(0)

It returns the current microphone intensity value as an integer between 0 and 1,023.

Plot the analog signal versus time on the graphics LCD. The dimension of the LCD is 64 rows by 128 columns. For plotting use the function:

LCDSetPixel(row,col,1)

Maintain an array of the most recent 128 data values and start plotting data values from the leftmost column (0). When the rightmost column is reached (127), continue at the leftmost column (0) – but be sure to remove the column’s old pixel before you plot the new value. This will result in an oscilloscope-like output.

[LCD sketch: pixel (0,0) at the top left, (63,127) at the bottom right, with the current value plotted in the active column.]
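The wrap-around plotting can be captured in a small ring-buffer step. The LCDSetPixel() calls are shown as comments so the scaling and column bookkeeping stand alone; the 0..1023 input range matches AUCaptureMic(0).

```c
#include <string.h>

#define COLS 128
#define ROWS 64

typedef struct {
    int col;            /* next column to draw         */
    int row[COLS];      /* last plotted row per column */
} Scope;

void scope_init(Scope *s)
{
    s->col = 0;
    memset(s->row, -1, sizeof s->row);  /* -1: nothing drawn yet */
}

/* One plot step: remember where the old pixel in this column was
   (so it can be erased), scale the new sample onto rows 63..0,
   and advance the column with wrap-around. Returns the row. */
int scope_step(Scope *s, int sample)
{
    int row = (ROWS - 1) - (sample * ROWS) / 1024;   /* 0..1023 -> 63..0 */
    if (s->row[s->col] >= 0) {
        /* In the real program: LCDSetPixel(s->row[s->col], s->col, 0); */
    }
    s->row[s->col] = row;
    /* In the real program: LCDSetPixel(row, s->col, 1); */
    s->col = (s->col + 1) % COLS;                    /* wrap at column 127 */
    return row;
}
```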

Lab 2 Simple Driving

Driving a robot using motors and shaft encoders

EXPERIMENT 4 Drive a Fixed Distance and Return

Write a robot program using VWDriveStraight and VWDriveTurn to let the robot drive 40cm straight, then turn 180°, drive back and turn again, so it is back in its starting position and orientation.

EXPERIMENT 5 Drive in a Square

Similar to Experiment 4, but drive in a square and finish in the starting position and orientation.

EXPERIMENT 6 Drive in a Circle

Use routine VWDriveCurve to drive in a circle.


Lab 3 Driving Using Infrared Sensors

Combining sensor reading with driving routines

EXPERIMENT 7 Drive Straight toward an Obstacle and Return

This is a variation of an experiment from the previous lab. This time the task is to drive until the infrared sensors detect an obstacle, then turn around and drive back the same distance.

Lab 4 Using the Camera

Using camera and controller without the vehicle

EXPERIMENT 8 Motion Detection with Camera

By subtracting the pixel values of two subsequent grayscale images, motion can be detected. Use an algorithm that adds up the grayscale differences in three image sections (left, middle, right). Then output the result by printing the word “left”, “middle”, or “right”.
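The section-wise difference sum can be sketched as follows; the image dimensions here are illustrative, not the EyeCam's actual resolution.

```c
#include <stdlib.h>

#define W 120   /* example image width  */
#define H 80    /* example image height */

/* Sum absolute grayscale differences between two frames in the
   left, middle, and right thirds of the image, and return the
   index of the section with the most motion
   (0 = left, 1 = middle, 2 = right). */
int motion_section(const unsigned char a[H][W],
                   const unsigned char b[H][W], long diff[3])
{
    diff[0] = diff[1] = diff[2] = 0;
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            diff[x / (W / 3)] += abs((int)a[y][x] - (int)b[y][x]);
    int best = 0;
    if (diff[1] > diff[best]) best = 1;
    if (diff[2] > diff[best]) best = 2;
    return best;
}
```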

Variation (a): Mark the detected motion spot graphically on the LCD.

Variation (b): Record audio files for speaking “left”, “middle”, “right” and have the EyeBot speak the result instead of print it.

EXPERIMENT 9 Motion Tracking

Detect motion like before. Then move the camera servo (and with it the camera) in the direction of movement. Make sure that you do not mistake the camera's own motion for object motion.

Lab 5 Controlled Motion

Driving the robot using motors and shaft encoders only

Due to manufacturing tolerances in the motors, the wheels of a mobile robot will usually not turn at the same speed when the same voltage is applied. A naive program for driving straight may therefore in fact produce a curve. To remedy this, the wheel encoders have to be read periodically and the wheel speeds adjusted accordingly.

For the following experiments, use only the low-level routines MOTORDrive and QUADRead. Do not use any of the vω routines, which contain a PID controller as part of their implementation.

EXPERIMENT 10 PID Controller for Velocity Control of a Single Wheel

Start by implementing a P controller, then add I and D components. The wheel should rotate at a specified rotational velocity. Increasing the load on the wheel (e.g. by manually slowing it down) should result in an increased motor output to counterbalance the higher load.
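A minimal PID step in C could look like the following. The gains and the tick period are hypothetical and must be tuned on the real motor; in the lab program the output would be fed to MOTORDrive() and the measured speed derived from successive QUADRead() values.

```c
/* Textbook PID controller: output = Kp*e + Ki*integral(e) + Kd*de/dt */
typedef struct {
    double kp, ki, kd;     /* controller gains         */
    double integral;       /* accumulated error        */
    double prev_err;       /* error from the last step */
} PID;

/* One control step: dt is the time since the last call in seconds. */
double pid_step(PID *p, double setpoint, double measured, double dt)
{
    double err = setpoint - measured;
    p->integral += err * dt;
    double deriv = (err - p->prev_err) / dt;
    p->prev_err = err;
    return p->kp * err + p->ki * p->integral + p->kd * deriv;
}
```

Starting with ki = kd = 0 gives the pure P controller the experiment asks for; the I and D terms can then be enabled one at a time.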


EXPERIMENT 11 PID Controller for Position Control of a Single Wheel

The previous experiment was only concerned with maintaining a certain rotational velocity of a single wheel. Now we want this wheel to start from rest, accelerate to the specified velocity, and finally brake to come to a standstill exactly at a specified distance (e.g. exactly 10 revolutions).

This experiment requires you to implement speed ramps. These are achieved by starting with a constant acceleration phase, then changing to a phase with (controlled) constant velocity, and finally changing to a phase with constant deceleration. The time points of change and the acceleration values have to be calculated and monitored during execution, to make sure the wheel stops at the correct position.

EXPERIMENT 12 Velocity Control of a Two-Wheeled Robot

Extend the previous PID controller for a single wheel to a PID controller for two wheels. There are two major objectives:

a. The robot should drive along a straight path.

b. The robot should maintain a constant speed.

You can try different approaches and decide which one is the best solution:

a. Implement two PID controllers, one for each wheel.

b. Implement one PID controller for forward velocity and one PID controller for rotational velocity (here: the desired value is zero).

c. Implement only a single PID controller and use offset correction values for both wheels.

Compare the driving performance of your program with the built-in vω routines.
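Approach (c) can be sketched as a single proportional controller on the encoder-tick difference, producing symmetric offsets for the two motor commands. The gain and command units are hypothetical.

```c
/* One controller for straight driving: the encoder-tick difference
   is turned into a correction that slows the leading wheel and
   speeds up the trailing one. base is the nominal motor command. */
void straight_correction(int left_ticks, int right_ticks,
                         double base, double kp,
                         double *left_out, double *right_out)
{
    double err  = (double)(left_ticks - right_ticks); /* >0: left ahead */
    double corr = kp * err;
    *left_out  = base - corr;
    *right_out = base + corr;
}
```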

EXPERIMENT 13 PID Controller for Driving in Curves

Extend the PID controller from the previous experiment to allow driving in general curves as well as straight lines.

Compare the driving performance of your program with the built-in vω routines.

EXPERIMENT 14 Position Control of a Two-Wheeled Robot

Extend the PID controller from the previous experiment to enable position control as well as velocity control. Now it should be possible to specify a path (e.g. straight line or curve) plus a desired distance or angle and the robot should come to a standstill at the desired location after completing its path.

Compare the driving performance of your program with the built-in vω routines.


Lab 6 Wall-Following

This will be a useful subroutine for subsequent experiments

EXPERIMENT 15 Driving Along a Wall

Let the robot drive forward until it detects a wall to its left, right, or front. If the closest wall is to its left, it should drive along the wall facing its left-hand side and vice versa for right. If the nearest wall is in front, the robot can turn to either side and follow the wall.

The robot should drive at a constant distance of 15cm from the wall. That is, if the wall is straight, the robot drives in a straight line at constant distance to the wall; if the wall is curved, the robot follows the same curve at the fixed distance.
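The distance-keeping part reduces to a proportional controller on the error between the measured side distance and the 15cm target. This is a sketch with a hypothetical gain; the sign convention assumed here is that a positive output steers toward the wall.

```c
#define TARGET_MM 150.0   /* desired wall distance: 15 cm */

/* Turn the side-PSD reading into a steering command.
   Positive output: steer toward the wall; negative: steer away. */
double wall_follow_steer(double dist_mm, double kp)
{
    return kp * (dist_mm - TARGET_MM);
}
```

In the lab program this steering value would set the rotational component of the drive command while the forward speed stays constant.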

Lab 7 Maze Navigation

Have a look at the Micro Mouse Contest. This is an international competition for robots navigating mazes.

EXPERIMENT 16 Exploring a Maze and Finding the Shortest Path

The robot has to explore and analyze an unknown maze consisting of squares of a known fixed size. An important sub-goal is to keep track of the robot’s position, measured in squares in the x- and y-direction from the starting position.

After searching the complete maze the robot is to return to its starting position. The user may now enter any square position in the maze and the robot has to drive to this location and back along the shortest possible path.
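Once the maze has been explored, the shortest path can be found with a breadth-first flood fill over the square grid. The sketch below simplifies the wall representation to blocked squares (the real explorer records walls per square edge); the maze size is an example.

```c
#include <string.h>

#define MAZE 8   /* example maze size in squares */

/* Breadth-first flood fill: dist[y][x] becomes the length of the
   shortest path from (sx,sy) in squares, or -1 if unreachable.
   wall[y][x] != 0 marks a blocked square. */
void maze_bfs(const int wall[MAZE][MAZE], int sx, int sy,
              int dist[MAZE][MAZE])
{
    int qx[MAZE * MAZE], qy[MAZE * MAZE], head = 0, tail = 0;
    static const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};

    memset(dist, -1, sizeof(int) * MAZE * MAZE);
    dist[sy][sx] = 0;
    qx[tail] = sx; qy[tail] = sy; tail++;
    while (head < tail) {
        int x = qx[head], y = qy[head]; head++;
        for (int k = 0; k < 4; k++) {
            int nx = x + dx[k], ny = y + dy[k];
            if (nx < 0 || nx >= MAZE || ny < 0 || ny >= MAZE) continue;
            if (wall[ny][nx] || dist[ny][nx] >= 0) continue;
            dist[ny][nx] = dist[y][x] + 1;
            qx[tail] = nx; qy[tail] = ny; tail++;
        }
    }
}
```

Driving to a user-selected square then means following decreasing distance values from that square back toward the start (or vice versa).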

Lab 8 Navigation

Two of the classic and most challenging tasks for mobile robots

EXPERIMENT 17 Navigating a Known Environment

The previous lab dealt with a rather simple environment. All wall segments were straight, had the same length, and all angles were 90°. Now imagine the task of navigating a somewhat more general environment, e.g. the floor of a building.

Specify a map of the floor plan, e.g. in “world format” (see EyeSim simulator), and specify a desired path for the robot to drive in map coordinates. The robot has to use its on-board sensors to carry out self-localization and navigate through the environment using the provided map.


EXPERIMENT 18 Mapping an Unknown Environment

One of the classic robot tasks is to explore an unknown environment and automatically generate a map. So, the robot is positioned at any point in its environment and starts exploration by driving around and mapping walls, obstacles, etc.

This is a very challenging task and greatly depends on the quality and complexity of the robot’s on-board sensors. Almost all commercial robots today use laser scanners, which return a near-perfect 2D distance scan from the robot’s location. Unfortunately, laser scanners are still several times larger, heavier, and more expensive than our robots, so we have to make do without them for now.

Our robots should make use of their wheel encoders and infrared PSD sensors for positioning and distance measurements. This can be augmented by image processing, especially for finding out when the robot has returned to its start position and has completed the mapping.

The derived map should be displayed on the robot’s LCD and also be provided as an upload to a PC.


Lab 9 Vision

EXPERIMENT 19 Follow the Light

Assume the robot driving area is enclosed by a boundary wall. The robot’s task is to find the brightest spot within this walled rectangular area. The robot should use its camera to search for the brightest spot and use its infrared sensors to avoid collisions with walls or obstacles.

Idea 1: Follow the wall at a fixed distance, then at the brightest spot turn and drive inside the area.

Idea 2: Let the robot turn a full circle (360°) and record the brightness levels for each angle. Then drive in the direction of the brightest spot.

EXPERIMENT 20 Line-Following

Mark a bright white line on a dark table, e.g. using masking tape. The robot’s task is to follow the line.

This experiment is somewhat more difficult than the previous one, since not just the general direction of brightness has to be determined; the position (and maybe even curvature) of a bright line on a dark background has to be found. Furthermore, the driving commands have to be chosen according to the line’s curvature, in order to prevent the robot from “losing the line”, i.e. the line drifting out of the robot’s field of view.

Special routines may be programmed for dealing with a “lost line” or for learning the maximum speed a robot can drive at for a given line curvature without losing the line.

Lab 10 Object Detection

EXPERIMENT 21 Object Detection by Shape

An object can be detected by its:

a. Shape

b. Color

c. A combination of shape and color

To make things easy at the beginning, we use objects of an easy-to-detect shape and color, e.g. a bright yellow tennis ball. A ball creates a simple circular image from all viewpoints, which makes it easy to detect its shape. Of course it is not that easy for more general objects: just imagine looking from different viewpoints at a coffee mug, a book, or a car.


There are textbooks full of image processing and detection tasks. This is a very broad and active research area, so we are only getting an idea of what is possible.

An easy way of detecting shapes, e.g. distinguishing squares, rectangles, and circles in an image, is to calculate “moments”. First of all, you have to identify a continuous object from a pixel pattern in a binary (black and white) image. Then, you compute the object’s area and circumference. From the relationship between these two values you can distinguish several object categories such as circle, square, rectangle.
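The area/circumference relationship is usually expressed as the circularity 4πA/P², which is 1 for an ideal circle, π/4 (about 0.785) for a square, and smaller for elongated rectangles. A sketch with illustrative thresholds:

```c
#include <math.h>
#include <string.h>

#define PI 3.14159265358979

/* Circularity 4*pi*A / P^2 from an object's area and perimeter. */
double circularity(double area, double perimeter)
{
    return 4.0 * PI * area / (perimeter * perimeter);
}

/* Coarse classification by circularity; threshold values are
   illustrative and would be tuned on real segmented images. */
const char *classify_shape(double area, double perimeter)
{
    double c = circularity(area, perimeter);
    if (c > 0.9) return "circle";
    if (c > 0.7) return "square";
    return "rectangle";
}
```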

EXPERIMENT 22 Object Detection by Color

Another method for object detection is color recognition, as mentioned above. Here, the task is to detect a colored object from a background and possibly other objects (with different colors).

Color detection is simpler than shape detection in most cases, but it is not as straightforward as it seems. The bright yellow color of a tennis ball varies quite a bit over its circular image, because the reflection depends on the angle of the ball’s surface patch to the viewer. That is, the outer areas of the disk will be darker than the inner area. Also, the color values will not be the same when looking at the same ball from different directions, because the lighting (e.g. ceiling lights) will look different from a different point of view. If there are windows in your lab, the ball’s color values will change during the day because of the movement of the sun. So there are a number of problems to be aware of, and this is not even taking into account imperfections on the ball itself, like the manufacturer’s name printed on it, etc.

Many image sources return color values as RGB (red, green, blue). Because of the problems mentioned before, these RGB values will vary a lot for the same object, although its basic color has not changed. Therefore it is a good idea to convert all color values to HSV (hue, saturation, value) before processing and then mainly work with the more stable hue of a pixel.

The idea is to detect an area of hue values similar to the specified object hue that should be detected. It is important to analyze the image for a color “blob”, or a group of matching hue values in a neighborhood area. This can be achieved by the following steps:

a. Convert the RGB input image to HSV.

b. Generate a binary image by checking whether each pixel’s hue value is within a certain range of the desired object hue:

   binary[i,j] = ( |hue[i,j] – hue[obj]| < H )

c. For each row, count the matching binary pixels.

d. For each column, count the matching binary pixels.

e. The row and column counters form a basic histogram. Assuming there is only one object to detect, we can use these values directly:
