Professional Microsoft Robotics Developer Studio


Chapter 9: Adventures in Simulation

Figure 9-14

Now you have the tools you need to develop even more sophisticated vision and image processing algorithms in the simulation environment.

Improved Behavior

Now that the soccer players can identify the most important objects on the playing field, it is possible to implement a more intelligent player algorithm. The new code is in the

ProcessFrameAndDetermineBehavior method in BetterSoccerPlayer.cs. The SoccerPlayer service requests frames from the webcam periodically, and this method is called each time a frame is received.

The soccer player already keeps track of its state, whether it is currently waiting to play, playing, or penalized. A new Mode property has been added to the state to keep track of the current behavior while in the playing state. These modes are as follows:

FindBall: The ball is not currently in the image. Search for it by slowly spinning.

ReAlign: The ball is in the image but it is not lined up with the goal. Attempt to reposition.

Kick: The ball was being tracked but it has abruptly disappeared. This could mean that you are so close to it that it doesn’t show up on the camera. Move forward quickly to push it toward the goal.

Approach: The ball is in sight but too far away to do anything. Move closer.


Part II: Simulations

Before you can make a decision about the next behavior to implement, you need to determine which objects have been found using the following code:

bool foundBall = _segments[0].Area > 10;
bool foundBlueGoal = (_segments[1].Area > 500) && (_segments[1].XSpan > 75);
bool foundYellowGoal = (_segments[2].Area > 500) && (_segments[2].XSpan > 75);

If the webcam managed to find more than 10 pixels that are soccer-ball-colored, then the ball has been located. It is a little more complicated with the goals because you want to identify the goal but not the colored posts at midfield. In this case, you must see at least 500 pixels that are the right color and they must cover a horizontal span greater than 75 pixels. This effectively eliminates the midfield posts while still allowing the goals to be seen from across the soccer field.

The soccer player starts out in the FindBall mode and executes the following code:

switch (_state.Mode)
{
    case SoccerPlayerMode.FindBall:
    case SoccerPlayerMode.Approach:
    {
        if (!foundBall)
        {
            if (_previousArea > 4000)
            {
                // we are right on top of the ball so we can't see it. Kick!
                _previousArea = 0;
                _mainPort.Post(new ExecuteCommand(
                    new ExecuteCommandRequest(
                        SimpleSoccerPlayerCommand.Kick)));
            }
            else
            {
                _mainPort.Post(new ExecuteCommand(
                    new ExecuteCommandRequest(
                        SimpleSoccerPlayerCommand.FindBall)));
                _previousArea = 0;
            }
        }

If the ball isn’t in sight, then you check to see whether it was previously very big. When the robot gets very close to the ball, the camera actually penetrates the ball and it disappears in the Webcam view. This is analogous to a real-world scenario in which a camera gets very close to the ball and no longer recognizes it because it is suddenly very dark. In this case, you send a command to the main port to instruct the player to move forward quickly to kick the ball. This is often more effective than attempting to push the ball closer to the goal because it gives the other player less time to locate and steal the ball.

If the ball was not previously very large, then you are out of luck and must continue searching for it. A FindBall command is sent to the port:

        else
        {
            if (((_state.Team == referee.TeamIdentifier.BlueTeam) && foundBlueGoal) ||
                (_state.Team == referee.TeamIdentifier.RedTeam && foundYellowGoal) ||
                (_segments[0].Area < 2000) || (_segments[0].Area > 4000))
            {
                // either the goal is lined up for a shot or the ball is
                // too far away or too close to decide
                _state.Mode = SoccerPlayerMode.Approach;
                _previousArea = (int)_segments[0].Area;

                // calculate an offset from center that will be used to
                // calculate left/right drive power
                _state.BallOffset = 0.5f *
                    (_segments[0].X - 0.5f * _queryFrameRequest.Size.X) /
                    _queryFrameRequest.Size.X;

                _mainPort.Post(new ExecuteCommand(
                    new ExecuteCommandRequest(
                        SimpleSoccerPlayerCommand.ApproachBall)));
            }
            else
            {
                _state.Mode = SoccerPlayerMode.ReAlign;

                // we found the ball but we're not lined up properly
                // for a shot, realign
                _mainPort.Post(new ExecuteCommand(
                    new ExecuteCommandRequest(
                        SimpleSoccerPlayerCommand.Realign)));
            }
        }
        break;
    }
}

If the ball is in sight, then you check whether your target goal is also in sight. If it is or if the ball is very far away or very close, you continue to approach the ball. This gets you very close to the ball until you eventually kick it.

If the ball is in sight but the goal is not, then you attempt to reposition the player until the goal is in sight. This is done by sending the Realign command to the main port.

The implementation of each of these commands is in the ExecuteCommand handler. The ApproachBall and FindBall commands are fairly straightforward. FindBall sets the motor power so that the robot slowly spins. ApproachBall uses the position of the ball in the frame to set the left and right motor power to cause the player to move directly toward the ball.
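To make the offset-to-power idea concrete, here is a small, language-neutral sketch in Python. Only the offset formula comes from the code above; the base power, steering gain, and function names are invented for illustration:

```python
def ball_offset(ball_x, frame_width):
    """Offset from image center, scaled as in ProcessFrameAndDetermineBehavior:
    0.5 * (x - width/2) / width, giving a value in roughly [-0.25, 0.25]."""
    return 0.5 * (ball_x - 0.5 * frame_width) / frame_width

def drive_powers(offset, base_power=0.6, gain=2.0):
    """Hypothetical mixing: steer toward the ball by speeding up one wheel
    and slowing the other, clamped to the motors' valid power range."""
    clamp = lambda p: max(-1.0, min(1.0, p))
    return clamp(base_power + gain * offset), clamp(base_power - gain * offset)

# A centered ball (x = 160 in a 320-pixel frame) gives zero offset,
# so both wheels get equal power and the robot drives straight.
```

A ball at the right edge of the frame yields the maximum offset, so the left wheel saturates and the robot turns toward the ball.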

The Kick command is handled by spawning an iterator to execute a more complicated move. The KickBall iterator is shown here:

IEnumerator<ITask> KickBall()
{
    yield return Arbiter.Choice(
        _robotDrive.DriveDistance(0.1, 0.8),
        delegate(DefaultUpdateResponseType response) { },
        delegate(Fault f) { }
    );

    _state.Mode = SoccerPlayerMode.FindBall;
}

This iterator sends a DriveDistance command to the differential drive to drive forward 0.1 meters at a fast speed. The statement that sets the player mode back to FindBall doesn’t execute until the DriveDistance command has been completed. The player remains in the Kick mode until the kick motion has finished.
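The iterator pattern may be easier to see in miniature. The Python sketch below mimics the control flow with a generator: code after a yield runs only once the yielded operation has completed. The FakeDrive, State, and run names are hypothetical stand-ins, not MRDS or CCR APIs:

```python
def kick_ball(drive, state):
    # Each yield hands control back until the motion completes
    yield drive.drive_distance(0.1, 0.8)   # forward 0.1 m at high power
    state.mode = "FindBall"                # runs only after the drive finishes

class FakeDrive:
    def __init__(self):
        self.log = []
    def drive_distance(self, meters, power):
        self.log.append(("drive", meters, power))
        return "done"

class State:
    mode = "Kick"

def run(iterator):
    # Trivial scheduler: the CCR would resume the iterator when the
    # operation's response arrives; here we just drain it synchronously.
    for _ in iterator:
        pass

drive, state = FakeDrive(), State()
run(kick_ball(drive, state))   # state.mode is now "FindBall"
```

The key point carries over directly: the player stays in Kick mode for as long as the generator is suspended at the yield.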

The Realign command is also implemented with an iterator:

IEnumerator<ITask> RealignApproach()
{
    yield return Arbiter.Choice(
        _robotDrive.RotateDegrees(-45, 0.5),
        delegate(DefaultUpdateResponseType response) { },
        delegate(Fault f) { }
    );

    yield return Arbiter.Choice(
        _robotDrive.DriveDistance(0.3, 0.5),
        delegate(DefaultUpdateResponseType response) { },
        delegate(Fault f) { }
    );

    yield return Arbiter.Choice(
        _robotDrive.RotateDegrees(70, 0.5),
        delegate(DefaultUpdateResponseType response) { },
        delegate(Fault f) { }
    );

    _state.Mode = SoccerPlayerMode.FindBall;
}

This is very similar to the KickBall iterator except that it uses more motions. First the player rotates 45 degrees to the left, then drives forward 0.3 meters, and finally turns 70 degrees to the right. This enables the player to circle around the ball while drawing closer to it. The ProcessFrameAndDetermineBehavior method does not change the player's mode until this motion sequence has completed and the mode has been set back to FindBall.
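You can sanity-check the geometry of this maneuver with simple 2D dead reckoning. The sketch below is an illustration only; the sign convention (negative angles turn left, matching the text above) is an assumption:

```python
import math

def apply(pose, turn_deg=0.0, forward=0.0):
    """Turn in place, then drive straight along the new heading."""
    x, y, heading = pose
    heading += math.radians(turn_deg)
    x += forward * math.cos(heading)
    y += forward * math.sin(heading)
    return (x, y, heading)

# RealignApproach: rotate -45 degrees, drive 0.3 m, rotate +70 degrees
pose = (0.0, 0.0, 0.0)
pose = apply(pose, turn_deg=-45)
pose = apply(pose, forward=0.3)
pose = apply(pose, turn_deg=70)
# Net effect: about 0.21 m forward, about 0.21 m to one side, and a final
# heading of +25 degrees, i.e. the player sidesteps around the ball and
# ends up angled back toward where the ball was.
```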

The most significant change in behavior is that the player no longer moves the ball in a random direction. It attempts to align itself with the appropriate goal before advancing the ball, and rather than simply pushing the ball along, it attempts to kick it in the right direction.

The manifest simulatedsoccer.corobot.fourplayers.manifest.xml starts the soccer environment with four Corobot players. The red team runs the ProMRDSSimpleSoccerPlayer service, which is very similar to the sample service that Microsoft ships in the soccer package. The blue team runs the ProMRDSBetterSoccerPlayer service, which has the new code just described. In this scenario, the blue team consistently outscores the red team by a factor of 4 to 1, as shown in Figure 9-15.


Figure 9-15

Where to Go from Here

We’ve barely scratched the surface of a good soccer algorithm. Much more sophisticated behavior can be implemented using the same basic concepts illustrated. None of the players is aware of the position or movement of any of the other players. The vision algorithm can be enhanced to identify other players, and the player algorithm can take them into account.

In addition, no attempt is really made to find the position of the player other than to determine whether the target goal is in view. The vision algorithm can be enhanced to detect the posts at midfield, and their position and size in the frame can be used to determine the player's position on the field. This enables the player to make more intelligent decisions about how it will approach the ball and line itself up with the goal.

Finally, we haven't done anything at all with the goalkeepers. They could do much more than just sit in front of the goal: they could constantly track the ball and take action when it gets too close to the goal.

The ProMRDSBetterSoccerPlayer service represents an improvement over the basic service in the soccer package, but it is very primitive compared to what can be done.

The Corobot soccer services can provide a way for robotics clubs to organize a simulation-only competition to see who can come up with the best soccer player strategy.

In addition, the Corobot is available commercially and runs Microsoft Robotics Developer Studio Services, so with enough time and money it is even possible to build a real soccer field with real robots to run these algorithms; and then you’re well on your way to hosting your own RoboCup competition.

Exploring Virtual Worlds

The Holy Grail for many robotics researchers is to build a fully autonomous robot that can be set loose to explore its environment and build a map as it goes. It sounds simple enough, and you might even believe that this problem has already been solved based on what you have seen on television and in the movies. In fact, we are possibly going to perpetuate this myth with the example in this section. Unfortunately, Simultaneous Localization and Mapping (SLAM) is still an active area of research, and many real-world problems are yet to be solved.

The ExplorerSim application illustrates several aspects of autonomous exploration. Figure 9-16 shows a screenshot of the ExplorerSim in action as the robot wanders around building a map. The simulated Pioneer 3DX robot is at the center of the simulation window. The top two windows show the map (so far) and the view from the robot's camera.

Figure 9-16

Background

In the original version of MRDS, Microsoft supplied an example of an exploration service in samples\Misc\Explorer. This uses a laser range finder (LRF) to wander around at random. However, most people do not have a robot equipped with an LRF because they are quite expensive. The obvious solution was to use the simulator. (Do you have the impression yet that we like the simulator?)

The Explorer code uses the RotateDegrees and DriveDistance operations in the generic DifferentialDrive contract, but the original SimulatedDifferentialDrive didn’t support these. Another important piece of the puzzle was obtaining the robot’s pose — its position and orientation. A modified version of the SimulatedDifferentialDrive service solves both of these problems.

None of the Simulation Tutorials included with MRDS provide environments that are similar to a typical home or office. This is where the Maze Simulator comes in, as it enables you to easily build a virtual world for the robot to explore simply by creating a floor plan.


With the updated DifferentialDrive service and the Maze Simulator, you have the necessary components to begin exploring and mapping using the simulator.

The Modified DifferentialDrive Service

There isn't a lot to say about the modified version of the SimulatedDifferentialDrive service other than that it is required to run ExplorerSim. You can view the source code in ProMRDS\Chapter9\DifferentialDriveTT.

If you are recompiling ExplorerSim, you must compile the services in the following order because of the dependencies:

1. DifferentialDriveTT
2. MazeSimulator
3. ExplorerSim

The original reason for creating this service was to implement the RotateDegrees and DriveDistance operations. Although these operations were added to MRDS V1.5, there is a problem when the robot is driven at high speed because it sometimes misses the stopping point and keeps on going. Therefore, we still use our own version of the service.
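One common way to avoid that kind of overshoot is to clamp the distance moved on the final update step so the total never exceeds the target. The following Python sketch illustrates the idea only; it is not the actual entity code, and the tick length is invented:

```python
def drive_distance_steps(target_m, speed_mps, dt=0.05):
    """Simulate DriveDistance one update tick at a time, clamping the
    final step so the robot stops exactly on the target distance."""
    traveled, positions = 0.0, []
    while traveled < target_m:
        step = min(speed_mps * dt, target_m - traveled)  # clamp last step
        traveled += step
        positions.append(traveled)
    return positions

path = drive_distance_steps(0.1, 0.8)  # 0.8 m/s covers 0.04 m per 50 ms tick
# The final position lands on the 0.1 m target instead of sailing past it,
# which is what happens without the clamp at high speed.
```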

The modified Drive service also supplies the current pose of the robot, allowing the mapping to work. This is clearly cheating! In real life, a robot rarely knows where it is. In fact, this is one of the problems that SLAM is supposed to solve (the “localization” step).

You might immediately think of using a GPS, but this isn’t a suitable solution. GPS receivers don’t work indoors; and even if they did, they are only accurate to the nearest 5 or 10 meters. In an office environment, a 5-meter error could put you in the office next door, or even outside the building.

Changing a pre-defined entity in the simulator is a little complicated. Although the source for all the entities is supplied in samples\simulation\Entities\Entities.cs, you cannot just edit this file and recompile it because it is built into the simulator.

Consequently, a new entity has to be defined with a different name. In SimDiffDriveEntities.cs you will find the new differential drive entity:

[Category("DifferentialDrive")]
[BrowsableAttribute(false)] // prevent from being displayed in NewEntity dialog
public class TTDifferentialDriveEntity : VisualEntity

The Update method for this entity is responsible for implementing the RotateDegrees and DriveDistance operations, as shown previously in Chapter 6, so it is not covered here.

Changing the name of this entity has a ripple effect, which forces you to create new robot entities as well because they are built on the differential drive:

[DataContract]
[DataMemberConstructor]
[CLSCompliant(true)]
//A new entity is required because it uses a new
//version of the Simulated Differential Drive
public class TTPioneer3DX : TTDifferentialDriveEntity

The final important change is in the UpdateStateFromSimulation method in SimulatedDifferentialDrive.cs. The current position of the robot is inserted into the left and right wheel motor states. This is a hack, but it works. Note that the same pose is inserted for both wheels, but in fact the wheels are on either side of the robot’s center by half of the wheelbase. Because you don’t care about the actual wheel positions, the position of the robot’s center is good enough:

void UpdateStateFromSimulation()
{
    if (_entity != null)
    {
        _state.TimeStamp = DateTime.Now;
        _state.LeftWheel.MotorState.CurrentPower =
            _entity.LeftWheel.Wheel.MotorTorque;
        _state.RightWheel.MotorState.CurrentPower =
            _entity.RightWheel.Wheel.MotorTorque;

        //Construct a new pose
        Pose p = new Pose();
        Vector3 posn = new Vector3(
            _entity.Position.X,
            _entity.Position.Y,
            _entity.Position.Z);

        //We need to create a quaternion for the orientation
        Quaternion q = new Quaternion();

        //Rotations are about the Y axis because we are moving in 2D
        //which happens to be the X-Z plane. The Y axis is (0,1,0).
        //Figure out the orientation angle in Radians
        Double angle = _entity.Rotation.Y * Math.PI / 180;

        //Now create the quaternion
        q.X = 0;
        q.Y = (float)Math.Sin(angle / 2);
        q.Z = 0;
        q.W = (float)Math.Cos(angle / 2);

        //Finally, insert the values into the pose
        p.Position = posn;
        p.Orientation = q;

        //NOTE: This is NOT quite correct!!!
        //The actual pose of the wheels should differ by the wheelbase.
        _state.LeftWheel.MotorState.Pose = p;
        _state.RightWheel.MotorState.Pose = p;
    }
}

Now it is a simple matter of getting the robot’s state whenever you want to know its pose.
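The quaternion arithmetic above is easy to verify independently. This small Python sketch builds the same Y-axis rotation quaternion and inverts it; the function names are ours, not MRDS APIs:

```python
import math

def yaw_quaternion(degrees):
    """Rotation of `degrees` about the Y axis (0,1,0): the quaternion
    components are (0, sin(a/2), 0, cos(a/2)), as in the C# code above."""
    a = math.radians(degrees)
    return (0.0, math.sin(a / 2), 0.0, math.cos(a / 2))  # (X, Y, Z, W)

def quaternion_to_yaw(q):
    """Inverse, valid only for pure Y-axis rotations like these."""
    x, y, z, w = q
    return math.degrees(2 * math.atan2(y, w))

q = yaw_quaternion(90)
# quaternion_to_yaw(q) round-trips back to 90 degrees
```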


The Maze Simulator

The Maze Simulator was written in the early days of the MRDS V1.0 Community Technology Previews. It enables fairly complex environments to be created, such as the one shown in Figure 9-17. These are much more interesting to drive around in than the Simulation Tutorials.

Figure 9-17

To make a new environment, you simply create a bitmap image with the floor plan, something like what is shown in Figure 9-18, and then edit the config file for the Maze Simulator to specify the new maze image.

Figure 9-18

The config file is called ProMRDS\Config\MazeSimulator.config.xml and an example is shown here, followed by a bit of explanation:

<?xml version="1.0" encoding="utf-8"?>
<MazeSimulatorState xmlns:s="http://www.w3.org/2003/05/soap-envelope"
    xmlns:wsa="http://schemas.xmlsoap.org/ws/2004/08/addressing"
    xmlns:d="http://schemas.microsoft.com/xw/2004/10/dssp.html"
    xmlns="http://schemas.tempuri.org/2006/08/mazesimulator.html">
  <Maze>ModelLarge.bmp</Maze>
  <GroundTexture>cellfloor.jpg</GroundTexture>

The Maze tag specifies the filename for the maze bitmap. The file itself can be created as a Windows bitmap (.BMP file) using Windows Paint. It can actually be any image format, but only GIF and BMP guarantee that no information is lost, so don’t use JPG format because it uses lossy compression.

The example configuration file contains just a filename with no path. If you want to specify an explicit path, just begin the filename with a backslash (or a slash), e.g., \ProMRDS\Config\MyMaze.bmp. If there is no leading slash, then the path will default to the store\media\Maze_Textures folder.

You can also specify a texture for the floor. This texture bitmap must reside in the store\media directory (or a subfolder of it).

Next in the configuration file are the Wall Textures and Wall Colors. The maze bitmap is examined by the Maze Simulator and used to create blocks in the simulated world based on the colors of the pixels in the bitmap. (This should be obvious from Figure 9-19, although you can't see the colors in the black-and-white screenshot.) In other words, the walls are color-coded and you can have different types of walls.

The Maze Simulator divides the RGB color space into 16 different colors, giving you 15 usable block types because one color has to be the floor. The list of colors recognized by the Maze Simulator is shown in the following example. (If you want to see the standard 16 colors, you can look at ColorPalette16.bmp in the Maze Simulator source directory.) The values from 0 to 7 are the fully saturated colors; 8 to 15 are their darker equivalents. The Maze Simulator matches pixel values in the maze bitmap to the closest of these 16 colors in order to select one of the 16 block types.

public enum BasicColor : byte
{
    Black    = 0,
    Red      = 1,
    Lime     = 2,
    Yellow   = 3,
    Blue     = 4,
    Magenta  = 5,
    Cyan     = 6,
    White    = 7,
    DarkGrey = 8,
    Maroon   = 9,
    Green    = 10,
    Olive    = 11,
    Navy     = 12,
    Purple   = 13,
    Cobalt   = 14,
    Grey     = 15
}
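The matching step can be sketched as a nearest-color search over that palette. In the Python sketch below, the RGB values for the darker shades are assumptions (the simulator's exact palette values and matching rule may differ):

```python
# Assumed RGB values: fully saturated colors use 0/255 channels,
# darker equivalents use 128 (64 for DarkGrey, a guess).
PALETTE = {
    "Black": (0, 0, 0),       "Red": (255, 0, 0),
    "Lime": (0, 255, 0),      "Yellow": (255, 255, 0),
    "Blue": (0, 0, 255),      "Magenta": (255, 0, 255),
    "Cyan": (0, 255, 255),    "White": (255, 255, 255),
    "DarkGrey": (64, 64, 64), "Maroon": (128, 0, 0),
    "Green": (0, 128, 0),     "Olive": (128, 128, 0),
    "Navy": (0, 0, 128),      "Purple": (128, 0, 128),
    "Cobalt": (0, 128, 128),  "Grey": (128, 128, 128),
}

def nearest_basic_color(pixel):
    """Pick the palette entry with the smallest squared RGB distance."""
    def dist2(a, b):
        return sum((ca - cb) ** 2 for ca, cb in zip(a, b))
    return min(PALETTE, key=lambda name: dist2(PALETTE[name], pixel))

# A slightly off-red pixel such as (250, 10, 5) still maps to "Red",
# which is why hand-drawn maze bitmaps don't need exact palette colors.
```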
