
Chapter 9: Adventures in Simulation

The floor color is always taken to be the value of the top-left pixel. In Figure 9-19, the top-left pixel is white because there is a white border around the outside wall, which is black. It is a good idea to have a wall enclosing the entire environment so that your robot can’t run away!

For each of the 16 pixel colors, you can specify either a texture file or a color as an RGB value for the corresponding wall blocks. Textures always override colors. To use a color, leave the corresponding texture empty, i.e., <string />.

The values in the following config file equate to cyan (light blue) for block type 0 (which is black pixels in the maze bitmap) and red for block type 1 (which is red pixels). Ignore the fact that the configuration file says XYZ — that is just a side-effect of using a Vector3. In this case, however, the cyan color is overridden by the water.jpg texture. Cyan is only there in case the texture file is missing.

<WallTextures>
  <string>Maze_Textures/water.jpg</string>
  <string />
  ... (Up to 16 strings)
</WallTextures>
<WallColors>
  <Vector3>
    <X xmlns="http://schemas.microsoft.com/robotics/2006/07/physicalmodel.html">0</X>
    <Y xmlns="http://schemas.microsoft.com/robotics/2006/07/physicalmodel.html">255</Y>
    <Z xmlns="http://schemas.microsoft.com/robotics/2006/07/physicalmodel.html">255</Z>
  </Vector3>
  <Vector3>
    <X xmlns="http://schemas.microsoft.com/robotics/2006/07/physicalmodel.html">255</X>
    <Y xmlns="http://schemas.microsoft.com/robotics/2006/07/physicalmodel.html">0</Y>
    <Z xmlns="http://schemas.microsoft.com/robotics/2006/07/physicalmodel.html">0</Z>
  </Vector3>
  ... (Up to 16 Vector3s)
</WallColors>

Note that there is no requirement to use the same wall colors as the pixel colors in the maze bitmap, although you will probably confuse yourself if you make them different!

Each of the block types has an associated height in the HeightMap (which is scaled as explained below) and a mass (in kilograms) in the MassMap. A mass of zero is treated as infinite, so the robot will never be able to move blocks of that type. It is fun to create blocks that can be pushed around.

<HeightMap>
  <float>0.01</float>
  <float>1.5</float>
  ... (Up to 16 floats)
</HeightMap>
<MassMap>
  <float>0</float>
  <float>2</float>
  ... (Up to 16 floats)
</MassMap>


The next few properties control the geometry. The UseSphere flags, one per block type, enable you to create balls instead of blocks. Note in the following example that the second block type is a sphere, and from the MassMap just shown it has a mass of 2kg and a size in the HeightMap of 1.5 meters (possibly scaled). You can see some balls in Figure 9-18 with a checkerboard texture applied to them. When you run the Maze Simulator you can push them around.

<UseSphere>
  <boolean>false</boolean>
  <boolean>true</boolean>
  ... (Up to 16 booleans)
</UseSphere>
<WallBoxSize>0.98</WallBoxSize>
<GridSpacing>0.1</GridSpacing>
<HeightScale>0.1</HeightScale>
<DefaultHeight>10</DefaultHeight>
<SphereScale>1</SphereScale>

The remaining parameters are used to scale the blocks so that you can get a realistic size without having to create an enormous maze bitmap.

The WallBoxSize is deliberately a little less than the GridSpacing so that different blocks do not overlap, although adjacent blocks of the same type are combined. The simulator frame rate decreases as more objects are added to the simulation, and it is very easy to create hundreds or thousands of blocks with the Maze Simulator. The simulator does its best to combine blocks, but there can still be a serious impact on performance. The final number of blocks is displayed in the console window when the Maze Simulator starts. If it runs very slowly, try to simplify your environment by using only horizontal and vertical lines (East-West and North-South) in the maze bitmap. Lines that are at an angle or curved are approximated by many separate blocks; in the top-right corner of Figure 9-18, you can see this in the light-colored wall, which is at 45 degrees.
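
To see why straight runs of wall are so much cheaper than angled or curved ones, consider how adjacent cells of the same block type can be collapsed into a single box. The following C# fragment is only a sketch of that idea, not the Maze Simulator's actual code (which is more elaborate and can also merge in the other direction); the method name and types are invented for illustration.

using System.Collections.Generic;

static class WallMerger
{
    // Collapse one row of the maze bitmap into merged wall segments.
    // Each segment covers a run of adjacent cells with the same block type,
    // so a straight wall of 50 cells becomes one box instead of 50.
    public static List<(int BlockType, int StartCol, int Length)> MergeRow(int[] blockTypes)
    {
        var segments = new List<(int, int, int)>();
        int runStart = 0;
        for (int col = 1; col <= blockTypes.Length; col++)
        {
            // Close the current run when the block type changes or the row ends.
            if (col == blockTypes.Length || blockTypes[col] != blockTypes[runStart])
            {
                segments.Add((blockTypes[runStart], runStart, col - runStart));
                runStart = col;
            }
        }
        return segments;
    }
}

A diagonal or curved wall defeats this kind of merging because each wall cell has empty neighbors within its row, so every cell ends up as its own box.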

Finally, you can select either the Pioneer 3DX robot or the LEGO NXT robot using the RobotType and specify where it should start in terms of grid cells using RobotStartCellRow and RobotStartCellCol. The position (0, 0) is at the center of the maze:

<RobotStartCellRow>0</RobotStartCellRow>
<RobotStartCellCol>0</RobotStartCellCol>
<RobotType>Pioneer3DX</RobotType>
</MazeSimulatorState>

That’s it! You have defined all of the necessary parameters to create a virtual maze-like environment. All you need to do is use the Dashboard to drive your robot around and see whether you like your new construction. You can use ProMRDS\chapter9\mazesimulator\mazesimulator.cmd to start the Maze Simulator and the Dashboard together.

The ExplorerSim Application

With a maze already built, you can now run the ExplorerSim program to let the robot explore and build a map. The batch file to run ExplorerSim is ProMRDS\chapter9\explorersim\explorersim.cmd.


When ExplorerSim starts up, it also starts a Dashboard. You don’t need to use the Dashboard unless the robot gets stuck. The exploration algorithm is not much more than a random walk, and sometimes the robot will oscillate backward and forward as though it can’t make up its mind. In that case, you can drive it somewhere else, although you will find that you are fighting with ExplorerSim!

The main reason for running the Dashboard is that it can be used to display the LRF data and show the view from the webcam on the robot. However, ExplorerSim does not use the webcam images, only the LRF data.

ExplorerSim also creates a Map window, which, after all, is the main purpose of exploring. An example is shown in Figure 9-19 for a partially completed map that corresponds to the maze bitmap in Figure 9-18. Compare the two and see whether you can see the similarities.

Figure 9-19

ExplorerSim can create three different types of maps, which use different algorithms for drawing new data into the map. It is beyond the scope of this book to explain mapping algorithms, but a brief introduction might be useful.

The map in Figure 9-19 is what is known as an occupancy grid. This is a type of metric map in which the world is broken up into a set of grid cells of equal size. In the case of Figure 9-19, each pixel represents a five-centimeter square. The values in the cells represent the probability that the cell is vacant, and the common convention is that black means occupied and white means empty. Shades of gray, therefore, indicate how uncertain the robot is of the cell’s occupancy. Initially, the whole map is set to gray (a 50% probability of being occupied).
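
As a concrete illustration of the data structure, an occupancy grid can be little more than a 2D array of bytes, initialized to the gray "unknown" value and indexed by converting world coordinates in meters to cell indices using the map resolution. This is not ExplorerSim's actual code; the class and member names below are hypothetical.

using System;

// Minimal occupancy grid: 255 = certainly vacant (white), 0 = certainly
// occupied (black), 128 = unknown (gray, a 50% probability).
class OccupancyGrid
{
    private readonly byte[,] cells;
    private readonly double resolution;       // cell size in meters, e.g. 0.05
    private readonly double originX, originY; // world position of the lower-left corner

    public OccupancyGrid(double widthMeters, double heightMeters, double resolution)
    {
        this.resolution = resolution;
        originX = -widthMeters / 2;           // put world (0, 0) at the center of the map
        originY = -heightMeters / 2;
        int cols = (int)Math.Round(widthMeters / resolution);
        int rows = (int)Math.Round(heightMeters / resolution);
        cells = new byte[rows, cols];
        for (int r = 0; r < rows; r++)
            for (int c = 0; c < cols; c++)
                cells[r, c] = 128;            // everything starts out unknown
    }

    // Convert a robot or obstacle position in meters to grid indices.
    public (int Row, int Col) WorldToCell(double x, double y) =>
        ((int)((y - originY) / resolution), (int)((x - originX) / resolution));
}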

The map dimensions can be set by editing the config file ProMRDS\Config\ExplorerSim.config.xml, shown here:

<?xml version="1.0" encoding="utf-8"?>
<State xmlns:s="http://www.w3.org/2003/05/soap-envelope"
       xmlns:wsa="http://schemas.xmlsoap.org/ws/2004/08/addressing"
       xmlns:d="http://schemas.microsoft.com/xw/2004/10/dssp.html"
       xmlns="http://schemas.microsoft.com/robotics/2007/06/explorersim.html">
  <Countdown>0</Countdown>
  <LogicalState>Unknown</LogicalState>
  <NewHeading>0</NewHeading>
  <Velocity>0</Velocity>
  <Mapped>false</Mapped>
  <MostRecentLaser>2007-07-14T18:25:32.2205888+10:00</MostRecentLaser>
  <X>0</X>
  <Y>0</Y>
  <Theta>0</Theta>
  <DrawMap>true</DrawMap>
  <DrawingMode>BayesRule</DrawingMode>
  <MapWidth>24</MapWidth>
  <MapHeight>18</MapHeight>
  <MapResolution>0.05</MapResolution>
  <MapMaxRange>7.99</MapMaxRange>
  <BayesVacant>160</BayesVacant>
  <BayesObstacle>96</BayesObstacle>
</State>

Notice that you can set the MapWidth and MapHeight (in meters) as well as the MapResolution (which is the grid cell size, also in meters).
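
With the values shown above, each cell is 5 centimeters square, so the 24 m × 18 m map works out to 24 / 0.05 = 480 cells across and 18 / 0.05 = 360 cells down.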

Three drawing modes can be specified using the DrawingMode parameter:

Overwrite: This simply draws a ray in the map from the robot’s current position out to the point where the laser hits an obstacle. It does this for all range scans, which are typically one degree apart with a field of view of 180 degrees. As the robot moves, the map is overwritten with new data. Note that if the robot lost track of its position, then it would start drawing map information in the wrong place and potentially destroy a perfectly good map! The overwrite method only works when you have near-perfect pose information, such as in simulation.

Counting: This is a little more robust. Each time the robot processes a laser scan, it increments the cell values along the laser ray and decrements the value at the point where the ray hits an obstacle. Eventually, if the robot sees the same things often enough, the map will “develop,” just like developing a film — the obstacles become darker and the free space becomes lighter.

Bayes Rule: Bayes Rule is based on probabilities. It enables the probability value in a cell to be updated based on new information and a belief in the accuracy of the new information. For this particular exercise, there is not much difference between Bayes Rule and the Counting method. However, in the real world the robot would have a Measurement model that assigned probability values to new range-scan data and a Motion model that estimated the true position after each move. For more information about this field, see Probabilistic Robotics by Thrun, Burgard, and Fox (MIT Press, 2005).
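
The fragment below sketches how the Counting and Bayes Rule updates might modify a single cell of the grayscale grid. It is an illustration only, not the ExplorerSim code: the step size of 16 and the blending weight are invented, and a proper Bayesian update works with probabilities rather than a simple blend.

using System;

static class MapUpdates
{
    // Counting: nudge a cell toward vacant (lighter) when a laser ray passes
    // through it, and toward occupied (darker) at the point where the ray ends.
    public static byte CountingUpdate(byte cell, bool rayEndsHere) =>
        rayEndsHere
            ? (byte)Math.Max(cell - 16, 0)      // obstacle hit: darker
            : (byte)Math.Min(cell + 16, 255);   // free space along the ray: lighter

    // Bayes-style: blend the old belief with what the sensor suggests for this
    // cell (for example, BayesVacant, 160, along the ray, or BayesObstacle, 96,
    // at the hit point), weighted by how much the sensor reading is trusted.
    public static byte BayesUpdate(byte cell, byte sensorValue, double weight = 0.5) =>
        (byte)(cell * (1 - weight) + sensorValue * weight);
}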

The last point to make about ExplorerSim, which will become quite obvious if you watch it for a while, is that it has a very poor exploration algorithm. It takes a very long time for the robot to explore the entire maze because it keeps covering ground that it has already seen.

Where to Go from Here

There are literally dozens of different directions that you could follow from here, limited only by your imagination.


The robot has a webcam, but the exploration algorithm does not use vision. Using a camera introduces complications with perspective, and it is not easy to measure distances accurately from images. One way to process images is to use the RoboRealm software (www.roborealm.com), which is available free and is compatible with MRDS.

You could also try using sensors other than the LRF. In Chapter 6, there is a Simulated IR sensor, but you would need several of these for map building. Bear in mind that most LRFs supply hundreds of data points in each scan, which you can use to add new information to the map. Some of the information is redundant, but it is not possible to get the same density of information with IR.

Raúl Arrabales Moreno has written a simulated sonar sensor service for the Pioneer 3DX and adapted the ExplorerSim to use it. The code is available on the Conscious Robots website (www.conscious-robots.com).

The exploration algorithm in the ExplorerSim is not good (it was inherited from the original Explorer sample). Exploration algorithms are another active research area. Maze solving is often taught in classes on artificial intelligence, and it is a form of exploration. You could definitely improve the exploration process by coming up with an approach that tries to direct the robot toward fresh, unexplored areas. You just need to realize that sometimes you will find a wall in your way!
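
One standard way to steer the robot toward fresh, unexplored areas is frontier-based exploration: choose as targets the cells that are known to be free but border unknown territory, and drive to the nearest one. The sketch below only illustrates the frontier test; it is not part of ExplorerSim, and the thresholds are invented to match the grayscale convention described earlier (high values vacant, mid-gray unknown).

using System.Collections.Generic;

static class Frontier
{
    // A frontier cell is confidently vacant but has at least one unknown
    // (gray) neighbor, i.e. it sits on the boundary of explored space.
    public static List<(int Row, int Col)> FindFrontierCells(byte[,] map)
    {
        var frontiers = new List<(int, int)>();
        int rows = map.GetLength(0), cols = map.GetLength(1);
        for (int r = 1; r < rows - 1; r++)
        {
            for (int c = 1; c < cols - 1; c++)
            {
                if (map[r, c] < 160) continue;   // not confidently vacant
                bool bordersUnknown =
                    IsUnknown(map[r - 1, c]) || IsUnknown(map[r + 1, c]) ||
                    IsUnknown(map[r, c - 1]) || IsUnknown(map[r, c + 1]);
                if (bordersUnknown) frontiers.Add((r, c));
            }
        }
        return frontiers;
    }

    // "Unknown" means still close to the initial 50% gray value.
    private static bool IsUnknown(byte value) => value > 112 && value < 144;
}

A planner would then pick the closest frontier cell (or the one with the most unknown neighbors) and navigate toward it, which keeps the robot from repeatedly covering ground it has already mapped.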

Real robots don’t always know their position, so there are opportunities for SLAM (Simultaneous Localization and Mapping) researchers to develop algorithms that do not use the pose information from the simulator. This is a difficult task.

Lastly, we have not dealt with moving obstacles at all. People wandering around a building might get in the way of robots. It doesn’t make any sense to put an obstacle in the map if it is a person, because it probably won’t be there the next time you come back.

Summary

This chapter has provided several examples of simulation projects. The sumo and soccer environments provide an opportunity to immediately code behavior for a robot without having to define the entire environment. The hexapod project shows how to implement a walking, articulated robot that could be evolved into a quadruped or biped walking robot. The Explorer project shows how to create a complete simulation environment that can be navigated by a robot model.

It is hoped that these projects have provided some inspiration for your own simulation ideas and some concrete examples that will aid you in your development.


Part III: Visual Programming Language

Chapter 10: Microsoft Visual Programming Language Basics

Chapter 11: Visually Programming Robots

Chapter 12: Visual Programming Examples


Chapter 10: Microsoft Visual Programming Language Basics

The Microsoft Visual Programming Language (VPL) is a new application development environment designed specifically to work with DSS services. Programs are defined graphically in data flow diagrams rather than the typical sequence of commands and instructions found in other programming languages. VPL provides an easy way to quickly define how data flows between services. It is useful for beginning programmers because they can quickly specify their intent, but it is also well suited to help with prototyping and code generation for more experienced programmers. It is also useful for specifying robotic orchestration services within the context of the Microsoft Robotics Developer Studio SDK, but it can be used outside of robotics as well.

This chapter covers what it means to work with a data flow language, how to specify a data flow diagram using the basic activities provided by VPL, and how to debug and run it. You won’t find any robots in this chapter, but you will learn how to use VPL to control robots in Chapter 11.

What Is a Data Flow Language?

Many people learn how to program using an imperative language such as BASIC or C++. Such a language uses control statements to modify program state. This matches the underlying hardware implementation of the CPU well because it is built to execute machine code statements that modify memory.

When a program is data- or event-driven and has different parts that execute asynchronously, the imperative programming model can become inefficient and difficult to use. The following example shows why this might sometimes be true.

Imagine a program as a city grid. Each intersection represents some processing that must be done, and cars on the roads between intersections represent data, as shown in Figure 10-1.


Figure 10-1

A typical imperative program might take the following form:

For each intersection
    While not enough cars are waiting
        Sleep for a time
    Do some calculations and call the next intersection routine with the results

The calculations done at each intersection can be arbitrarily complex and may depend on the state of multiple cars waiting at the intersection. This approach can lead to inefficiency because the instructions associated with each intersection must periodically run to determine whether enough cars are waiting. In addition, because cars may arrive at each intersection at an arbitrary time, race conditions can arise if the processing instructions at each intersection are not carefully defined.

In a data flow language, the emphasis is on following the cars rather than processing the intersections. A data flow diagram for the city example above would look much like the city grid, with the roads representing data connections and the intersections representing processing activities. None of the intersection code runs until it is presented with data.

Let’s say that data enters the diagram at the bottom and approaches intersection A. The code for intersection A runs to process the data and puts the resulting state on the way to intersection B. The code for intersection B then runs and sends the modified state to intersection C. Intersection C is programmed to wait until data is available on at least two of its roads before passing the resulting data off to the left and out of the grid. Now that there is no data in the grid, no code runs until more data is available.

Multiple cars in the grid represent multiple data packets being processed simultaneously. Because the calculations at each intersection depend only on the cars waiting for it, they can run at the same time as another intersection.

It is simpler to write a program for each intersection than it is to write instructions for managing the entire grid. Similarly, it is simpler and more efficient to write small segments of code that respond to data inputs than it is to write code that attempts to control every intersection simultaneously.
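
To make the contrast concrete, here is a small sketch of the same city grid written in a data flow style using the .NET TPL Dataflow library (the System.Threading.Tasks.Dataflow package). This is not VPL, and the topology and messages are invented for illustration, but the principle is the same: the code for each intersection runs only when data arrives, and intersection C is a join that waits for data on two of its roads.

using System;
using System.Threading;
using System.Threading.Tasks.Dataflow; // NuGet package: System.Threading.Tasks.Dataflow

class CityGrid
{
    static void Main()
    {
        // Intersections A and B run only when a "car" (message) reaches them.
        var a = new TransformBlock<string, string>(car => car + " via A");
        var b = new TransformBlock<string, string>(car => car + " via B");

        // Intersection C waits until a car is available on both of its roads.
        var c = new JoinBlock<string, string>();

        // The pair of cars leaves the grid here.
        var exit = new ActionBlock<Tuple<string, string>>(pair =>
            Console.WriteLine($"Leaving the grid: {pair.Item1} + {pair.Item2}"));

        // Wire up the roads between intersections.
        a.LinkTo(b);
        b.LinkTo(c.Target1);
        c.LinkTo(exit);

        // Two cars enter the grid: one at A, one on C's other road.
        a.Post("car 1");
        c.Target2.Post("car 2");

        Thread.Sleep(500); // crude wait for the asynchronous blocks to finish
    }
}

Notice that there is no polling loop: nothing runs at B or C until a message arrives, and independent intersections could be processing different cars at the same time.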
