Chapter 12: Visual Programming Examples
A line drawn as a texture on the ground plane tends to look blurry or pixelated when viewed edge-on. Objects rendered using triangles stay much sharper, so the authors decided to build a utility that converts a 2D line drawing into a 3D mesh that can be inserted into the environment.
You can skip this section if you are only interested in the VPL implementation of the line-following algorithm.
Using the DSS Command-Line Tools
The source code for this utility is in ProMRDS\Chapter12\LineMesh. This utility shows how to use the DSS command-line tools to build console applications with a consistent command-line parameter style. In the file LineMesh.cs, a class called CommandArguments is defined that contains all of the variables to be filled in by the command-line parameters. Each member variable has an attribute that specifies how the parameter can appear on the command line, whether it is required, whether it may appear at most once or many times, and some help text for the parameter. The attribute declarations are wrapped across several lines here to fit the page:
class CommandArguments
{
    internal CommandArguments()
    {
    }

    [Argument(ArgumentType.Required, ShortName = "i", LongName = "input",
        HelpText = "image file to convert to a .obj mesh. " +
        "The wildcard characters * and ? are allowed.")]
    public string inFilename;

    [Argument(ArgumentType.AtMostOnce, ShortName = "o", LongName = "output",
        HelpText = "Optional output filename. A .obj extension is not required. " +
        "If this parameter is not specified, the output file will default to " +
        "the same name as the input file but with a .obj extension.")]
    public string outFilename;

    [Argument(ArgumentType.AtMostOnce, ShortName = "x", LongName = "xOffset",
        HelpText = "Optional X offset for the mesh")]
    public string xOffset;

    [Argument(ArgumentType.AtMostOnce, ShortName = "z", LongName = "zOffset",
        HelpText = "Optional Z offset for the mesh")]
    public string zOffset;

    [Argument(ArgumentType.AtMostOnce, ShortName = "s", LongName = "scale",
        HelpText = "Optional scale multiplier for the mesh. " +
        "The default size is 10 cm per pixel.")]
    public string scale;

    [Argument(ArgumentType.AtMostOnce, ShortName = "h", LongName = "height",
        HelpText = "Optional height multiplier for the mesh. " +
        "The default height is 10 cm.")]
    public string height;
}
You can see from the preceding code that this utility supports a required input filename; all of the remaining parameters are optional. The output filename can be used to name the output mesh and material files. The X and Z offsets are used to specify where the mesh appears relative to the center of the entity with which it is associated. The scale parameter is used to define how large each pixel in the image appears in the Simulation Environment. Finally, the height parameter scales how tall the line geometry is. How tall, you ask? The pixels in the input image are converted to boxes in the Simulation Environment, rather than just flat polygons, to make it easier to position the lines above the ground without a gap between the top of the line and the ground.
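For example, a command line that uses most of these parameters might look like the following (the file names and values here are only placeholders):

LineMesh /i:track.bmp /o:track /x:0.5 /z:-1.0 /s:0.2 /h:0.5

This converts track.bmp into track.obj and track.mtl, adds 0.5 and -1.0 to the generated X and Z vertex coordinates, makes each pixel 2 cm across, and halves the default box height.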
The command-line parameters are processed in the ParseCommandArgs method. The default values are set first, and the call to Parser.ParseArgumentsWithUsage then assigns a value to each CommandArguments member variable that is referenced on the command line:
private static CommandArguments ParseCommandArgs(string[] args)
{
    CommandArguments parsedArgs = new CommandArguments();
    parsedArgs.inFilename = null;
    parsedArgs.outFilename = null;
    parsedArgs.xOffset = "0";
    parsedArgs.zOffset = "0";
    parsedArgs.scale = "1";
    parsedArgs.height = "1";

    if (!Parser.ParseArgumentsWithUsage(args, parsedArgs))
    {
        return null;
    }
    return parsedArgs;
}
Generating a Mesh from an Image File
The image file is converted in the ConvertFile method. The input file is read into a bitmap object. The utility supports .BMP, .GIF, .JPG, .EXIF, .PNG, and .TIFF file formats because that is what the .NET library bitmap object supports. It is better to use a lossless format such as .BMP or .TIFF because lossy formats like .JPG introduce pixels that have varying colors due to compression artifacts, and more unique colors means that more materials need to be generated, which can make the output files very large.
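The bitmap loading and output file setup are not shown in the listing that follows. A minimal sketch of that setup, using placeholder variable names rather than the exact code from ConvertFile (and assuming the scale and height parameters have already been parsed into doubles), might look like this:

// load the input image; any format supported by the .NET Bitmap class works
Bitmap input = new Bitmap(inFilename);

// derive the .obj and .mtl output names from the input name or the /o parameter
string objName = Path.ChangeExtension(outFilename ?? inFilename, ".obj");
string mtlName = Path.ChangeExtension(objName, ".mtl");
StreamWriter objWriter = new StreamWriter(objName);
StreamWriter mtlWriter = new StreamWriter(mtlName);

// the .obj file references its material library by name
objWriter.WriteLine("mtllib " + Path.GetFileName(mtlName));

// 10 cm per pixel and 10 cm tall boxes by default, modified by /s and /h
double pixelSize = 0.1 * scale;
double yOffset = 0.05 * height;

// materials already emitted, keyed by material name
Dictionary<string, Color> colors = new Dictionary<string, Color>();
int vertexOffset = 0;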
The code steps through each pixel in the bitmap. It checks whether it has seen that color before and if not, generates a material definition for that color pixel that goes in the .mtl output file. The cube geometry for that pixel is then generated and output to the .obj file, as shown in the following code:
for (int z = 0; z < input.Height; z++)
{
    for (int x = 0; x < input.Width; x++)
    {
        Color pixel = input.GetPixel(x, z);
        if ((pixel.R == 0) && (pixel.G == 0) && (pixel.B == 0))
            continue;

        string matName = GetMaterialName(pixel);
        if (!colors.ContainsKey(matName))
        {
            colors.Add(matName, pixel);
            mtlWriter.WriteLine("newmtl " + matName);
            mtlWriter.WriteLine(string.Format("Kd {0:0.00} {1:0.00} {2:0.00}",
                pixel.R / 255.0, pixel.G / 255.0, pixel.B / 255.0));
            mtlWriter.WriteLine("Ka 0.00 0.00 0.00");
        }

        // find the largest rectangular area covered by this color
        int xr = 1;
        int zr = 1;
        FindRectangle(x, z, input, out xr, out zr);
        ClearRectangle(x, z, xr, zr, input);

        objWriter.WriteLine(string.Format("v {0} {1} {2}",
            x * pixelSize + xOffset, yOffset, zr * pixelSize + zOffset));
        objWriter.WriteLine(string.Format("v {0} {1} {2}",
            xr * pixelSize + xOffset, yOffset, zr * pixelSize + zOffset));
        objWriter.WriteLine(string.Format("v {0} {1} {2}",
            x * pixelSize + xOffset, -yOffset, zr * pixelSize + zOffset));
        objWriter.WriteLine(string.Format("v {0} {1} {2}",
            xr * pixelSize + xOffset, -yOffset, zr * pixelSize + zOffset));
        objWriter.WriteLine(string.Format("v {0} {1} {2}",
            x * pixelSize + xOffset, -yOffset, z * pixelSize + zOffset));
        objWriter.WriteLine(string.Format("v {0} {1} {2}",
            xr * pixelSize + xOffset, -yOffset, z * pixelSize + zOffset));
        objWriter.WriteLine(string.Format("v {0} {1} {2}",
            x * pixelSize + xOffset, yOffset, z * pixelSize + zOffset));
        objWriter.WriteLine(string.Format("v {0} {1} {2}",
            xr * pixelSize + xOffset, yOffset, z * pixelSize + zOffset));

        objWriter.WriteLine("usemtl " + matName);
        objWriter.WriteLine(string.Format("f {0}/1/1 {1}/2/2 {2}/4/3 {3}/3/4",
            1 + vertexOffset, 2 + vertexOffset,
            4 + vertexOffset, 3 + vertexOffset));
        objWriter.WriteLine(string.Format("f {0}/3/5 {1}/4/6 {2}/6/7 {3}/5/8",
            3 + vertexOffset, 4 + vertexOffset,
            6 + vertexOffset, 5 + vertexOffset));
        objWriter.WriteLine(string.Format("f {0}/5/9 {1}/6/10 {2}/8/11 {3}/7/12",
            5 + vertexOffset, 6 + vertexOffset,
            8 + vertexOffset, 7 + vertexOffset));
        objWriter.WriteLine(string.Format("f {0}/7/13 {1}/8/14 {2}/10/15 {3}/9/16",
            7 + vertexOffset, 8 + vertexOffset,
            2 + vertexOffset, 1 + vertexOffset));
        objWriter.WriteLine(
            string.Format("f {0}/2/17 {1}/11/18 {2}/12/19 {3}/4/20",
            2 + vertexOffset, 8 + vertexOffset,
            6 + vertexOffset, 4 + vertexOffset));
        objWriter.WriteLine(
            string.Format("f {0}/13/21 {1}/1/22 {2}/3/23 {3}/14/24",
            7 + vertexOffset, 1 + vertexOffset,
            3 + vertexOffset, 5 + vertexOffset));

        vertexOffset += 8;
    }
}
Rather than generate a cube object for each pixel, the code identifies the largest rectangular region of the same color starting at the current pixel and generates a single cube to cover the whole region. The pixels in the region are then cleared to black so that they are not processed again. No geometry is generated for black pixels.
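The FindRectangle and ClearRectangle helpers are not listed here. A simplified sketch of what they might do follows; it returns exclusive end coordinates so they can be used directly as the far corner of the cube, matching the vertex calculations above. The actual implementation in LineMesh.cs may differ in its details:

static void FindRectangle(int x, int z, Bitmap input, out int xEnd, out int zEnd)
{
    int match = input.GetPixel(x, z).ToArgb();

    // grow the run of matching pixels to the right along the starting row
    xEnd = x + 1;
    while (xEnd < input.Width && input.GetPixel(xEnd, z).ToArgb() == match)
        xEnd++;

    // extend the run downward while every pixel in the new row still matches
    zEnd = z + 1;
    while (zEnd < input.Height)
    {
        bool rowMatches = true;
        for (int xi = x; xi < xEnd; xi++)
        {
            if (input.GetPixel(xi, zEnd).ToArgb() != match)
            {
                rowMatches = false;
                break;
            }
        }
        if (!rowMatches)
            break;
        zEnd++;
    }
}

static void ClearRectangle(int x, int z, int xEnd, int zEnd, Bitmap input)
{
    // black pixels generate no geometry, so clearing the covered
    // region prevents it from being processed a second time
    for (int zi = z; zi < zEnd; zi++)
        for (int xi = x; xi < xEnd; xi++)
            input.SetPixel(xi, zi, Color.FromArgb(0, 0, 0));
}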
Figure 12-23 shows a bitmap that you could use to generate lines in the Simulation Environment.
Figure 12-23
White is used for the line to be followed because it provides a high brightness contrast with a dark floor material. A typical command line to generate a mesh from such a bitmap would be as follows:
Linemesh /i:line.bmp /s:0.2
This command generates two output files called line.obj and line.mtl. By default each pixel becomes a 10 cm cube. The /s parameter scales this by 0.2 so that each pixel is only 2 cm in each dimension.
The resulting mesh is shown in the Simulation Environment in Figure 12-24.
Figure 12-24
The mesh can be positioned vertically so that it barely protrudes from the ground by either adjusting the mesh transform for the entity or by moving the entity lower. The result is shown in Figure 12-25.
Figure 12-25
Building a Brightness Sensor
The next big problem that needs to be solved is that no simulated brightness sensor is shipped with the MRDS SDK. That means you need to implement your own!
You can find the source code for this sensor in the ProMRDS\Chapter12\SimPhotoCell directory. It was very easy to implement this sensor by making some minor modifications to the SimulatedIR sensor presented in Chapter 6.
The service was changed to work with a CameraEntity, rather than a CorobotIREntity. The service periodically retrieves frames from the camera, converts the pixels in the image to luminance values, and then finds the average luminance for the entire image. When the service receives a Get message, it simply returns the most recently calculated brightness (luminance) value.
The code to retrieve the camera frame and calculate the brightness value is shown here:
private IEnumerator<ITask> CheckForStateChange(DateTime timeout)
{
    while (true)
    {
        if (_entity != null)
        {
            // get the image from the CameraEntity
            PortSet<Bitmap, Exception> result = new PortSet<Bitmap, Exception>();
            _entity.CaptureScene(
                System.Drawing.Imaging.ImageFormat.MemoryBmp, result);

            double brightness = 0;
            yield return Arbiter.Choice(result,
                delegate(Bitmap b)
                {
                    int count = 0;
                    for (int y = 0; y < b.Height; y++)
                    {
                        for (int x = 0; x < b.Width; x++)
                        {
                            Color c = b.GetPixel(x, y);
                            brightness += (c.R * 0.3) +
                                (c.G * 0.59) + (c.B * 0.11);
                            count++;
                        }
                    }
                    // calculate the average brightness, scale it to (0-1) range
                    brightness = brightness / (count * 255.0);
                },
                delegate(Exception ex)
                {
                }
            );

            if (Math.Abs(brightness - _previousBrightness) > 0.05)
            {
                // send notification of state change
                _state.RawMeasurement = brightness;
                _state.NormalizedMeasurement =
                    _state.RawMeasurement / _state.RawMeasurementRange;
                _state.TimeStamp = DateTime.Now;
                _state.Pose = _entity.State.Pose;
                base.SendNotification<Replace>(_submgrPort, _state);
                _previousBrightness = (float)brightness;
            }
        }
        yield return Arbiter.Receive(false, TimeoutPort(200), delegate { });
    }
}
This is implemented with an iterator that runs in an endless loop. If the camera entity that this service is partnered with has been initialized in the Simulation Environment, then _entity will be non-null. In this case, CaptureScene is called on the entity to retrieve the latest rendered frame. A CameraEntity used with this service will typically have a very small number of pixels and a small field of view to simulate the limited field of view of a typical photo sensor.
When the image has been successfully retrieved, the code steps through the image calculating the brightness value for each pixel. When all of the brightness values have been summed, the average is calculated and scaled to lie within the range 0.0 to 1.0.
If the brightness has changed substantially since the previous brightness was calculated, then the service state is updated with the new brightness value and a notification is sent to other services. Finally, the handler waits for 200 milliseconds and then repeats the whole process.
Two different services are actually built from the same SimPhotoCell.cs file: one for the left sensor and one for the right sensor. Each service has a unique contract identifier even though it uses the same code. This is to work around a bug in the 1.5 version of VPL mentioned in the previous chapter whereby VPL does not work properly with two services of the same type specified in the same manifest.
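One straightforward way to produce two services with different contract identifiers from a single source file is to build the file twice with a conditional compilation symbol that selects between two contract strings. The symbol and URIs below are purely illustrative; the actual SimPhotoCell project may organize this differently:

public sealed class Contract
{
#if LEFT_SENSOR
    public const string Identifier = "http://schemas.tempuri.org/2008/01/simphotocellleft.html";
#else
    public const string Identifier = "http://schemas.tempuri.org/2008/01/simphotocellright.html";
#endif
}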
Building the Simulation Environment
This section goes into more detail about how the line follower Simulation Environment was built. If you are only interested in the VPL line-follower algorithm, you can skip this section.
The line follower Simulation Environment began life as the Simulation Tutorial 1 environment, shown in Figure 12-26.
Figure 12-26
Using the simulation editor, the sphere was deleted and the kinematic flag was set in the state of the box. This enables us to position the cube without regard to forces acting on it from the physics engine. The cube was moved below the ground plane so that it wouldn't interfere with our line-following robot. Then a custom mesh called line.obj was specified in its state, which caused our line mesh to be loaded and associated with the box. The transform on the mesh was modified to raise the mesh until it just showed above the ground. Finally, the texture map applied to the ground was changed to Carpet_gray.jpg to provide more contrast between the line and the ground. The combined physics and visual view of the resulting scene is shown in Figure 12-27.
Figure 12-27
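These edits were made interactively in the Simulation Editor, but the same entity state can also be set in code when the box entity is created. The fragment below is only a sketch using the standard simulation entity properties; the offset values are illustrative:

// make the box kinematic so the physics engine cannot move it
boxEntity.State.Flags |= EntitySimulationModifiers.Kinematic;

// drop the box below the ground plane, out of the robot's way
boxEntity.State.Pose = new Pose(new Vector3(0, -0.5f, 0));

// associate the generated line mesh with the box and raise the
// mesh transform until the line just shows above the ground
boxEntity.State.Assets.Mesh = "line.obj";
boxEntity.MeshTranslation = new Vector3(0, 0.51f, 0);

// use a darker carpet texture on the ground for better contrast
groundEntity.State.Assets.DefaultTexture = "Carpet_gray.jpg";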
To insert a robot in the environment, you save the scene and then load the iRobot.Create.Simulation.Manifest.xml manifest into the simulator. Again using the Simulation Editor, the iRobot Create entity is cut from the scene. The previous scene is reloaded and the iRobot Create is pasted and placed in the proper position.
Now you have a robot, but it doesn't have any photocell sensors on it. The SimPhotoCell service is designed to run with a CameraEntity, so a CameraEntity named LeftCam is added to the environment as a child of the iRobot Create entity. This camera is given a resolution of only four pixels by four pixels, its near plane is adjusted to 0.001 meters, and its far plane is set to 50 meters. The field of view is set to only five degrees.
The camera is positioned 7 cm left of the robot’s center line and 14 cm forward of the center. The camera is copied and then pasted as a child of the iRobot Create. The copy is given the name “RightCam” and positioned the same distance forward and 7 cm right of the robot’s center line. Figure 12-28 shows the position of the cameras in the iRobot Create.
Figure 12-28
The cameras are rotated to point down at the ground and are set as real-time cameras. Finally, the ServiceContract field of each of the cameras is set to the contract identifier of the SimPhotoCell service that would be partnered with it. The entire scene is then saved. The simulation state file is called LineScene.xml and the manifest that runs all of the services is called LineScene.manifest.xml.
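Although the cameras were added through the Simulation Editor, a similar CameraEntity could be constructed in code. The sketch below shows the general idea for the left camera; the constructor overload, the mounting height, and the sign conventions for left and forward are assumptions, and the near/far planes and ServiceContract field described above are omitted because they were set through the editor:

// a 4 x 4 pixel camera with a 5 degree field of view (2.5 degree half angle, in radians)
CameraEntity leftCam = new CameraEntity(4, 4, (float)(2.5 * Math.PI / 180.0));
leftCam.State.Name = "LeftCam";

// 7 cm left of the center line and 14 cm forward, rotated to point down at the ground
leftCam.State.Pose = new Pose(
    new Vector3(-0.07f, 0.1f, -0.14f),
    Quaternion.FromAxisAngle(1, 0, 0, (float)(-Math.PI / 2)));

// render this camera every frame so the photocell service always sees a current image
leftCam.IsRealTimeCamera = true;

// attach the camera as a child of the iRobot Create entity
createEntity.InsertEntity(leftCam);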
Now the simulation scene is ready to be controlled by a VPL diagram.
The Line-Follower VPL Diagram
This diagram is much simpler than the previous few examples covered in this chapter. It can be found in the ProMRDS\Chapter12\FollowLine directory. It is shown in its entirety in Figure 12-29.
Figure 12-29
The LeftPhotoCell and RightPhotoCell activities represent the services partnered with the sensors on the simulated iRobot Create. When they send a notification indicating that the brightness value has changed, the LeftLine or RightLine state variables are set to true if the brightness value is greater than 0.7. This value was determined experimentally by driving the robot over the line and looking at the SimPhotoCell sensor state using a browser.
Each time a notification is received, the diagram must determine how to set the motor speed for the left and right wheels. The robot wants to have its right sensor over the line, with its left sensor not over the line. If this is the case, the diagram sends a SetDriveSpeed message to the generic differential drive, which causes it to move straight ahead. If both sensors are over the line, then you assume that the robot has strayed to the right too far and you move it back left by setting the left speed to 0.05 and the right speed to 0.2. If neither sensor is over the line, then you assume that the robot has strayed too far to the left and you move it back by setting the right speed to 0.05 and the left speed to 0.2. Finally, if the left sensor is over the line but the right sensor is not, the robot is too far to the right and you move it back to the left by setting the left wheel speed to 0.05 and the right wheel speed to 0.2.
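The decision logic is easier to see written out as ordinary code. The following C# fragment restates the behavior of the diagram using hypothetical variable names; the straight-ahead speed of 0.2 is an assumption, and in the actual diagram the result is sent as a SetDriveSpeed request to the generic differential drive:

// notifications from the photocell services update these flags;
// the 0.7 threshold was determined experimentally
bool leftLine = leftBrightness > 0.7;
bool rightLine = rightBrightness > 0.7;

double leftSpeed, rightSpeed;
if (rightLine && !leftLine)
{
    // right sensor over the line, left sensor off: drive straight ahead
    leftSpeed = 0.2;
    rightSpeed = 0.2;
}
else if (leftLine && rightLine)
{
    // both sensors over the line: strayed too far right, curve back to the left
    leftSpeed = 0.05;
    rightSpeed = 0.2;
}
else if (!leftLine && !rightLine)
{
    // neither sensor over the line: strayed too far left, curve back to the right
    leftSpeed = 0.2;
    rightSpeed = 0.05;
}
else
{
    // only the left sensor over the line: well off to the right, curve back to the left
    leftSpeed = 0.05;
    rightSpeed = 0.2;
}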
As long as sensor notifications continue to arrive in a timely manner, this is all that is required to keep the robot traveling along the line. In some cases, the line might turn so abruptly that the robot overshoots it. In this case, the robot does what it normally does when neither sensor is over the line: it curves to the right (and usually re-acquires the line). This problem can be minimized by making the robot move more slowly or by defining a line that doesn’t turn so abruptly.
This diagram does nothing to force the robot to traverse the line in any particular direction, so it is possible for the robot to become confused, turn around, and follow the line in the opposite direction.