Chapter 7: Using Orchestration Services to Build a Simulation Scenario
private void ValidateFrameHandler(webcam.QueryFrameResponse cameraFrame)
{
    try
    {
        if (cameraFrame.Frame != null)
        {
            DateTime begin = DateTime.Now;
            double msFrame = begin.Subtract(cameraFrame.TimeStamp).TotalMilliseconds;
            // Ignore old images!
            if (msFrame < 1000.0)
            {
                _imageProcessResult = ProcessImage(
                    cameraFrame.Size.Width, cameraFrame.Size.Height, cameraFrame.Frame);
                WinFormsServicePort.FormInvoke(
                    delegate()
                    {
                        _magellanUI.SetCameraImage(cameraFrame.Frame, _imageProcessResult);
                    }
                );
            }
        }
    }
    catch (Exception ex)
    {
        LogError(ex);
    }
}
Add an ImageProcessResult member variable to the SimMagellan service and add the definition for the ImageProcessResult class:
// latest image processing result
ImageProcessResult _imageProcessResult = null;

[DataContract]
public class ImageProcessResult
{
    [DataMember]
    public int XMean;
    [DataMember]
    public float RightFromCenter;
    [DataMember]
    public int YMean;
    [DataMember]
    public float DownFromCenter;
    [DataMember]
    public int Area;
    [DataMember]
    public DateTime TimeStamp;
    [DataMember]
    public int AreaThreshold = 50 * 50;
}
The definition of the SetCameraImage method on the SimMagellanUI class is fairly simple. It creates an empty bitmap and attaches it to the PictureBox control on the form if one doesn’t already exist. It locks the bitmap and then copies the bits from the camera frame. After unlocking the bitmap, it draws a rectangle around the area identified by the ImageProcessResult if the area of the result is large enough. It also draws a small crosshair in the center of the area. The presence or absence of the white rectangle is a good way to know when the vision algorithm detects an object:
public void SetCameraImage(byte[] frame, ImageProcessResult result)
{
    if (_cameraImage.Image == null)
        _cameraImage.Image = new Bitmap(
            320, 240, System.Drawing.Imaging.PixelFormat.Format24bppRgb);

    Bitmap bmp = (Bitmap)_cameraImage.Image;
    System.Drawing.Imaging.BitmapData bmd = bmp.LockBits(
        new Rectangle(0, 0, bmp.Width, bmp.Height),
        System.Drawing.Imaging.ImageLockMode.ReadWrite,
        bmp.PixelFormat);
    System.Runtime.InteropServices.Marshal.Copy(
        frame, 0, bmd.Scan0, frame.Length);
    bmp.UnlockBits(bmd);

    if (result.Area > result.AreaThreshold)
    {
        int size = (int)Math.Sqrt(result.Area);
        int offset = size / 2;
        Graphics grfx = Graphics.FromImage(bmp);
        grfx.DrawRectangle(
            Pens.White,
            result.XMean - offset, result.YMean - offset,
            size, size);
        grfx.DrawLine(
            Pens.White,
            result.XMean - 5, result.YMean,
            result.XMean + 5, result.YMean);
        grfx.DrawLine(
            Pens.White,
            result.XMean, result.YMean - 5,
            result.XMean, result.YMean + 5);
    }
    _cameraImage.Invalidate();
}
The last method that should be explained is the ProcessImage function. Many vision algorithms do sophisticated edge detection or gradient processing. This simple algorithm is actually adapted from the vision algorithm shipped as part of the MRDS Sumo Competition package. It simply scans through the pixels, counting how many meet a color criterion and tracking the average position of those pixels in the image:
private ImageProcessResult ProcessImage(int width, int height, byte[] pixels)
{
    if (pixels == null || width < 1 || height < 1 || pixels.Length < 1)
        return null;

    int offset = 0;
    float threshold = 2.5f;
    int xMean = 0;
    int yMean = 0;
    int area = 0;

    // only process every fourth pixel
    for (int y = 0; y < height; y += 2)
    {
        offset = y * width * 3;
        for (int x = 0; x < width; x += 2)
        {
            int r, g, b;
            b = pixels[offset++];
            g = pixels[offset++];
            r = pixels[offset++];
            float compare = b * threshold;
            if ((g > compare) && (r > compare))
            {
                // color is yellowish
                xMean += x;
                yMean += y;
                area++;
                // debug coloring
                pixels[offset - 3] = 255;
                pixels[offset - 2] = 0;
                pixels[offset - 1] = 255;
            }
            offset += 3; // skip a pixel
        }
    }

    if (area > 0)
    {
        xMean = xMean / area;
        yMean = yMean / area;
        area *= 4;
    }

    ImageProcessResult result = new ImageProcessResult();
    result.XMean = xMean;
    result.YMean = yMean;
    result.Area = area;
    result.RightFromCenter = (float)(xMean - (width / 2)) / (float)width;
    result.DownFromCenter = (float)(yMean - (height / 2)) / (float)height;
    return result;
}
In this case, you know that the cones are yellow and that there are few, if any, other yellow objects in the scene. Therefore, you look for pixels whose red and green components are each more than 2.5 times larger than their blue component. As an additional debugging aid, the code colors the identified pixels magenta, which lets you see exactly which pixels are being considered in the result.
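To make the color test concrete, here is a small standalone sketch (not from the book) of the same per-pixel check with a couple of worked values; the 2.5 factor matches the threshold variable in ProcessImage:

// Standalone illustration of the "yellowish" test ProcessImage applies
// to each sampled BGR pixel: red and green must both exceed 2.5x blue.
static bool IsYellowish(byte b, byte g, byte r)
{
    float compare = b * 2.5f;
    return (g > compare) && (r > compare);
}

// IsYellowish(b: 60,  g: 180, r: 200) -> true  (compare = 150; both pass)
// IsYellowish(b: 120, g: 130, r: 140) -> false (compare = 300; neither passes)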
To aid performance, the code analyzes only every other pixel in both width and height; in a 320 x 240 frame, only 19,200 of the 76,800 pixels are examined. Each sampled pixel therefore stands in for four pixels in the full image, which is why the final area count is multiplied by four.
This is a good time to compile your service again and observe the behavior of the vision algorithm. Run your mySimMagellan manifest. When all the services have started, click the Reset button on the SimMagellan user interface form. The form should begin displaying the image from the Corobot camera, updated five times per second. You can start the Simulation Editor by pressing F5 in the simulation window. Select the Corobot entity in the upper-left pane and select its position property in the lower-left pane. Drag the Corobot with the left mouse button while holding down the Ctrl key. Move the Corobot so that it is facing one of the street cones and verify that the image processing algorithm correctly identifies the object and colors every other yellow pixel magenta, as in Figure 7-6.
Figure 7-6
The Wander Behavior
Most of the time, the robot navigates and discovers the environment using the Wander state. The behavior in this state is intentionally left a little “fuzzy” and random. In part, this acknowledges that in the real world, positional data from GPS sensors has varying and limited accuracy. It is also an intentional way to deal with cyclical behavior. It is common to see a robot with a very specific objective get into a cycle in which it heads straight for its objective only to hit an obstacle, take a quick detour, and then head again straight for its objective and hit the same obstacle again. Without some random behavior and the ability to switch objectives, the robot may never complete the course. Once again, though, be aware that none of these behaviors is optimal and there is a lot of room for improvement and experimentation.
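For orientation, the modes referenced in this section imply an enumeration along these lines. This is a hedged reconstruction; the actual ModeType definition appears elsewhere in the chapter, so the member order and any additional members are assumptions:

// Hedged reconstruction of the state-machine modes used below,
// inferred from the cases and transitions shown in this section.
public enum ModeType
{
    Wander,         // explore, guided by the happiness function
    Approach,       // drive toward a visually detected cone
    FinalApproach,  // close the last meter using the IR sensor
    BackAway,       // retreat after touching a cone
    AvoidCollision, // react to an obstacle notification
    Finished        // all waypoints visited
}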
This is the code for the Wander case in the BehaviorLoop switch statement:
case ModeType.Wander:
{
    // check to see if we have identified a cone
    if (_imageProcessResult != null)
    {
        if (_imageProcessResult.Area > _imageProcessResult.AreaThreshold)
        {
            // we have a cone in the crosshairs
            _state.CurrentMode = ModeType.Approach;
            continue;
        }
    }

    quadDrive.DriveDifferentialFourWheelState quadDriveState = null;
    yield return Arbiter.Choice(_quadDrivePort.Get(),
        delegate(quadDrive.DriveDifferentialFourWheelState state)
        {
            quadDriveState = state;
        },
        delegate(Fault f) { }
    );
    if (quadDriveState == null)
        continue;

    double distance;
    double degrees;
    float currentHappiness = GetHappiness(quadDriveState.Position);
    if (currentHappiness > _previousHappiness)
    {
        // keep going in the same direction
        distance = 0.25;
        degrees = 0;
    }
    else
    {
        // back up and try again
        if (!driveCommandFailed)
            yield return Arbiter.Choice(
                _drivePort.DriveDistance(-0.25, _driveSpeed),
                delegate(DefaultUpdateResponseType response) { },
                delegate(Fault f) { driveCommandFailed = true; }
            );
        distance = 0.25;
        // choose an angle between -45 and -90
        degrees = (float)_random.NextDouble() * -45f - 45f;
        // restore happiness level of previous position
        _previousHappiness = _previousPreviousHappiness;
    }

    if (degrees != 0)
    {
        if (!driveCommandFailed)
            yield return Arbiter.Choice(
                _drivePort.RotateDegrees(degrees, _rotateSpeed),
                delegate(DefaultUpdateResponseType response) { },
                delegate(Fault f) { driveCommandFailed = true; }
            );
    }

    if (!driveCommandFailed)
        yield return Arbiter.Choice(
            _drivePort.DriveDistance(distance, _driveSpeed),
            delegate(DefaultUpdateResponseType response) { },
            delegate(Fault f) { driveCommandFailed = true; }
        );

    _previousPreviousHappiness = _previousHappiness;
    _previousHappiness = currentHappiness;
    break;
}
The first thing the robot does is check whether the image processing method has found a cone. If it has, it changes to the Approach state. If not, it continues wandering for another step.
Next, find out where the robot is by asking the SimulatedQuadDifferentialDrive for the current position. In the real world, this would be a GPS service and the coordinates returned would be latitude and longitude. Then, you call GetHappiness to find out how happy the robot is in this position. Being near a cone makes the robot very happy.
If the robot's happiness in this location is greater than the happiness in its previous location, it decides to continue moving in this direction for another quarter meter. Otherwise, it backs up to the previous position and picks a new direction: a random turn between -45 and -90 degrees. For example, when _random.NextDouble() returns 0.5, the robot turns -67.5 degrees.
Notice that the code checks whether any previous drive command failed while processing this state. If it did, that could mean that the IR Distance Sensor service sent a notification that the robot is closer than one meter to an obstacle, terminating the current command by sending an AllStop message. In this case, you want to abort the current state processing and get back to the top of the switch statement because the state has probably changed to AvoidCollision.
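To see where this flag fits, the surrounding loop presumably looks something like the following hedged sketch. This is an illustration only; the book's actual BehaviorLoop code may differ, and the while (true) structure is an assumption inferred from the continue and break statements in the cases:

// Hedged sketch of the surrounding iterator loop (not the book's code).
while (true)
{
    // clear any failure recorded while processing the previous mode,
    // so each state starts with a clean slate
    driveCommandFailed = false;

    switch (_state.CurrentMode)
    {
        // ... the Wander, Approach, FinalApproach, and BackAway cases
        //     shown in this chapter
    }
}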
As mentioned earlier, the GetHappiness method is a way for the robot to be motivated to move in the right direction. When it is moving toward a cone, happiness increases. Moving away from a cone decreases its happiness and causes it to rethink its life decisions. This is a fuzzy way of encouraging the robot to go in the right direction. The code is fairly simple:
float _previousHappiness = 0;
float _previousPreviousHappiness = 0;
Random _random = new Random();

float GetHappiness(Vector3 pos)
{
    float happiness = 0;
    for (int i = 0; i < _waypoints.Count; i++)
    {
        if (_waypoints[i].Visited)
            continue;
        float dx = (pos.X - _waypoints[i].Location.X);
        float dz = (pos.Z - _waypoints[i].Location.Z);
        float distance = (float)Math.Sqrt(dx * dx + dz * dz);
        float proximity = 50f - Math.Min(distance, 50f);
        if (proximity > 40f)
            proximity = proximity * proximity;
        happiness += proximity;
    }
    return happiness;
}
The code only considers waypoints that have not been visited yet. As soon as a waypoint is visited, it no longer makes the robot happy and the robot moves on to other pursuits, not unlike some people. The happiness function is the sum of a proximity term for each unvisited cone; the term grows linearly as the robot closes on a cone and is squared once the robot is within 10 meters, so nearby cones dominate. A cone more than 50 meters away contributes nothing to the happiness function.
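To make the shape of the function concrete, the following standalone sketch (not from the book) isolates the per-cone term with a few worked values:

// Standalone illustration of the per-cone term summed by GetHappiness.
static float Proximity(float distance)
{
    float proximity = 50f - Math.Min(distance, 50f); // zero beyond 50 m
    if (proximity > 40f)            // i.e., within 10 m of the cone
        proximity = proximity * proximity;
    return proximity;
}

// Proximity(60f) ->    0   (cone too far away to matter)
// Proximity(30f) ->   20   (linear region)
// Proximity(5f)  -> 2025   (45 squared; a close cone dominates the sum)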
The Approach State
When the vision processing algorithm detects that the robot is close enough to a cone to exceed the area threshold, the robot enters the Approach state, which has the following implementation:
case ModeType.Approach:
{
    float IRDistance = -1f;
    // get the IR sensor reading
    irsensor.Get tmp = new irsensor.Get();
    _irSensorPort.Post(tmp);
    yield return Arbiter.Choice(tmp.ResponsePort,
        delegate(irsensor.AnalogSensorState state)
        {
            if (state.NormalizedMeasurement < 1f)
                IRDistance = (float)state.RawMeasurement;
        },
        delegate(Fault f) { }
    );

    if (IRDistance >= 0)
    {
        // rely on the IR sensor for the final approach
        _state.CurrentMode = ModeType.FinalApproach;
        break;
    }

    if ((_imageProcessResult == null) ||
        (_imageProcessResult.Area < _imageProcessResult.AreaThreshold))
    {
        // lost it, go back to wander mode
        _state.CurrentMode = ModeType.Wander;
        continue;
    }

    float angle = _imageProcessResult.RightFromCenter * -2f * 5f;
    float distance = 0.25f;
    if (!driveCommandFailed)
        yield return Arbiter.Choice(
            _drivePort.RotateDegrees(angle, _rotateSpeed),
            delegate(DefaultUpdateResponseType response) { },
            delegate(Fault f) { driveCommandFailed = true; }
        );
    if (!driveCommandFailed)
        yield return Arbiter.Choice(
            _drivePort.DriveDistance(distance, _driveSpeed),
            delegate(DefaultUpdateResponseType response) { },
            delegate(Fault f) { driveCommandFailed = true; }
        );
    break;
}
In the Approach state, the code first checks whether the robot is within one meter of an obstacle, assuming that the obstacle is a cone. If so, the robot moves to the FinalApproach state. If the robot no longer has the cone in its sights, it goes back to Wander mode. Otherwise, it needs to get closer.
You use the _imageProcessResult.RightFromCenter value to determine how far right or left the robot needs to turn in order to center the cone in the camera frame. The numbers used here were arrived at through experimentation. The code then adjusts the angle and moves forward a quarter meter. Eventually, the robot gets close enough to the cone to enter FinalApproach mode or it loses sight of the cone and goes back into Wander mode until the cone is sighted again.
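As a rough illustration (not book code, and assuming the MRDS convention that a positive RotateDegrees value turns the robot counterclockwise), the correction works out as follows:

// Worked values for angle = RightFromCenter * -2f * 5f.
// RightFromCenter ranges from -0.5 (left edge) to +0.5 (right edge),
// so the correction is never more than 5 degrees per step.
//
//   cone at the right edge:   0.5f * -2f * 5f = -5 degrees (turn right)
//   cone slightly left:      -0.1f * -2f * 5f = +1 degree  (turn left)
//   cone dead center:         0.0f * -2f * 5f =  0 degrees (drive straight)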
The Final Approach State
The FinalApproach state has the following implementation:
case ModeType.FinalApproach:
{
    // rely on the IR sensor for the remainder of the cone approach
    float IRDistance = -1f;
    // get the IR sensor reading
    irsensor.Get tmp = new irsensor.Get();
    _irSensorPort.Post(tmp);
    yield return Arbiter.Choice(tmp.ResponsePort,
        delegate(irsensor.AnalogSensorState state)
        {
            if (state.NormalizedMeasurement < 1f)
                IRDistance = (float)state.RawMeasurement;
        },
        delegate(Fault f) { }
    );

    if (IRDistance < 0)
    {
        // go back to the visual approach
        _state.CurrentMode = ModeType.Approach;
        break;
    }

    if (!driveCommandFailed)
        yield return Arbiter.Choice(
            _drivePort.DriveDistance(IRDistance, 0.2),
            delegate(DefaultUpdateResponseType response) { },
            delegate(Fault f) { driveCommandFailed = true; }
        );

    // mark the cone as touched
    quadDrive.DriveDifferentialFourWheelState quadDriveState = null;
    yield return Arbiter.Choice(_quadDrivePort.Get(),
        delegate(quadDrive.DriveDifferentialFourWheelState state)
        {
            quadDriveState = state;
        },
        delegate(Fault f) { }
    );
    if (quadDriveState != null)
    {
        for (int i = 0; i < _waypoints.Count; i++)
        {
            float dx = (quadDriveState.Position.X - _waypoints[i].Location.X);
            float dz = (quadDriveState.Position.Z - _waypoints[i].Location.Z);
            float distance = (float)Math.Sqrt(dx * dx + dz * dz);
            if (distance < 1f)
                _waypoints[i].Visited = true;
        }
    }

    _state.CurrentMode = ModeType.BackAway;
    break;
}
In the FinalApproach state, you rely only on the IR Distance Sensor. The robot takes a reading of the distance to the cone and then moves exactly that far forward. In the real world, where IR distance readings are not always exact, it would probably be necessary to approach the cone in smaller steps. When the robot moves close enough to touch the cone, you ask the drive for the robot's position, determine which cone the robot touched, and mark it as visited. Then the robot goes into the BackAway state, which moves it away from the cone.
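One hedged sketch of that incremental variant, reusing the same ports and driveCommandFailed flag from the case above (an illustration of the idea, not the book's code):

// Illustration only: approach in steps of at most 0.25 m, re-reading the
// IR sensor between steps instead of trusting one reading for the whole
// distance. Assumes the same _drivePort, _irSensorPort, and
// driveCommandFailed members used in the FinalApproach case.
while (IRDistance > 0.05f && !driveCommandFailed)
{
    float step = Math.Min(IRDistance, 0.25f);
    yield return Arbiter.Choice(
        _drivePort.DriveDistance(step, 0.2),
        delegate(DefaultUpdateResponseType response) { },
        delegate(Fault f) { driveCommandFailed = true; }
    );

    // take a fresh reading before the next step
    irsensor.Get probe = new irsensor.Get();
    _irSensorPort.Post(probe);
    yield return Arbiter.Choice(probe.ResponsePort,
        delegate(irsensor.AnalogSensorState state)
        {
            IRDistance = (state.NormalizedMeasurement < 1f)
                ? (float)state.RawMeasurement
                : 0f;
        },
        delegate(Fault f) { }
    );
}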
The Back Away State
The BackAway state simply moves the robot away from the cone it just touched and points it in a different direction:
case ModeType.BackAway:
{
    // back away and turn around
    if (!driveCommandFailed)
        yield return Arbiter.Choice(
            _drivePort.DriveDistance(-1, _driveSpeed),
            delegate(DefaultUpdateResponseType response) { },
            delegate(Fault f) { driveCommandFailed = true; }
        );
    if (!driveCommandFailed)
        yield return Arbiter.Choice(
            _drivePort.RotateDegrees(45, _rotateSpeed),
            delegate(DefaultUpdateResponseType response) { },
            delegate(Fault f) { driveCommandFailed = true; }
        );

    _state.CurrentMode = ModeType.Finished;
    for (int i = 0; i < _waypoints.Count; i++)
    {
        if (!_waypoints[i].Visited)
        {
            _state.CurrentMode = ModeType.Wander;
            _previousHappiness = 0;
            _previousPreviousHappiness = 0;
            break;
        }
    }

    if (_state.CurrentMode == ModeType.Finished)