Professional Microsoft Robotics Developer Studio

Chapter 15: Using a Robotic Arm

MotionRecorder.cs does all the work of controlling the arm and is explained further in this section. It specifies the Lynx6Arm service as a partner. Notice that it uses the generic arm contract and creates a port called _lynxPort:

[Partner("Lynx6Arm", Contract = arm.Contract.Identifier,
    CreationPolicy = PartnerCreationPolicy.UseExisting, Optional = false)]
private arm.ArticulatedArmOperations _lynxPort =
    new arm.ArticulatedArmOperations();

A lot of the code in MotionRecorderUI.cs is for validating input data, reading and writing files, and so on. This has nothing to do with MRDS, but unfortunately it has to be included.

MotionRecorderUI.cs uses the standard process for a WinForm to communicate with the main service. (If you are unsure about how this works, refer back to Chapter 4; only a very brief explanation is provided here.) It uses a set of classes that are defined in a similar way to the operations for a service, except that it is not a full service interface. Instead, it posts all of its requests using a single operation class and passes across an object that represents the function to be performed. This helps to reduce the number of operations in the main service's port set so that the limit of 20 operations is not exceeded.
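The single-operation pattern described above can be sketched in outline. The sketch below uses Python for brevity, with class and member names borrowed from the source (FromWinformMsg, MsgEnum); treat it as an illustration of the pattern, not actual MRDS code.

```python
from enum import Enum

class MsgEnum(Enum):
    """Command tags carried by the single message class (names follow
    the source; the real service defines more commands than these)."""
    LOADED = 1
    GET_POSE = 2

class FromWinformMsg:
    """One message class for all Form-to-service requests: a command
    tag selects the function to perform, so the service needs only a
    single operation in its port set instead of one per function."""
    def __init__(self, command, data=None, obj=None):
        self.command = command
        self.data = data
        self.object = obj

# The Form posts a Loaded message carrying a handle to itself
msg = FromWinformMsg(MsgEnum.LOADED, None, "form-handle")
```

The service-side handler then switches on `msg.command`, which is exactly what the FromWinformMsg handler shown later in this section does.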

For example, when the Form is first created it sends a handle to itself back to the main service. This lets the main service know that the Form is running so that it can enumerate the joints in the arm. In the Form, the code is as follows:

// Send a message to say that the init is now complete
// This also passes back a handle to the Form
_fromWinformPort.Post(
    new FromWinformMsg(FromWinformMsg.MsgEnum.Loaded, null, this));

Back in the main service, the FromWinformMsg handler processes the request and executes the appropriate code based on the command type, which in this case is Loaded:

// Process messages from the UI Form
IEnumerator<ITask> OnWinformMessageHandler(FromWinformMsg msg)
{
    switch (msg.Command)
    {
        case FromWinformMsg.MsgEnum.Loaded:  // Windows Form is ready to go
            _motionRecorderUI = (MotionRecorderUI)msg.Object;
            SpawnIterator(EnumerateJoints);
            break;

EnumerateJoints issues a Get request to the Lynx6Arm service to get its state. Notice that it does this in a loop because the arm service might not be up and running properly, in which case there will be a Fault. To ensure that it does not loop forever, there is a counter:

IEnumerator<ITask> EnumerateJoints()
{
    int counter = 0;
    arm.ArticulatedArmState armState = null;

    // Get the arm state because it contains the list of joints
    // Wait until we get a response from the arm service
    // If it is not ready, then null will be returned
    // However, we don't want to do this forever!
    while (armState == null)
    {
        // Get the arm state, which contains a list of all the joints
        yield return Arbiter.Choice(
            _lynxPort.Get(new GetRequestType()),
            delegate(arm.ArticulatedArmState state) { armState = state; },
            delegate(Fault f) { LogError(f); }
        );

        // This is not good error handling!!!
        if (armState == null)
        {
            counter++;
            if (counter > 20)
            {
                Console.WriteLine("***** Could not get arm state. Is the arm turned on? *****");
                //throw (new Exception("Could not get arm state!"));
                yield break;
            }

            // Wait a little while before trying again ...
            yield return Arbiter.Receive(false, TimeoutPort(500),
                delegate(DateTime now) { });
        }
    }

The armState contains a list of the joints. A list of the joint names is created and then the Form is updated using FormInvoke, which executes a delegate in the context of the Form. The Form contains the function ReplaceArticulatedArmJointList, which updates the list of joint names inside the Form so that it can also refer to them by name:

// Create the new list of names
_jointNames = new string[armState.Joints.Count];
for (int i = 0; i < armState.Joints.Count; i++)
{
    _jointNames[i] = armState.Joints[i].State.Name;
}

// Now update the names in the Form so that it can do
// whatever it needs to with them as well
WinFormsServicePort.FormInvoke(delegate()
{
    _motionRecorderUI.ReplaceArticulatedArmJointList(_jointNames);
});

In addition to setting the joint names, the initial joint angles are also set, but the code is not shown here. That completes the startup process.


Using the Motion Recorder

The Motion Recorder does not automatically record anything. You must click the Record button (near the bottom of the window) to record the current command. You can play around using the slider controls, and as you move them the arm will follow (and the angle values in the textboxes will update). When you are happy with the position of the arm, you can record it. By doing this repeatedly, you can build up a complete sequence of motions to perform a particular task.

Warning: Do not move the sliders too fast!

The UI contains four basic areas. At the top is the Control Panel, which enables you to execute motion commands. Next are the Arm Pose and Joint Positions panels, which provide values for motions. Finally, there is an area for recording, saving, and replaying commands.

The most important button is the Execute button, which calls the main service to perform either a SetEndEffectorPose if the Use Pose radio button is selected, or a sequence of separate SetJointTargetPose operations (one for each joint) if the Use Joints radio button is selected. These two operations are the crux of the application, so the code for them is discussed here.

As you move the sliders around, the trackbar controls continually fire off requests to the main service to move the corresponding joints. Once the main service has decoded the FromWinformMsg it executes the MoveJoint function using the values passed from the Form:

void MoveJoint(string name, float angle)
{
    arm.SetJointTargetPoseRequest req = new arm.SetJointTargetPoseRequest();
    req.JointName = name;
    AxisAngle aa = new AxisAngle(
        new Vector3(1, 0, 0),
        (float)(Math.PI * ((double)angle / 180)));
    req.TargetOrientation = aa;
    _lynxPort.SetJointTargetPose(req);
}

Notice that the SetJointTargetPoseRequest requires the name of the joint and the target angle, expressed as an AxisAngle. By convention, the axis angles are all with respect to the X axis. In addition, the angle from the Form is in degrees, so it has to be converted to radians.
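The degree-to-radian conversion buried in the AxisAngle constructor call is just a scaling by pi/180. Sketched in Python for clarity (a language-neutral illustration, not MRDS code):

```python
import math

def axis_angle_from_degrees(angle_deg):
    """Return the (axis, radians) pair used to build the AxisAngle:
    by convention the axis is always X, and the Form's degree value
    is scaled to radians."""
    axis = (1.0, 0.0, 0.0)  # all joint rotations use the X axis
    radians = math.pi * angle_deg / 180.0
    return axis, radians

axis, rad = axis_angle_from_degrees(90.0)
# rad == math.pi / 2 for a 90-degree joint angle
```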

When Use Joints is selected, the Execute button invokes a different method in the main service called MoveMultipleJoints, but this simply calls MoveJoint for each of the joints because the generic arm contract does not have a request to set multiple joints at the same time. The disadvantage of this approach is that the different joint movements are not coordinated.
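That per-joint fallback amounts to a simple loop. A minimal sketch (the helper names here are hypothetical stand-ins, not MRDS identifiers):

```python
def move_multiple_joints(joint_names, angles, move_joint):
    """Issue one single-joint request per joint, as MoveMultipleJoints
    does, because the generic arm contract has no multi-joint request.
    `move_joint` stands in for the service's MoveJoint function; since
    each request is sent separately, the movements are uncoordinated."""
    for name, angle in zip(joint_names, angles):
        move_joint(name, angle)

# Record the requests instead of driving real hardware
calls = []
move_multiple_joints(["Shoulder", "Elbow"], [20.0, -50.0],
                     lambda n, a: calls.append((n, a)))
# calls == [("Shoulder", 20.0), ("Elbow", -50.0)]
```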

Conversely, if Use Pose is selected when the Execute button is pressed, it sends a request to run the MoveToPosition function in the main service. This function sends a SetEndEffectorPose request to the Lynx6Arm service, but first it has to create a quaternion for the pose. This is a somewhat painful process that is not helped by the fact that the proxy versions of the classes don't have the necessary static methods. After several lines of code, the request is finally sent. Then the wrist rotate angle and the gripper are set independently because SetEndEffectorPose does not do these:

public SuccessFailurePort MoveToPosition(
    float mx,    // x position
    float my,    // y position
    float mz,    // z position
    float p,     // angle of the grip
    float w,     // rotation of the grip
    float grip,  // distance the grip is open
    float time)  // time to complete the movement
{
    // Create a request
    arm.SetEndEffectorPoseRequest req = new arm.SetEndEffectorPoseRequest();
    Pose rp = new Pose();
    Vector3 v = new Vector3(mx, my, mz);
    rp.Position = v;
    physicalmodel.Quaternion q = new physicalmodel.Quaternion();
    q = physicalmodel.Quaternion.FromAxisAngle(1, 0, 0,
        Conversions.DegreesToRadians(p));
    Quaternion qp = new Quaternion(q.X, q.Y, q.Z, q.W);
    rp.Orientation = qp;
    req.EndEffectorPose = rp;
    _lynxPort.SetEndEffectorPose(req);

    // Set the positions of the wrist rotate and the gripper as well
    MoveJoint(_jointNames[(int)lynx.JointNumbers.WristRotate], w);
    MoveJoint(_jointNames[(int)lynx.JointNumbers.Gripper], grip);

    // Now return success
    SuccessFailurePort responsePort = new SuccessFailurePort();
    responsePort.Post(new SuccessResult());
    return responsePort;
}
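The FromAxisAngle call above implements the standard axis-angle-to-quaternion formula. For reference, the underlying math can be sketched as follows (assuming a unit axis; a generic illustration, not the MRDS implementation):

```python
import math

def quaternion_from_axis_angle(x, y, z, angle_rad):
    """Standard conversion: q = (axis * sin(a/2), cos(a/2)),
    returned as (X, Y, Z, W) to match the field order used above."""
    half = angle_rad / 2.0
    s = math.sin(half)
    return (x * s, y * s, z * s, math.cos(half))

q = quaternion_from_axis_angle(1.0, 0.0, 0.0, math.pi / 2)
# a 90-degree tilt about X: (sin 45°, 0, 0, cos 45°)
```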

There are a couple of advantages to using SetEndEffectorPose. You can set precisely where the arm will go by entering values into the Arm Pose panel. You can also specify the tilt angle of the gripper, which is important if you want the robot to pick something up. More importantly, the motion is better coordinated because, as explained earlier, it sends a single command to the SSC-32 to move all the joints.

The disadvantage of using the Arm Pose panel is that it isn’t always easy to pick valid values. Moreover, sometimes the results might be a little surprising. If the arm doesn’t move, look in the MRDS DOS command window for an error message. It is possible that you selected an invalid pose, which causes the arm service to generate a Fault.

When you click the Get Pose button, the code sends a FromWinformMsg request to the main service containing the command GetPose. In the main service, the FromWinformMsg handler decodes the command and calls GetArmHandler. This handler makes a GetEndEffectorPose request to the Lynx6Arm service and then uses FormInvoke to return the results to the Form, where they appear in the textboxes across the top. The slider controls are also updated to match the current positions of the joints. A Copy button beside the pose values enables you to transfer the pose information to the Arm Pose textboxes.


Alternatively, you can enter values into the textboxes beside the joint sliders. Then when you click Execute (with Use Joints selected), the joint angles will be set.

The Save button provides a quick way to save all the values in the Arm Pose and Joint Position textboxes. You can get them back later using Restore.

The Reset button sets all of the angles to zero, which should be the initial setting when the arm starts up. Park sends the arm to a “parked” position, but it is quite arbitrary. You can change the joint angles for each of these at the top of the main service:

// Joint angles for the Reset and Park commands
float[] _resetAngles = { 0, 0, 0, 0, 0, 0 };
float[] _parkAngles = { 0, 20, -50, 30, 0, 0 };

The remaining buttons across the bottom of the window perform the following functions:

Record: Saves the values of either the Arm Pose or Joint Positions, depending on whether Use Pose or Use Joints is selected, into the command history list at the bottom left

Delay: Adds a delay command to the command history list using the value in the Time field

Delete: Removes the currently selected item from the command history list

Copy: Copies the values from the currently selected command to either the Arm Pose or Joint Positions textboxes depending on the type of command that is selected

Play: Replays all of the commands in the list starting from the top

Stop: Stops replaying commands

Load File: Loads a previously saved command history list file

Save File: Saves the command history list to a CSV file

Saved command history files are comma-separated value files. These are plain text and can be edited with Notepad, or loaded into Excel. An example is given in the next section. You might find it quicker to edit a file rather than use the Motion Recorder once you have saved a few key commands. Just be careful not to mess up the format.
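Each line of such a file holds a command name, six values, and a time field. A minimal parser sketch follows; the field layout is inferred from the take_object.csv example, not from a documented specification, so treat the field meanings as assumptions:

```python
def parse_history_line(line):
    """Split one command-history line into its command name, six
    pose/joint values, and a trailing time in milliseconds (layout
    inferred from the take_object.csv example file)."""
    fields = line.strip().split(",")
    return {
        "command": fields[0],
        "values": [float(v) for v in fields[1:7]],
        "time_ms": float(fields[7]),
    }

cmd = parse_history_line("Move,-80,0,-20,25,0,10,0")
# cmd["command"] == "Move"
# cmd["values"] == [-80.0, 0.0, -20.0, 25.0, 0.0, 10.0]
```

A parser like this is also handy for sanity-checking a hand-edited file before loading it into the Motion Recorder.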

Pick and Place

“Pick and place” is a very common operation for robotic arms. They can spend the whole day picking things off a conveyor belt and dropping them into bins—a mindless task that you would probably find exceedingly boring.

The basic principle is to program the arm so that it knows where to go to get an object (pick) and where to go to put it down (place). Even today, many robotic arms are manually programmed by moving them through a sequence of motions and recording the path.


A saved command history file is included in the software package on the book's website (www.proMRDS.com and www.wrox.com). It should be in the folder with the Motion Recorder source files and is called take_object.csv. The contents of this file are as follows:

Move,-80,0,-20,25,0,10,0
Delay,-80,0,-20,25,0,10,1000
Move,-80,0,-20,25,0,-25,0
Move,0,0,-20,25,0,-25,0
Move,0,-40,-10,45,0,-25,0
Delay,0,-40,-10,45,0,-25,500
Move,0,-40,-10,45,0,0,0
Move,0,-20,-10,45,0,0,0
Delay,0,-20,-10,45,0,0,1000
Move,0,-40,-10,45,0,0,0
Delay,0,-40,-10,45,0,0,500
Move,0,-40,-10,45,0,-25,0
Move,0,-20,-10,45,0,-25,0
Move,0,0,-20,25,0,-25,0
Move,-80,0,-20,25,0,-25,0
Delay,-80,0,-20,25,0,-25,500
Move,-80,0,-20,25,0,10,0

Notice that the file only contains Move and Delay commands. There is no particular reason for this other than that it was easier to move the sliders than to figure out coordinates. However, it could use Pose commands.

This file executes the following basic steps:

1. Swing the arm to the left (which should be to where you are sitting) and open the gripper in expectation of being handed something.

2. Close the gripper. The gripper setting is for an AA battery. If you want to use a different object, you should first practice with the Motion Recorder and set the gripper value accordingly. You don't want to force the gripper too far past the point where it contacts the object.

3. Turn the arm back so it faces straight ahead and put the object down.

4. Raise the arm and pause for effect. (This might be an appropriate point for applause.)

5. Pick the object up again.

6. Take it back and drop it where it came from. If you are not ready to catch it, the object — whatever it is — will fall on the ground.

To begin, load the file into the Motion Recorder and play it without using an object. Just sit and watch. Then you can try handing an object to the arm when it extends its hand out to you.

This sequence can definitely be improved, but that isn’t the point. What you have is a program that enables you to easily determine the necessary joint parameters for a sequence of operations. With this knowledge you can write services that replay the commands, or perhaps generate commands automatically based on some formula.


You need to play and replay your motion sequence repeatedly, looking for flaws. One of the main problems is that the arm might take a shortcut to its destination that brings it too close to an obstacle. Similarly, when picking up or putting down an object, make the up and down movements deliberate and distinct. Don't simply drag the object straight off to its new location, because that is exactly what might happen — it drags along the ground.

Logjam

When moving objects around, you need to remember where you put them. The arm cannot sense anything in its environment. Another problem is that the arm needs a certain amount of clearance to put down and pick up an object. Placing objects too close together is almost certain to result in some of them getting knocked over. The further the arm is extended, the less leeway there is to “reach over” an object to get to one behind it.

You will find that the Lynx 6 arm tends to wave around (oscillate) when it is extended a long way. If it waves too much, it can knock over other objects. The examples using dominoes in the simulation in Chapter 8 are probably not possible with a real L6.

In addition, remember that the gripper on the arm can only approach an object along a radial line from the base of the arm out to the tip. The wrist rotate servo is no help in turning the end of the arm to face a different direction — it just tips the gripper on its side. You might therefore find that it is easier to place objects in semi-circular patterns, rather than straight lines.
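One consequence of this radial constraint is that target positions are most naturally expressed in polar form around the base. Converting a (radius, bearing) pair to planar coordinates is a one-liner; note that the coordinate conventions below are an assumption for illustration, not taken from the Lynx6Arm service:

```python
import math

def radial_position(radius, bearing_deg):
    """Convert a radius and bearing measured from the arm base into
    planar (x, z) coordinates, convenient for laying objects out in
    the semicircular patterns suggested above."""
    b = math.radians(bearing_deg)
    return (radius * math.sin(b), radius * math.cos(b))

x, z = radial_position(0.2, 90.0)
# straight to the side of the base: x == 0.2, z is ~0
```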

Using an Overhead Camera

MRDS includes support for a webcam. If you start a webcam service and run the Dashboard, you will see your webcam listed. Selecting it brings up a separate window to show the video stream. The source code for this is included in the Dashboard service in Chapter 4.

If you placed a camera above your arm looking down, it should be a simple matter to locate objects in the image for the arm to pick up. (The simulated arm in Chapter 8 includes a camera overlooking the arm, as well as a camera mounted just behind the gripper.) As long as the objects are a significantly different color from any of the colors on the ground, you can use image subtraction to separate objects out from the background. You first take an image of the empty area as a reference. Then, as objects are placed in front of the robot, you subtract the original image from the current image. Anything that is not ground will stand out.
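The image-subtraction idea reduces to a per-pixel threshold on the difference from the reference frame. A simplified grayscale sketch, using plain nested lists and an arbitrary threshold value chosen for illustration:

```python
def subtract_background(reference, current, threshold=30):
    """Mark pixels that differ from the empty-scene reference image by
    more than `threshold`; anything that is not ground stands out as
    True. Images are 2-D lists of grayscale values for simplicity."""
    return [
        [abs(c - r) > threshold for r, c in zip(ref_row, cur_row)]
        for ref_row, cur_row in zip(reference, current)
    ]

mask = subtract_background([[0, 0], [0, 0]], [[100, 0], [0, 10]])
# mask == [[True, False], [False, False]]
```

A real implementation would also need blob grouping and some noise filtering, but the principle is the same.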

A blob tracker example included with MRDS is another alternative. It can be trained to follow objects of different colors. Regardless of how you locate the objects in the image, you have to calibrate the arm to the image. The simplest way to do this is probably to position the arm roughly near the four corners of the image and use GetEndEffectorPose to determine the X and Z coordinates (the Y coordinate is the height, which you can't tell from an overhead image). By interpolating between these points in both directions, you can determine the coordinates of any point in the image.
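The interpolation step can be written as standard bilinear interpolation between the four measured corner poses. A sketch, assuming normalized image coordinates and ignoring the perspective caveat noted in this section:

```python
def image_to_arm(u, v, corners):
    """Bilinear interpolation from normalized image coordinates
    (u, v in 0..1) to arm (x, z) coordinates, given end-effector
    poses measured near the four image corners:
    corners = ((x, z) at (0,0), at (1,0), at (0,1), at (1,1))."""
    (x00, z00), (x10, z10), (x01, z01), (x11, z11) = corners
    w00, w10, w01, w11 = (1-u)*(1-v), u*(1-v), (1-u)*v, u*v
    x = x00*w00 + x10*w10 + x01*w01 + x11*w11
    z = z00*w00 + z10*w10 + z01*w01 + z11*w11
    return x, z

x, z = image_to_arm(0.5, 0.5, ((0, 0), (10, 0), (0, 10), (10, 10)))
# the image center maps to the middle of the square: (5.0, 5.0)
```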

Note that your camera needs to be mounted directly overhead and looking straight down. If it is off to one side, then perspective will make the interpolation nonlinear. This is left for you as an exercise.


Summary

After completing this chapter successfully, you should have a good grounding in using robotic arms. If you are a student of mechatronics, then you might feel that something is missing, and there is — a heap of complex equations. If you want more details, visit the KUKA Educational Framework website (www.kuka.com).

Robotic arm applications are limited only by your imagination (and, of course, the physical constraints on your arm).

The next chapter shows you how to write services to run on an onboard PC running Windows CE or a PDA using the .NET Compact Framework (CF).


Autonomous Robots

And now for something completely different . . .

So far you have seen robots controlled directly from your desktop PC. However, it is much more fun to let them roam free. To do this you need to make them autonomous, i.e., give the robots onboard brains.

In writing a book like this it is hard to predict the skills of the audience. Some of you might have worked with MS-DOS in what now seems like a previous life, and others might be young enough to have grown up without having to deal with the horrors of command prompts, Autoexec.bat, Windows for Workgroups, and so on. Unfortunately, in some ways working with onboard computers is taking a step backwards from the comfortable environment of Windows XP or Vista that you have on your desktop PC to the bygone era of DOS.

Some of this chapter, therefore, is about introducing you to what might be new concepts and ways of programming and debugging. The robotics content is not very high, but unless you understand how to work in these environments you will not be able to create autonomous robots.

Regardless of whether you want to build an autonomous robot or not, you might find parts of this chapter useful. For example, it discusses using the .NET Compact Framework (CF) on a personal digital assistant (PDA) to control a robot via Bluetooth.

You should read the entire chapter even if you are using Windows CE because some of the material in the section on PDAs is relevant to Windows CE as well.

PC-Based Robots

It has already been pointed out several times in this book that MRDS runs on Windows. It does not generate native code for a BASIC Stamp (Boe-Bot or Scribbler), a PIC16F877 (Hemisson), an ARM (LEGO NXT), or any other microcontroller. MRDS services can talk to a monitor program running on one of these small microcomputers, but you still need Windows. Therefore, somehow you have to work a Windows-based computer into the scenario.

Windows XP-Based PCs

In late 2007, White Box Robotics announced a new robot called the 914 PC-BOT (www.whiteboxrobotics.com/PCBOTs/index.html). It's also sold by Heathkit as the HE-RObot (www.heathkit.com/herobot.html). The PC-BOT, which looks a little like R2D2, has an internal PC with a hard disk, DVD drive, speakers, webcam, and just about everything that you would expect on a PC except for a monitor, mouse, and keyboard (which you can plug in if the robot will just stand still).

The 914 PC-BOT runs Windows XP and comes with its own control software. Unfortunately, it costs many thousands of dollars and is not the type of robot that you are likely to buy if you are a beginner. It certainly qualifies as an autonomous robot, but because it has a full-blown PC embedded in it, there is no real challenge for software development apart from trying to remotely debug your application.

Even remote debugging is relatively easy with the PC-BOT because you can use the WiFi interface to connect to the robot and run a remote desktop connection, which is as good as being logged onto the robot itself. You can install Visual Studio and the full MRDS directly onto the robot’s PC and compile and debug your code onboard — no messy downloads or issues with synchronizing source code.

If you have enough money, you might want to buy a PC-BOT and skip the rest of this chapter.

Despite having demonstrated the PC-BOT with MRDS V1.0 at the launch, at the time of writing White Box was redeveloping their MRDS services, so MRDS support was not available. Check the White Box website for updates (www.whiteboxrobotics.com).

Onboard Laptops

In terms of ease of use, putting a laptop on your robot has to be the simplest solution. Ignoring the obvious question of how you connect to the sensors and actuators, the laptop runs exactly the same software that you would run on your desktop and has a monitor, keyboard, touchpad, and so on. It is no different from using a desktop. (The PC-BOT uses a USB interface called an M3 module to connect to the motors and sensors).

Evolution Robotics released a robot a few years ago called the ER1 that was basically a set of wheels onto which you could strap your laptop. The key component was a USB interface for controlling the wheels. Unfortunately, the ER1 is now only sold in a bundle with some expensive software that has pushed it out of the price range of the hobbyist or beginner roboticist.

The problem with a laptop is that it would squash most of the robots covered in this book — the Boe-Bot and LEGO NXT, for example, simply cannot carry a laptop. Even the Stinger robot used in this chapter might have trouble with a laptop, so this option is ignored.

PDAs

The next step down the food chain as far as computers are concerned is handheld devices, commonly called PDAs (personal digital assistants) or Pocket PCs.
