Embedded Robotics (Thomas Braunl, 2 ed, 2006)

12 Autonomous Vessels and Underwater Vehicles

The starboard and port motors provide forward and reverse movement, while the stern and bow motors provide depth control in both downward and upward directions. Roll is passively controlled by the vehicle’s innate righting moment (Figure 12.5). The top hull contains mostly air besides light electronics equipment, while the bottom hull contains the heavy batteries. Buoyancy therefore pulls the top cylinder up while gravity pulls the bottom cylinder down. If the AUV rolls for whatever reason (Figure 12.5, right), these two forces ensure that it rights itself.

Overall, this provides the vehicle with four actively controlled degrees of freedom (4 DOF), an ample range of motion suited to accomplishing a wide range of tasks.

Figure 12.5: Self-righting moment of the AUV (buoyancy acts on the top hull, gravity on the bottom hull)

Controllers The control system of the Mako is split between two controllers: an EyeBot microcontroller and a mini-PC. The EyeBot controls the AUV’s movement through its four thrusters and reads its sensors; it can run a completely autonomous mission without the secondary controller. The mini-PC, with a 233 MHz Cyrix processor, 32 MB of RAM, and a 5 GB hard drive running Linux, has the sole function of providing processing power for the computationally intensive vision system.

Motor controllers designed and built specifically for the thrusters provide both speed and direction control. Each motor controller interfaces with the EyeBot controller via two servo ports. Due to the high current used by the thrusters, each motor controller produces a large amount of heat. To keep the temperature inside the hull from rising too high and damaging electronic components, a heat sink attached to the motor controller circuit on the outer hull was devised. Hence, the water continuously cools the heat sink and allows the temperature inside the hull to remain at an acceptable level.
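Since each motor controller accepts speed and direction over two servo ports, the signed thrust command from the EyeBot has to be split into a direction flag and a magnitude. A minimal sketch of such a mapping follows; the 0–255 magnitude range and the flag encoding are assumptions for illustration, not the actual Mako protocol.

```c
#include <stdlib.h>

/* Split a signed thrust command (-100..100 percent) into the two values
 * sent over the servo ports: a direction flag and a PWM magnitude.
 * The 0..255 range and flag encoding are assumed, not the actual
 * Mako motor-controller protocol. */
typedef struct { int forward; int magnitude; } thruster_cmd;

thruster_cmd split_thrust(int percent)
{
    thruster_cmd c;
    if (percent > 100)  percent = 100;    /* clamp to valid range */
    if (percent < -100) percent = -100;
    c.forward   = (percent >= 0);
    c.magnitude = abs(percent) * 255 / 100;
    return c;
}
```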

Sensors The sonar/navigation system utilizes an array of Navman Depth2100 echo sounders, operating at 200 kHz. One of these sensors is facing down and thereby providing an effective depth sensor (assuming the pool depth is known), while the other three sensors are used as distance sensors pointing forward, left, and right. An auxiliary control board, based on a PIC controller, has been designed to multiplex the four sonars and connect to the EyeBot [Alfirevich 2005].
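The downward-facing sounder yields vehicle depth by simple subtraction from the known pool depth. A sketch of that conversion (function and parameter names are illustrative, not taken from the Mako software):

```c
/* Convert the downward echo sounder's range reading into vehicle depth,
 * assuming the pool depth is known.  Names and the clamping behaviour
 * are illustrative, not taken from the Mako software. */
double depth_from_sounder(double pool_depth_m, double range_below_m)
{
    double depth = pool_depth_m - range_below_m;
    if (depth < 0.0) depth = 0.0;   /* guard against noise near the surface */
    return depth;
}
```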


Figure 12.6: Mako in operation

A low-cost Vector 2X digital magnetic compass module provides yaw or heading control. A simple dual water-detector circuit connected to analogue-to-digital converter (ADC) channels on the EyeBot controller is used to detect a possible hull breach. Two probes run along the bottom of each hull, which allows the location of a leak (upper or lower hull) to be determined. The EyeBot periodically monitors whether the vehicle’s hull integrity has been compromised, and if so, immediately surfaces the vehicle. Another ADC input of the EyeBot is used for a power monitor that ensures the system voltage remains at an acceptable level. Figure 12.6 shows the Mako in operation.
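The two probe pairs reduce leak monitoring to two ADC threshold tests, one per hull. A hedged sketch of the periodic check follows; the threshold value and function names are assumptions, not the actual Mako code.

```c
/* Periodic hull-integrity check: one water-detector probe pair per hull,
 * each read through an ADC channel.  Returns which hull (if any) leaks.
 * The threshold value and names are assumptions for illustration. */
enum leak_status { NO_LEAK, LEAK_UPPER, LEAK_LOWER };

#define WET_THRESHOLD 512   /* ADC counts above this mean water contact */

enum leak_status check_hull(int adc_upper, int adc_lower)
{
    if (adc_lower > WET_THRESHOLD) return LEAK_LOWER; /* battery hull first */
    if (adc_upper > WET_THRESHOLD) return LEAK_UPPER;
    return NO_LEAK;
}
```

A mission loop would call this check periodically and trigger an emergency surfacing manoeuvre on any result other than NO_LEAK.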

12.4 AUV Design USAL

The USAL AUV uses a commercial ROV as a basis, which was heavily modified and extended (Figure 12.7). All original electronics were removed and replaced by an EyeBot controller (Figure 12.8). The hull was split and extended by a trolling motor for active diving, which allows the AUV to hover; the original ROV had to use active rudder control during forward motion in order to dive [Gerl 2006], [Drtil 2006]. Figure 12.8 shows USAL’s complete electronics subsystem.

For simplicity and cost reasons, we decided to trial infrared PSD sensors (see Section 2.6) for the USAL instead of the echo sounders used on the Mako. Since the front part of the hull was made out of clear perspex, we were able to place the PSD sensors inside the AUV hull, so we did not have to worry about waterproofing sensors and cabling. Figure 12.9 shows the results of measurements conducted in [Drtil 2006], using this sensor setup in air (through the hull), and in different grades of water quality. Assuming good water quality, as can be expected in a swimming pool, the sensor setup returns reliable results up to a distance of about 1.1 m, which is sufficient for using it as a collision avoidance sensor, but too short for using it as a navigation aid in a large pool.
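Given the roughly 1.1 m reliability limit, PSD readings are best gated before use: anything beyond the limit is treated as "no obstacle detected" rather than as a distance. A sketch of such gating (the limit comes from the measurements cited above; names are illustrative):

```c
/* Gate an infrared PSD reading for collision avoidance.  Readings beyond
 * the reliable underwater range (about 1.1 m in good water quality, per
 * the measurements in [Drtil 2006]) are treated as "no obstacle seen".
 * Returns 1 and stores the distance if the reading is usable, else 0. */
#define PSD_MAX_RELIABLE_M 1.1

int psd_obstacle(double raw_m, double *dist_m)
{
    if (raw_m <= 0.0 || raw_m > PSD_MAX_RELIABLE_M)
        return 0;               /* out of reliable range: ignore reading */
    *dist_m = raw_m;
    return 1;
}
```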


Figure 12.7: Autonomous submarine USAL

Figure 12.8: USAL controller and sensor subsystem

Figure 12.9: Underwater PSD measurement


The USAL system overview is shown in Figure 12.10. Numerous sensors are connected to the EyeBot on-board controller. These include a digital camera, four analog PSD infrared distance sensors, a digital compass, a three-axis solid-state accelerometer, and a depth pressure sensor. The Bluetooth wireless communication system can only be used when the AUV has surfaced or is diving close to the surface. The energy control subsystem contains voltage regulators and level converters, additional voltage and leakage sensors, as well as motor drivers for the stern main driving motor, the rudder servo, the diving trolling motor, and the bow thruster pump.

Figure 12.10: USAL system overview

Figure 12.11 shows the arrangement of the three thrusters and the stern rudder, together with a typical turning movement of the USAL.

Figure 12.11: USAL thrusters and typical rudder turning maneuver


12.5 References

ALFIREVICH, E. Depth and Position Sensing for an Autonomous Underwater Vehicle, B.E. Honours Thesis, The Univ. of Western Australia, Electrical and Computer Eng., supervised by T. Bräunl, 2005

AUVSI, AUVSI and ONR's 9th International Autonomous Underwater Vehicle Competition, Association for Unmanned Vehicle Systems International, http://www.auvsi.org/competitions/water.cfm, 2006

BRÄUNL, T., BOEING, A., GONZALES, L., KOESTLER, A., NGUYEN, M., PETITT, J. The Autonomous Underwater Vehicle Initiative – Project Mako, 2004 IEEE Conference on Robotics, Automation, and Mechatronics (IEEE-RAM), Dec. 2004, Singapore, pp. 446-451

DRTIL, M. Electronics and Sensor Design of an Autonomous Underwater Vehicle, Diploma Thesis, The Univ. of Western Australia and FH Koblenz, Electrical and Computer Eng., supervised by T. Bräunl, 2006

GERL, B. Development of an Autonomous Underwater Vehicle in an Interdisciplinary Context, Diploma Thesis, The Univ. of Western Australia and Technical Univ. München, Electrical and Computer Eng., supervised by T. Bräunl, 2006

GONZALEZ, L. Design, Modelling and Control of an Autonomous Underwater Vehicle, B.E. Honours Thesis, The Univ. of Western Australia, Electrical and Computer Eng., supervised by T. Bräunl, 2004


13 Simulation Systems

Simulation is an essential part of robotics, whether for stationary manipulators, mobile robots, or complete manufacturing processes in factory automation. We have developed a number of robot simulation systems over the years, two of which will be presented in this chapter.

EyeSim is a simulation system for multiple interacting mobile robots. Environment modeling is 2½D, while 3D sceneries are generated with synthetic images for each robot’s camera view of the scene. Robot application programs are compiled into dynamic link libraries that are loaded by the simulator. The system integrates the robot console with text and color graphics, sensors, and actuators at the level of a velocity/position control (vω) driving interface.

SubSim is a simulation system for autonomous underwater vehicles (AUVs) and contains a complete 3D physics simulation library for rigid-body systems and water dynamics. It conducts simulation at a much lower level than the driving simulator EyeSim and incurs a significantly higher computational cost, but it can simulate any user-defined AUV: the number and positions of motors and sensors can be freely chosen in an XML description file, and the physics simulation engine will calculate the correct resulting AUV motion.
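A SubSim vehicle description might look like the fragment below. The element and attribute names here are invented for illustration only; the actual SubSim XML schema may differ.

```xml
<!-- Hypothetical AUV description: element and attribute names are
     illustrative only, not the actual SubSim schema -->
<auv name="example-sub">
  <motor  id="stern"   pos="-0.40 0.00 0.00" dir="1 0 0"/>
  <motor  id="diving"  pos=" 0.00 0.00 0.05" dir="0 0 1"/>
  <sensor type="psd"     pos="0.35 0.00 0.00" dir="1 0 0"/>
  <sensor type="compass" pos="0.00 0.00 0.02"/>
</auv>
```

The point of such a file is that the physics engine derives the resulting vehicle motion purely from the listed actuator positions and directions, without any vehicle-specific code.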

Both simulation systems are freely available over the Internet and can be downloaded from:

http://robotics.ee.uwa.edu.au/eyebot/ftp/ (EyeSim)
http://robotics.ee.uwa.edu.au/auv/ftp/ (SubSim)

13.1 Mobile Robot Simulation

Simulators can be on different levels

Quite a number of mobile robot simulators have been developed in recent years, many of them being available as freeware or shareware. The level of simulation differs considerably between simulators. Some of them operate at a very high level and let the user specify robot behaviors or plans. Others operate at a very low level and can be used to examine the exact path trajectory driven by a robot.


We developed the first mobile robot simulation system employing synthetic vision as a feedback sensor in 1996, published in [Bräunl, Stolz 1997]. This system was limited to a single robot with on-board vision system and required the use of special rendering hardware.

Both simulation systems presented below have the capability of using generated synthetic images as feedback for robot control programs. This is a significant achievement, since it allows the simulation of high-level robot application programs including image processing. While EyeSim is a multi-robot simulation system, SubSim is currently limited to a single AUV.

Most other mobile robot simulation systems, for example Saphira [Konolige 2001] or 3D7 [Trieb 1996], are limited to simpler sensors such as sonar, infrared, or laser sensors, and cannot deal with vision systems. [Matsumoto et al. 1999] implemented a vision simulation system very similar to our earlier system [Bräunl, Stolz 1997]. The Webots simulation environment [Wang, Tan, Prahlad 2000] models, among others, the Khepera mobile robot, but has only limited vision capabilities. An interesting outlook on vision simulation in motion planning for virtual humans is given by [Kuffner, Latombe 1999].

13.2 EyeSim Simulation System

All EyeBot programs run on EyeSim

The goal of the EyeSim simulator was to develop a tool which allows the construction of simulated robot environments and the testing of mobile robot programs before they are used on real robots. A simulation environment like this is especially useful for programming techniques such as neural networks and genetic algorithms, which have to gather large amounts of training data that are sometimes difficult to obtain from actual vehicle runs. Potentially harmful collisions, for example due to untrained neural networks or programming errors, are not a concern in the simulation environment. Also, the simulation can be executed in a “perfect” environment with no sensor or actuator errors – or with certain error levels for individual sensors. This allows us to thoroughly test a robot application program’s robustness in a near real-world scenario [Bräunl, Koestler, Waggershauser 2005], [Koestler, Bräunl 2004].

The simulation library has the same interface as the RoBIOS library (see Appendix B.5). This allows the creation of robot application programs for either real robots or simulation, simply by re-compiling. No changes at all are required in the application program when moving between the real robot and simulation or vice versa.

The technique we used for implementing this simulation differs from most existing robot simulation systems, which run the simulation as a separate program or process that communicates with the application by some message-passing scheme. Instead, we implemented the whole simulator as a collection of system library functions which replace the sensor/actuator functions called from a robot application program. The simulator has an independent main program, while application programs are compiled into dynamic link libraries and linked at run-time.
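The run-time linking step can be pictured with the standard POSIX dlopen mechanism. The sketch below assumes the application exports a main-like entry point; the library name and entry symbol are invented for illustration and are not EyeSim's actual conventions.

```c
#include <dlfcn.h>
#include <stdio.h>

/* Load a robot application compiled as a shared library and invoke its
 * entry point -- the general scheme EyeSim uses for run-time linking.
 * The entry symbol "robot_main" is invented for illustration. */
int run_robot_app(const char *libname)
{
    void *handle = dlopen(libname, RTLD_NOW);
    if (!handle) {
        fprintf(stderr, "cannot load %s: %s\n", libname, dlerror());
        return -1;
    }
    int (*entry)(void) = (int (*)(void))dlsym(handle, "robot_main");
    int result = entry ? entry() : -1;
    dlclose(handle);
    return result;
}
```

Because the simulator rather than the application owns the main program, it can load several such libraries and run one per simulated robot.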


Figure 13.1: System concept (real world: the robot application program is linked at compile-time to the RoBIOS library; simulation: the application is linked at run-time to the EyeSim library; multi-robot simulation: one thread per robot application is created at run-time, with parameter and environment files loaded at run-time)

As shown in Figure 13.1 (top), a robot application program is compiled and linked to the RoBIOS library, in order to be executed on a real robot system. The application program makes system calls to individual RoBIOS library functions, for example for activating driving motors or reading sensor input. In the simulation scenario, shown in Figure 13.1 (middle), the compiled application library is now linked to the EyeSim simulator instead. The library part of the simulator implements exactly the same functions, but now activates the screen displays of robot console and robot driving environment.

In case of a multi-robot simulation (Figure 13.1 bottom), the compilation and linking process is identical to the single-robot simulation. However, at run-time, individual threads are created to simulate each of the robots concurrently. Each thread executes the robot application program individually on local data, but all threads share the common environment through which they interact.

The EyeSim user interface is split into two parts, a robot console per simulated robot and a common driving environment (Figure 13.2). The robot console models the EyeBot controller, which comprises a display and input buttons as a robot user interface. It allows direct interaction with the robot by pressing buttons and printing status messages, data values, or graphics on the screen. In the simulation environment, each robot has an individual console, while all robots share the driving environment, which is displayed as a 3D view of the robot driving area. The simulator has been implemented using the OpenGL version Coin3D [Coin3D 2006], which allows panning, rotating, and zooming in 3D with familiar controls. Robot models may be supplied as Milkshape MS3D files [Milkshape 2006]. This allows the use of arbitrary 3D models of a robot, or even different models for different robots in the same simulation. However, these graphics object descriptions are for display purposes only; the geometric simulation model of a robot is always a cylinder.

Figure 13.2: EyeSim interface

Simulation execution can be halted via a “Pause” button and robots can be relocated or turned by a menu function. All parameter settings, including error levels, can be set via menu selections. All active robots are listed at the right of the robot environment window. Robots are assigned unique id-numbers and can run different programs.

Sensor–actuator modeling

Each robot of the EyeBot family is typically equipped with a number of actuators:

DC driving motors with differential steering

Camera panning motor

Object manipulation motor (“kicker”)

and sensors:

Shaft encoders

Tactile bumpers

Infrared proximity sensors

Infrared distance measurement sensors

Digital grayscale or color camera

The real robot’s RoBIOS operating system provides library routines for all of these sensor types. For the EyeSim simulator, we selected a high-level subset to work with.


Driving interfaces

Simulation of tactile sensors

Simulation of infrared sensors

Synthetic camera images

Error models

Identical to the RoBIOS operating system, two driving interfaces at different abstraction levels have been implemented:

High-level linear and rotational velocity interface (vω)

This interface provides simple driving commands for user programs without the need for specifying individual motor speeds. In addition, this simplifies the simulation process and speeds up execution.

Low-level motor interface

EyeSim does not include a full physics engine, so it is not possible to place motors at arbitrary positions on a robot. However, the following three main drive mechanisms have been implemented and can be specified in the robot’s parameter file. Direct motor commands can then be given during the simulation.

Differential drive

Ackermann drive

Mecanum wheel drive

All robots’ positions and orientations are updated by a periodic process according to their last driving commands and respective current velocities.

Tactile sensors are simulated by computing intersections between the robot (modeled as a cylinder) and any obstacle in the environment (modeled as line segments) or another robot. Bump sensors can be configured to cover a certain sector range around the robot; that way, several bumpers can be used to determine the contact position. The VWStalled function of the driving interface can also be used to detect a collision, which causes the robot’s wheels to stall.
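The intersection test behind the simulated bumpers reduces to checking whether the distance from the robot's centre to a wall segment is below the cylinder radius. A minimal point-to-segment sketch of that test (illustrative, not EyeSim's actual code):

```c
#include <math.h>

/* Tactile-sensor test following EyeSim's model: the robot is a cylinder
 * of radius r centred at (px,py); a wall is the segment (x1,y1)-(x2,y2).
 * Returns 1 on intersection.  A sketch, not the actual EyeSim source. */
int cylinder_hits_segment(double px, double py, double r,
                          double x1, double y1, double x2, double y2)
{
    double dx = x2 - x1, dy = y2 - y1;
    double len2 = dx * dx + dy * dy;
    /* parameter of the closest point on the segment, clamped to [0,1] */
    double t = len2 > 0.0 ? ((px - x1) * dx + (py - y1) * dy) / len2 : 0.0;
    if (t < 0.0) t = 0.0;
    if (t > 1.0) t = 1.0;
    double cx = x1 + t * dx - px, cy = y1 + t * dy - py;
    return cx * cx + cy * cy <= r * r;   /* compare squared distances */
}
```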

The robot uses two different kinds of infrared sensors. One is a binary sensor (Proxy) which is activated if the distance to an obstacle is below a certain threshold; the other is a position sensitive device (PSD), which returns the distance to the nearest obstacle. Sensors can be freely positioned and oriented around a robot as specified in the robot parameter file. This allows testing and comparing the performance of different sensor placements. For the simulation process, the distance from each infrared sensor, at its current position and orientation, to the nearest obstacle is determined.

The simulation system generates artificially rendered camera images from each robot’s point of view. All robot and environment data is re-modeled in the object-oriented format required by the OpenInventor library and passed to the image generator of the Coin3D library [Coin3D 2006]. This allows testing, debugging, and optimizing programs for groups of robots including vision. EyeSim generates an individual scene view for each of the simulated robots; each robot receives its own camera image (Figure 13.3) and can use it for subsequent image processing tasks as in [Leclercq, Bräunl 2001].

The simulator includes different error models, which allow either running a simulation in a “perfect world” or setting an error rate for actuators, sensors, or communication for a more realistic simulation. This can be used for testing a robot application program’s robustness against sensor noise. The error models
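A typical sensor error model perturbs the true value with bounded noise at a configurable level. The sketch below illustrates the idea with a uniform relative error; EyeSim's actual error models may differ in distribution and parameters.

```c
#include <stdlib.h>

/* Add a configurable uniform error to a true sensor value: with error
 * level 0.0 the reading is perfect ("perfect world"); otherwise it
 * deviates by up to +/- level * |value|.  A sketch of the idea only --
 * EyeSim's actual error models may differ. */
double noisy_reading(double true_value, double level)
{
    /* rand() scaled to the interval [-1, 1] */
    double u = 2.0 * ((double)rand() / RAND_MAX) - 1.0;
    return true_value * (1.0 + level * u);
}
```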
