
© 2008, QNX Software Systems GmbH & Co. KG.
Driver module
The network driver module is responsible for managing the details of a particular network adaptor (e.g. an NE-2000 compatible Ethernet controller). Each driver is packaged as a shared object and installs into the io-pkt* component.
Loading and unloading a driver
Once io-pkt* is running, you can dynamically load drivers at the command line using the mount command. For example, the following commands start io-pkt-v6-hc and then mount the driver for the Broadcom 57xx chip set adapter:
io-pkt-v6-hc &
mount -T io-pkt devnp-bge.so
All network device drivers are shared objects whose names are of the form devnp-driver.so.
The io-pkt* manager can also load legacy io-net drivers. The names of these drivers start with devn-.
Once the shared object is loaded, io-pkt* will then initialize it. The driver and io-pkt* are then effectively bound together — the driver will call into io-pkt* (for example when packets arrive from the interface) and io-pkt* will call into the driver (for example when packets need to be sent from an application to the interface).
To unload a legacy io-net driver, you can use the umount command. For example:
umount /dev/io-pkt/en0
To unload a new-style driver or a legacy io-net driver, use the ifconfig destroy command:
ifconfig bge0 destroy
For more information on network device drivers, see their individual utility pages (devn-*, devnp-*) in the Utilities Reference.
October 16, 2008
Chapter 11 • Networking Architecture 193

Chapter 12
Native Networking (Qnet)
In this chapter. . .
QNX Neutrino distributed 197
Name resolution and lookup 198
Redundant Qnet: Quality of Service (QoS) and multiple paths 202
Examples 206
Custom device drivers 207

QNX Neutrino distributed
Earlier in this manual, we described message passing in the context of a single node (see the Interprocess Communication (IPC) chapter). But the true power of QNX Neutrino lies in its ability to take the message-passing paradigm and extend it transparently over a network of microkernels.
This chapter describes QNX Neutrino native networking (via the Qnet protocol). For information on TCP/IP networking, please refer to the next chapter.
At the heart of QNX Neutrino native networking is the Qnet protocol, which is deployed as a network of tightly coupled trusted machines. Qnet lets these machines share their resources efficiently with little overhead. Using Qnet, you can use the standard OS utilities (cp, mv, and so on) to manipulate files anywhere on the Qnet network as if they were on your machine. In addition, the Qnet protocol doesn’t do any authentication of remote requests; files are protected by the normal permissions that apply to users and groups. Besides files, you can also access and start/stop processes, including managers, that reside on any machine on the Qnet network.
The distributed processing power of Qnet lets you do the following tasks efficiently:
• Access your remote filesystem.
• Scale your application with unprecedented ease.
• Write applications using a collection of cooperating processes that communicate transparently with each other using Neutrino message-passing.
• Extend your application easily beyond a single processor or SMP machine to several single-processor machines and distribute your processes among those CPUs.
• Divide your large application into several processes, where each process can perform different functions. These processes coordinate their work using message passing.
• Take advantage of Qnet’s inherent remote procedure call functionality.
Moreover, since Qnet extends Neutrino message passing over the network, other forms of IPC (e.g. signals, message queues, named semaphores) also work over the network.
To understand how network-wide IPC works, consider two processes that wish to communicate with each other: a client process and a server process (in this case, the serial port manager process). In the single-node case, the client simply calls open(), read(), write(), etc. As we’ll see shortly, a high-level POSIX call such as open() actually entails message-passing kernel calls “underneath” (ConnectAttach(), MsgSend(), etc.). But the client doesn’t need to concern itself with those functions; it simply calls open().
fd = open("/dev/ser1", O_RDWR...); /* Open a serial device */

Now consider the case of a simple network with two machines — one contains the client process, the other contains the server process.
[Figure: two machines, lab1 (Client) and lab2 (Server), connected by a network]
A simple network where the client and server reside on separate machines.
The code required for client-server communication is identical to the code in the single-node case, but with one important exception: the pathname. The pathname will contain a prefix that specifies the node that the service (/dev/ser1) resides on. As we’ll see later, this prefix will be translated into a node descriptor for the lower-level ConnectAttach() kernel call that will take place. Each node in the network is assigned a node descriptor, which serves as the only visible means to determine whether the OS is running as a network or standalone.
For more information on node descriptors, see the Transparent Distributed Processing with Qnet chapter of the Neutrino Programmer’s Guide.
Name resolution and lookup
When you run Qnet, the pathname space of all the nodes in your Qnet network is added to yours. Recall that a pathname is a symbolic name that tells a program where to find a file within the directory hierarchy based at root (/).
The pathname space of remote nodes will appear under the prefix /net (the directory created by the Qnet protocol manager, lsm-qnet.so, by default).
For example, remote node1 would appear as:
/net/node1/dev/socket
/net/node1/dev/ser1
/net/node1/home
/net/node1/bin
....
So with Qnet running, you can now open pathnames (files or managers) on other remote Qnet nodes, just as you open files locally on your own node. This means you
can access regular files or manager processes on other Qnet nodes as if they were executing on your local node.
Recall our open() example above. If you wanted to open a serial device on node1 instead of on your local machine, you simply specify the path:
fd = open("/net/node1/dev/ser1", O_RDWR...); /* Open a serial device on node1 */
For client-server communications, how does the client know what node descriptor to use for the server?
The client uses the filesystem’s pathname space to “look up” the server’s address. In the single-machine case, the result of that lookup will be a node descriptor, a process ID, and a channel ID. In the networked case, the results are the same — the only difference will be the value of the node descriptor.
If node descriptor is:     Then the server is:
0 (or ND_LOCAL_NODE)       Local (i.e. “this node”)
Nonzero                    Remote
File descriptor (connection ID)
The practical result in both the local and networked case is that when the client connects to the server, the client gets a file descriptor (or connection ID in the case of kernel calls such as ConnectAttach()). This file descriptor is then used for all subsequent message-passing operations. Note that from the client’s perspective, the file descriptor is identical for both the local and networked case.
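Because the descriptor behaves identically either way, I/O code doesn’t need to know where its target lives. A minimal sketch (the helper name is invented for illustration; the remote /net path shown in the comment is the same form used in the examples above):

```c
#include <fcntl.h>
#include <unistd.h>

/* Read up to n bytes from a pathname. The caller never needs to know
 * whether the path is local (e.g. /dev/ser1) or remote over Qnet
 * (e.g. /net/lab2/dev/ser1) -- the file descriptor works the same way. */
ssize_t read_first_bytes(const char *path, char *buf, size_t n)
{
    int fd = open(path, O_RDONLY);
    if (fd == -1)
        return -1;
    ssize_t got = read(fd, buf, n);  /* same message-passing path either way */
    close(fd);
    return got;
}
```

On a Qnet node, passing "/net/lab2/dev/ser1" to this function would exercise exactly the same code as a local path.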
Behind a simple open()
Let’s return to our open() example. Suppose a client on one node (lab1) wishes to use the serial port (/dev/ser1) on another node (lab2). The client will effectively perform an open() on the pathname /net/lab2/dev/ser1.
The following diagram shows the steps involved when the client open()’s
/net/lab2/dev/ser1:

[Figure: the four numbered steps of the message pass between lab1 (Client, Process manager, Qnet) and lab2 (Process manager, Qnet, Serial driver)]
A client-server message pass across the network.
Here are the interactions:
1. A message is sent from the client to its local process manager, effectively asking who should be contacted to resolve the pathname /net/lab2/dev/ser1.
Since the native network manager (lsm-qnet.so) has taken over the entire /net namespace, the process manager returns a redirect message, saying that the client should contact the local network manager for more information.
2. The client then sends a message to the local network manager, again asking who should be contacted to resolve the pathname.
The local network manager then replies with another redirect message, giving the node descriptor, process ID, and channel ID of the process manager on node lab2 — effectively deferring the resolution of the request to node lab2.
3. The client then creates a connection to the process manager on node lab2, once again asking who should be contacted to resolve the pathname.
The process manager on node lab2 returns another redirect, this time with the node descriptor, channel ID, and process ID of the serial driver on its own node.
4. The client creates a connection to the serial driver on node lab2, and finally gets a connection ID that it can then use for subsequent message-passing operations.
After this point, from the client’s perspective, message passing to the connection ID is identical to the local case. Note that all further message operations are now direct between the client and server.
The key thing to keep in mind here is that the client isn’t aware of the operations taking place; these are all handled by the POSIX open() call. As far as the client is concerned, it performs an open() and gets back a file descriptor (or an error indication).
In each subsequent name-resolution step, the request from the client is stripped of already-resolved name components; this occurs automagically within the resource manager framework. This means that in step 2 above, the relevant part of the request is lab2/dev/ser1 from the perspective of the local network manager. In step 3, the relevant part of the request has been stripped to just dev/ser1, because that’s all that lab2’s process manager needs to know. Finally, in step 4, the relevant part of the request is simply ser1, because that’s all the serial driver needs to know.
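The progressive stripping described above amounts to repeatedly discarding the leading pathname component once it has been resolved. A hedged sketch (the helper name is invented, not Qnet’s internal API):

```c
#include <string.h>

/* Return a pointer past the first pathname component, mimicking how each
 * resolution step strips the part of the name it has already handled. */
const char *strip_component(const char *path)
{
    const char *slash = strchr(path, '/');
    return slash ? slash + 1 : path + strlen(path);
}
```

Applied repeatedly to "net/lab2/dev/ser1", this yields "lab2/dev/ser1", then "dev/ser1", then "ser1" — the views seen by the local network manager, lab2’s process manager, and the serial driver, respectively.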
Global Name Service (GNS)
In the examples shown so far, remote services or files are located on known nodes or at known pathnames. For example, the serial port on lab1 is found at
/net/lab1/dev/ser1.
GNS allows you to locate services via an arbitrary name wherever the service is located, whether on the local system or on a remote node. For example, if you wanted to locate a modem on the network, you could simply look for the name “modem.” This would cause the GNS server to locate the “modem” service, instead of using a static path such as /net/lab1/dev/ser1. The GNS server can be deployed such that it services all or a portion of your Qnet nodes. And you can have redundant GNS servers.
Network naming
As mentioned earlier, the pathname prefix /net is the most common name that lsm-qnet.so uses. In resolving names in a network-wide pathname space, the following terms come into play:
node name
A character string that identifies the node you’re talking to. Note that a node name can’t contain slashes or dots. In the example above, we used lab2 as one of our node names. The default is fetched via confstr() with the _CS_HOSTNAME parameter.
node domain
A character string that’s “tacked” onto the node name by lsm-qnet.so. Together the node name and node domain must form a string that’s unique for all nodes that are talking to each other. The default is fetched via confstr() with the _CS_DOMAIN parameter.
fully qualified node name (FQNN)
The string formed by tacking the node name and node domain together. For example, if the node name is lab2 and the node domain name is qnx.com, the resulting FQNN would be: lab2.qnx.com.
network directory
A directory in the pathname space implemented by lsm-qnet.so. Each network directory (there can be more than one on a node) has an associated node domain. The default is /net, as used in the examples in this chapter.
name resolution
The process by which lsm-qnet.so converts an FQNN to a list of destination addresses that the transport layer knows how to get to.
name resolver
A piece of code that implements one method of converting an FQNN to a list of destination addresses. Each network directory has a list of name resolvers that are applied in turn to attempt to resolve the FQNN. The default is en_ionet (see the next section).
Quality of Service (QoS)
A definition of connectivity between two nodes. The default QoS is loadbalance (see the section on QoS later in this chapter).
Resolvers
The following resolvers are built into the network manager:
• en_ionet — Broadcast requests for name resolution on the LAN (similar to the TCP/IP ARP protocol). This is the default.
• dns — Take the node name, add a dot (.) followed by the node domain, and send the result to the TCP/IP gethostbyname() function.
• file — Search for accessible nodes, including the relevant network address, in a static file.
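The dns resolver’s transformation is mechanical: join the node name and node domain with a dot, producing the FQNN that is then handed to gethostbyname(). A sketch of just that string-building step (the function name is an assumption for illustration, not Qnet’s actual internal API):

```c
#include <stdio.h>

/* Build an FQNN ("lab2" + "qnx.com" -> "lab2.qnx.com"), as the dns
 * resolver does before passing the result to gethostbyname(). */
void build_fqnn(char *out, size_t outsz, const char *name, const char *domain)
{
    snprintf(out, outsz, "%s.%s", name, domain);
}
```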
Redundant Qnet: Quality of Service (QoS) and multiple paths
Quality of Service (QoS) is an issue that often arises in high-availability networks as well as realtime control systems. In the Qnet context, QoS really boils down to transmission media selection — in a system with two or more network interfaces, Qnet will choose which one to use according to the policy you specify.
If you have only a single network interface, the QoS policies don’t apply at all.
QoS policies
Qnet supports transmission over multiple networks and provides the following policies for specifying how Qnet should select a network interface for transmission: