
traffic loads and reduce congestion. Note, however, that switches are not always the best answer to heavy traffic and a need for greater speeds. In a case in which an enterprise-wide Ethernet LAN is generally overtaxed, you should consider upgrading the network’s design or infrastructure.
Ethernet Frames
NET+ 1.2
You have already been introduced to data frames, the packages that carry higher-layer data and control information that enable data to reach their destinations without errors and in the correct sequence. Ethernet networks may use one (or a combination) of four kinds of data frames: Ethernet_802.2 (“Raw”), Ethernet_802.3 (“Novell proprietary”), Ethernet_II (“DIX”), and Ethernet_SNAP. This variety of Ethernet frame types came about as different organizations released and revised Ethernet standards during the 1980s, changing as LAN technology evolved. Each frame type differs slightly in the way it codes and decodes packets of data traveling from one device to another.
Physical layer standards, such as 10BASE-T or 100BASE-TX, have no effect on the type of framing that occurs in the Data Link layer. Thus, Ethernet frame types have no relation to the topology or cabling characteristics of the network. Framing also takes place independently of the higher-level layers. Theoretically, all frame types could carry any one of many higher-layer protocols. For example, a single Ethernet_II data frame may carry either TCP/IP or AppleTalk data (but not both simultaneously). But as you’ll learn in the following discussion, not all frame types are well suited to carrying all kinds of traffic.
Using and Configuring Frames
You can use multiple frame types on a network, but you cannot expect interoperability between the frame types. For example, in a mixed environment of NetWare 4.11 and UNIX servers, your network might support both Ethernet_802.2 and Ethernet_II frames. A workstation connecting to the NetWare 4.11 server might be configured to use the Ethernet_802.2 frame, whereas a workstation connecting to the UNIX server would likely use the Ethernet_II frame.
A node’s Data Link layer services must be properly configured to expect the types of frames it might receive. If a node receives an unfamiliar frame type, it will not be able to decode the data contained in the frame, nor will it be able to communicate with nodes configured to use that frame type. For this reason, it is important for LAN administrators to ensure that all devices use the same, correct frame type. These days almost all networks use the Ethernet_II frame type. But in the 1990s, before this uniformity evolved, the use of different NOSs or legacy hardware often required managing devices to interpret multiple frame types.
Frame types are typically specified through a device’s NIC configuration software. To make matters easier, most NICs can automatically sense what types of frames are running on a network and adjust themselves to that specification. This feature is called auto-detect, or autosense. Workstations, networked printers, and servers added to an existing network can all take advantage of auto-detection. Even if your devices use the auto-detect feature, you should nevertheless know what frame types are running on your network so that you can troubleshoot connectivity problems. As easy as it is to configure, the auto-detect feature is not infallible.

Frame Fields
NET+ 1.2
All Ethernet frame types share many fields in common. For example, every Ethernet frame contains a 7-byte preamble and a 1-byte start-of-frame delimiter. The preamble signals to the receiving node that data is incoming and indicates when the data flow is about to begin. The SFD (start-of-frame delimiter) identifies where the data field begins. Preambles and SFDs are not included, however, when calculating a frame’s total size.
Each Ethernet frame also contains a 14-byte header, which includes a destination address, a source address, and an additional field that varies in function and size, depending on the frame type. The destination address and source address fields are each 6 bytes long. The destination address identifies the recipient of the data frame, and the source address identifies the network node that originally sent the data. Recall that any network device can be identified by its physical address, also known as a hardware address or Media Access Control (MAC) address. The source address and destination address fields of an Ethernet frame use the MAC address to identify where data originated and where it should be delivered.
Also, all Ethernet frames contain a 4-byte FCS (Frame Check Sequence) field. Recall that the function of the FCS field is to ensure that the data at the destination exactly matches the data issued from the source using the CRC (Cyclic Redundancy Check) algorithm. Together, the FCS and the header make up the 18-byte “frame” for the data. The data portion of an Ethernet frame may contain from 46 to 1500 bytes of information (and recall that this includes the Network layer datagram). If fewer than 46 bytes of data are supplied by the higher layers, the source node fills out the data portion with extra bytes until it totals 46 bytes. The extra bytes are known as padding and have no significance other than to fill out the frame. They do not affect the data being transmitted.
Adding the 18-byte framing portion plus the smallest possible data field of 46 bytes equals the minimum Ethernet frame size of 64 bytes. Adding the framing portion plus the largest possible data field of 1500 bytes equals the maximum Ethernet frame size of 1518 bytes. No matter what frame type is used, the size range of 64 to 1518 total bytes applies to all Ethernet frames.
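This byte arithmetic can be checked with a short calculation. The following Python sketch is purely illustrative (the constant and function names are invented for this example); it applies the header, padding, and FCS sizes described above, excluding the preamble and SFD, to show how the 64-byte minimum and 1518-byte maximum arise.

    # Illustration of Ethernet frame-size arithmetic (preamble and SFD excluded,
    # since they are not counted toward frame size).
    HEADER_BYTES = 14      # destination (6) + source (6) + third header field (2)
    FCS_BYTES = 4          # Frame Check Sequence
    MIN_DATA = 46          # smaller payloads are padded up to this size
    MAX_DATA = 1500        # largest allowed data field

    def frame_size(payload_bytes):
        """Return (padding, total frame size) for a given payload length."""
        if payload_bytes > MAX_DATA:
            raise ValueError("payload exceeds the 1500-byte maximum data field")
        padding = max(0, MIN_DATA - payload_bytes)
        return padding, HEADER_BYTES + payload_bytes + padding + FCS_BYTES

    print(frame_size(10))    # (36, 64)   -> padded up to the 64-byte minimum frame
    print(frame_size(1500))  # (0, 1518)  -> the 1518-byte maximum frame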
Because of the overhead present in each frame and the time required to enact CSMA/CD, the use of larger frame sizes on a network generally results in faster throughput. To some extent, you cannot control your network’s frame sizes. You can, however, help improve network performance by properly managing frames. For example, network administrators should strive to minimize the number of broadcast frames on their networks, because broadcast frames tend to be very small and, therefore, inefficient. Also, running more than one frame type on the same network can result in inefficiencies, because it requires devices to examine each incoming frame to determine its type. Given a choice, it’s most efficient to support only one frame type on a network.
Ethernet_II (“DIX”)
Ethernet_II is an Ethernet frame type developed by DEC, Intel, and Xerox (abbreviated as DIX) before the IEEE began to standardize Ethernet. The Ethernet_II frame type is similar to the older Ethernet_802.3 and Ethernet_802.2 frame types, but differs in one field. Where the other types contain a 2-byte length field, the Ethernet_II frame type contains a 2-byte type field. This type field identifies the Network layer protocol (such as IP, ARP, RARP, or IPX) contained in the frame. For example, if a frame were carrying an IP datagram, its type field would contain “0x0800,” the type code for IP. Because Ethernet_802.2 and Ethernet_802.3 frames do not contain a type field, they are only capable of transmitting data over a single Network layer protocol (for example, only IP and not both IP and ARP) across the network. For TCP/IP networks, which commonly use multiple Network layer protocols, these frame types are unsuitable.
Like Ethernet_II, the Ethernet_SNAP frame type also provides a type field. However, the Ethernet_SNAP standard calls for additional control fields, so that compared to Ethernet_II frames, the Ethernet_SNAP frames allow less room for data. Therefore, because of its support for multiple Network layer protocols and because it uses fewer bytes as overhead, Ethernet_II is the frame type most commonly used on contemporary Ethernet networks. Figure 6-13 depicts an Ethernet_II frame.
FIGURE 6-13 Ethernet_II (“DIX”) frame
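As a rough illustration of the field layout shown in Figure 6-13, the following Python sketch splits the 14-byte Ethernet_II header into its destination address, source address, and type field. The sample bytes and the function name are invented for this example; only the field sizes and the 0x0800 type code for IP come from the discussion above.

    import struct

    def parse_ethernet_ii(frame: bytes):
        """Split an Ethernet_II frame into destination MAC, source MAC, type, and data."""
        dest, src, eth_type = struct.unpack("!6s6sH", frame[:14])
        as_mac = lambda b: ":".join(f"{octet:02x}" for octet in b)
        return as_mac(dest), as_mac(src), hex(eth_type), frame[14:]

    # A made-up 14-byte header followed by payload; 0x0800 marks an IP datagram.
    sample = bytes.fromhex("ffffffffffff" "001122334455" "0800") + b"payload..."
    print(parse_ethernet_ii(sample))
    # ('ff:ff:ff:ff:ff:ff', '00:11:22:33:44:55', '0x800', b'payload...')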
PoE (Power over Ethernet)
Recently, IEEE has finalized the 802.3af standard, which specifies a method for supplying electrical power over Ethernet connections, also known as PoE (Power over Ethernet). Although the standard is new, the concept is not. In fact, your home telephone receives power from the telephone company over the lines that come into your home. This power is necessary for dial tone and ringing. On an Ethernet network, carrying power over signaling connections can be useful for nodes that are far from traditional power receptacles or need a constant, reliable power source. For example, a wireless access point at an outdoor theater, a telephone used to receive digitized voice signals, an Internet gaming station in the center of a mall, or a critical router at the core of a network’s backbone can all benefit from PoE.
The PoE standard specifies two types of devices: power sourcing equipment (PSE) and PDs (powered devices). Power sourcing equipment (PSE) refers to the device that supplies the power; usually this device depends on backup power sources (in other words, not the electrical grid maintained by utilities). Powered devices (PDs) are those that receive the power from the PSE. PoE requires CAT 5 or better copper cable. In the cable, electric current may run over an unused pair of wires or over the pair of wires used for data transmission in a 10BASE-T, 100BASE-TX, or 1000BASE-T network. The standard allows for both approaches; however, on a single network, the choice of current-carrying pairs should be consistent between all PSE and PDs. Not all end nodes are capable of receiving PoE. The IEEE standard has accounted for that possibility by requiring all PSE to first determine whether a node is PoE-capable before attempting to supply it with power. That means that PoE is compatible with current 802.3 installations. No special modifications need to be made to existing networks before adding this new feature.
Token Ring
NET+ 1.2
Now that you have learned about the many forms of Ethernet, you are ready to learn about Token Ring, a less common, but still important network access method. Token Ring is a network technology first developed by IBM in the 1980s. In the early 1990s, the Token Ring architecture competed strongly with Ethernet to be the most popular access method. Since that time, the economics, speed, and reliability of Ethernet have improved, leaving Token Ring behind. Because IBM developed Token Ring, a few IBM-centric IT Departments continue to use it. Other network managers have changed their former Token Ring networks into Ethernet networks.
Token Ring networks have traditionally been more expensive to implement than Ethernet networks. Proponents of the Token Ring technology argue that, although some of its connectivity hardware is more expensive, its reliability results in less downtime and lower network management costs than Ethernet. On a practical level, Token Ring has probably lost the battle for superiority because its developers were slower to develop high-speed standards. Token Ring networks can run at either 4, 16, or 100 Mbps. The 100-Mbps Token Ring standard, finalized in 1999, is known as HSTR (High-Speed Token Ring). HSTR can use either twisted-pair or fiber-optic cable as its transmission medium. Although HSTR is as reliable and efficient as Ethernet, it is still less common because of its higher cost and lagging speed.
Token Ring networks use the token-passing routine and a star-ring hybrid physical topology. In token passing, a 3-byte packet, called a token, is transmitted from one node to another in a circular fashion around the ring. When a station has something to send, it picks up the token, changes it to a frame, and then adds the header, information, and trailer fields. The header includes the address of the destination node. All nodes read the frame as it traverses the ring to determine whether they are the intended recipient of the message. If they are, they pick up the data, then retransmit the frame to the next station on the ring. When the frame finally reaches the originating station, the originating workstation reissues a free token that can then be used by another station. The token-passing control scheme avoids the possibility of collisions. This fact makes Token Ring more reliable and efficient than Ethernet. Unlike CSMA/CD, token passing also does not impose distance limitations on the length of a LAN segment.
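To make the token-passing sequence concrete, here is a minimal Python sketch, not an implementation of the actual 802.5 protocol. The station names and data are invented; the sketch simply shows a frame circulating from the sender around the ring, the destination reading the data, and the originator reissuing a free token when the frame returns.

    stations = ["A", "B", "C", "D"]          # nodes arranged in a logical ring

    def send(ring, sender, destination, data):
        """Circulate a frame around the ring, then reissue a free token."""
        frame = {"src": sender, "dest": destination, "data": data}
        start = ring.index(sender)
        for hop in range(1, len(ring) + 1):           # frame travels node to node
            node = ring[(start + hop) % len(ring)]
            if node == frame["dest"]:
                print(f"{node} reads data from {frame['src']}: {frame['data']}")
            if node == frame["src"]:                  # frame is back at its originator
                print(f"{sender} reissues a free token")
                return "free token"

    send(stations, "A", "C", "status report")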
On a Token Ring network, one workstation, called the active monitor, acts as the controller for token passing. Specifically, the active monitor maintains the timing for token passing, monitors token and frame transmission, detects lost tokens, and corrects errors when a timing error or other disruption occurs. Only one workstation on the ring can act as the active monitor at any given time.
NOTE
The Token Ring architecture is often mistakenly described as a pure ring topology. In fact, its logical topology is a ring. However, its physical topology is a star-ring hybrid in which data circulate in a ring fashion, but the layout of the network is a star.
IEEE standard 802.5 describes the specifications for Token Ring technology. Token Ring networks transmit data at either 4, 16, or 100 Mbps over shielded or unshielded twisted-pair wiring. You may have as many as 255 addressable stations on a Token Ring network that uses shielded twisted-pair or as many as 72 addressable stations on one that uses unshielded twisted-pair. All Token Ring connections rely on a NIC that taps into the network through a MAU (Multistation Access Unit), Token Ring’s equivalent of a hub. NICs can be designed and configured to run specifically on 4-, 16-, or 100-Mbps networks, or they can be designed to accommodate all three data transmission rates. In the star-ring hybrid topology, the MAU completes the ring internally with Ring In and Ring Out ports at either end of the unit. In addition, MAUs typically provide eight ports for workstation connections. You can easily expand a Token Ring network by connecting multiple MAUs through their Ring In and Ring Out ports, as shown in Figure 6-14. Unused ports on a MAU, including Ring In and Ring Out ports, have self-shorting data connectors that internally close the loop.
FIGURE 6-14 Interconnected Token Ring MAUs

The self-shorting feature of Token Ring MAU ports makes Token Ring highly fault-tolerant. For example, if you discover a problematic NIC on the network, you can remove that workstation’s cable from the MAU, and the MAU’s port will close the ring internally. Similarly, if you discover a faulty MAU, you can remove it from the ring by disconnecting its Ring In and Ring Out cables from its adjacent MAUs and connecting the two good MAUs to each other to close the loop.
A Token Ring network may use one of three types of connectors on its cables: RJ-45, DB-9, or type 1 IBM. Modern Token Ring networks with UTP cabling use RJ-45 connectors, which are identical to the RJ-45 connector used on 10BASE-T or 100BASE-T Ethernet networks. Token Ring networks with STP cabling may use a type 1 IBM connector, which is depicted in Figure 6-15. Type 1 IBM connectors contain interlocking tabs that snap into an identical connector when one of the connectors is flipped upside-down, making for a secure connection. A DB-9 connector (containing nine pins) is another type of connector found on STP Token Ring networks. This connector is also pictured in Figure 6-15.
FIGURE 6-15 Type 1 IBM and DB-9 Token Ring connectors
FDDI (Fiber Distributed Data Interface)
NET+ 1.2, 2.14
FDDI (Fiber Distributed Data Interface) is a network technology whose standard was originally specified by ANSI in the mid-1980s and later refined by ISO. FDDI (pronounced “fiddy”) uses a double ring of multimode or single-mode fiber to transmit data at speeds of 100 Mbps. FDDI was developed in response to the throughput limitations of Ethernet and Token Ring technologies used at the time. In fact, FDDI was the first network technology to reach the 100-Mbps threshold. For this reason, you will frequently find it supporting network backbones that were installed in the late 1980s and early 1990s. FDDI is used on WANs and MANs. For example, FDDI can connect LANs located in multiple buildings, such as those on college campuses. FDDI links can span distances as large as 62 miles. Because Ethernet and Token Ring technologies have developed faster transmission speeds, FDDI is no longer the much-coveted technology that it was in the 1980s.

Nevertheless, FDDI is a stable technology that offers numerous benefits. Its reliance on fiber-optic cable ensures that FDDI is more reliable and more secure than transmission methods that depend on copper wiring. Another advantage of FDDI is that it works well with Ethernet 100BASE-TX technology.
One drawback to FDDI technology is its high cost relative to Fast Ethernet (costing up to 10 times more per switch port than Fast Ethernet). If an organization has FDDI installed, however, it can use the same cabling to upgrade to Fast Ethernet or Gigabit Ethernet, with only minor differences to consider, such as Ethernet’s lower maximum segment length.
FDDI is based on ring topologies similar to a Token Ring network, as shown in Figure 6-16. It also relies on the same token-passing routine that Token Ring networks use. However, unlike Token Ring technology, FDDI runs on two complete rings. During normal operation, the primary FDDI ring carries data, while the secondary ring is idle. The secondary ring will assume data transmission responsibilities should the primary ring experience Physical layer problems. This redundancy makes FDDI networks extremely reliable.
FIGURE 6-16 A FDDI network
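The failover behavior described above amounts to a simple selection rule. The sketch below is an invented illustration (the class and attribute names are not drawn from the FDDI specification); it shows traffic being carried on the primary ring until a Physical layer fault is detected, after which the secondary ring takes over.

    class FddiNode:
        """Illustrative only: selects which FDDI ring carries data."""
        def __init__(self):
            self.primary_ok = True          # set False when a Physical layer fault occurs

        def active_ring(self):
            return "primary" if self.primary_ok else "secondary"

    node = FddiNode()
    print(node.active_ring())    # "primary" during normal operation
    node.primary_ok = False      # simulate a break in the primary ring
    print(node.active_ring())    # secondary ring assumes data transmission duties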
ATM (Asynchronous Transfer Mode)
ATM (Asynchronous Transfer Mode) is an ITU networking standard describing Data Link layer protocols for both network access and signal multiplexing. It was first conceived by researchers at Bell Labs in 1983 as a higher-bandwidth alternative to FDDI, but it took a dozen years before standards organizations could reach an agreement on its specifications. ATM may run over fiber-optic or CAT 5 or higher UTP or STP cable. It is typically used on WANs, particularly by large public telecommunication carriers.

Like Token Ring and Ethernet, ATM specifies Data Link layer framing techniques. But what sets ATM apart from Token Ring and Ethernet is its fixed packet size. In ATM, a packet is called a cell and always consists of 48 bytes of data plus a 5-byte header. This fixed packet size allows ATM to provide predictable network performance. However, recall that a smaller packet size requires more overhead. In fact, ATM’s smaller packet size does decrease its potential throughput, but the efficiency of using cells compensates for that loss.
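A quick calculation illustrates the overhead that fixed 53-byte cells impose. The snippet below is plain arithmetic, not an ATM implementation; the function name is invented for this example.

    import math

    CELL_DATA = 48     # payload bytes per ATM cell
    CELL_HEADER = 5    # header bytes per ATM cell

    def cells_for(payload_bytes):
        """Return (cell count, total bytes on the wire, header overhead as a percentage)."""
        cells = math.ceil(payload_bytes / CELL_DATA)
        total = cells * (CELL_DATA + CELL_HEADER)
        overhead = cells * CELL_HEADER / total
        return cells, total, round(overhead * 100, 1)

    print(cells_for(1500))   # (32, 1696, 9.4) -> 32 cells, roughly 9.4% of bytes are headers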
Another unique aspect of ATM technology is that it relies on virtual circuits. Virtual circuits are connections between network nodes that, although based on potentially disparate physical links, logically appear to be direct, dedicated links between those nodes. On an ATM network, switches determine the optimal path between the sender and receiver, then establish this path before the network transmits data. One advantage to virtual circuits is their configurable (and therefore, potentially more efficient) use of limited bandwidth. Several virtual circuits can be assigned to one length of cable or even to one channel on that cable. A virtual circuit uses the channel only when it needs to transmit data. Meanwhile, the channel is available for use by other virtual circuits.
Because ATM packages data into cells before transmission, each of which travels separately to its destination, ATM is typically considered a packet-switching technology. At the same time, the use of virtual circuits means that ATM provides the main advantage of circuit switching—that is, a point-to-point connection that remains reliably available to the transmission until it completes, making ATM a connection-oriented technology. Establishing a reliable connection allows ATM to guarantee a specific QoS (quality of service) for certain transmissions. QoS is a guarantee that data will be delivered within a certain period of time after it is sent. ATM networks can supply four QoS levels, from a “best effort” attempt for noncritical data to a guaranteed, real-time transmission for time-sensitive data. This is important for organizations using networks for time-sensitive applications, such as video and audio transmissions. For example, a company that wants to use its physical connection between two offices located at opposite sides of a state to carry its voice phone calls might choose the ATM network technology with the highest possible QoS to carry that data. On the other hand, it may assign a low QoS to routine e-mail messages exchanged between the two offices. Without QoS guarantees, cells belonging to the same message may arrive in the wrong order or too slowly to be properly interpreted by the receiving node.
ATM’s developers have made certain it is compatible with other leading network technologies. Its cells can support multiple types of higher-layer protocols, including TCP/IP, AppleTalk, and IPX/SPX. In addition, ATM networks can be integrated with Ethernet or Token Ring networks through the use of LANE (LAN Emulation). LANE encapsulates incoming Ethernet or Token Ring frames, then converts them into ATM cells for transmission over an ATM network.
Currently, ATM is expensive and, because of its cost, it is rarely used on small LANs and almost never used to connect typical workstations to a network. Gigabit Ethernet, a faster, cheaper technology, poses a substantial threat to ATM. In addition to its lower cost, Gigabit Ethernet is a more natural upgrade for the multitude of Fast Ethernet users. It overcomes the QoS issue by simply providing a larger pipe for the greater volume of traffic using the network. Although ATM caught on among the very largest carriers in the late 1990s, most networking professionals have followed the Gigabit Ethernet standard rather than spending extra dollars on ATM infrastructure.
Wireless Networks
NET+ 1.7
Similar to the development of wire-bound network access technologies, the development of wireless access methods did not follow one direct and cooperative path, but grew from the efforts of multiple vendors and organizations. Now, a handful of different wireless technologies are available. Each wireless technology is defined by a standard that describes unique functions at both the Physical and the Data Link layers of the OSI Model. These standards differ in their specified signaling methods, geographic ranges, and frequency usages, among other things. Such differences make certain technologies better suited to home networks and others better suited to networks at large organizations. The most popular wireless standards used on contemporary LANs are those developed by IEEE’s 802.11 committee.
802.11
The IEEE released its first wireless network standard in 1997. Since then, its WLAN (Wireless Local Area Networks) standards committee, also known as the 802.11 committee, has published several distinct standards related to wireless networking. Each IEEE wireless network access standard is named after the 802.11 task group (or subcommittee) that developed it. The three IEEE 802.11 task groups that have generated notable wireless standards are: 802.11b, 802.11a, and 802.11g. These three 802.11 standards share many characteristics. For example, although some of their Physical layer services vary, all three use half-duplex signaling. In other words, a wireless station using one of the 802.11 techniques can either transmit or receive, but cannot do both simultaneously (assuming the station has only one transceiver installed, as is usually the case). In addition, all 802.11 networks follow the same MAC (Media Access Control) sublayer specifications, as described in the following sections.
Access Method
You have learned that the MAC sublayer of the Data Link layer is responsible for appending physical addresses to a data frame and for governing multiple nodes’ access to a single medium. As with 802.3 (Ethernet), the 802.11 MAC services append 48-bit (or 6-byte) physical addresses to a frame to identify its source and destination. The use of the same physical addressing scheme allows 802.11 networks to be easily combined with other IEEE 802 networks, including Ethernet networks. However, because wireless devices are not designed to transmit and receive simultaneously (and therefore cannot quickly detect collisions), 802.11 networks use a different access method than Ethernet networks.

802.11 standards specify the use of CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance) to access a shared medium. Using CSMA/CA, before a station begins to send data on an 802.11 network, it checks for existing wireless transmissions. If the source node detects no transmission activity on the network, it waits a brief, random amount of time, and then sends its transmission. If the source does detect activity, it waits a brief period of time before checking the channel again. The destination node receives the transmission and, after verifying its accuracy, issues an acknowledgment (ACK) packet to the source. If the source receives this acknowledgment, it assumes the transmission was properly completed. However, interference or other transmissions on the network could impede this exchange. If, after transmitting a message, the source node fails to receive acknowledgment from the destination node, it assumes its transmission did not arrive properly, and it begins the CSMA/CA process anew. Compared to CSMA/CD, CSMA/CA minimizes, but does not eliminate, the potential for collisions.
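The check-wait-send-acknowledge cycle just described can be summarized in a few lines. The following Python sketch is a simplified illustration of CSMA/CA rather than the actual 802.11 MAC; the helper callables channel_is_busy, transmit, and wait_for_ack are assumed placeholders supplied by the caller, and the retry limit is arbitrary.

    import random, time

    def csma_ca_send(frame, channel_is_busy, transmit, wait_for_ack, max_attempts=7):
        """Simplified CSMA/CA: listen, back off briefly, send, then wait for an ACK."""
        for _ in range(max_attempts):                 # retry limit is arbitrary here
            while channel_is_busy():                  # existing transmission detected
                time.sleep(0.001)                     # wait briefly, then check again
            time.sleep(random.uniform(0, 0.001))      # brief random wait before sending
            transmit(frame)
            if wait_for_ack():                        # destination verified the frame
                return True
            # No ACK received: assume the frame was lost and start the process anew.
        return False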
The use of ACK packets to verify every transmission means that 802.11 networks require more overhead than 802.3 networks. Therefore, a wireless network with a theoretical maximum throughput of 10 Mbps will in fact transmit much less data per second than a wire-bound Ethernet network with the same theoretical maximum throughput. In reality, wireless networks tend to achieve between one-third and one-half of their theoretical maximum throughput. For example, the fastest type of 802.11 network, 802.11g, is rated for a maximum of 54 Mbps; most 802.11g networks achieve between 20 and 25 Mbps.
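As a rough rule of thumb based on these figures, real-world 802.11 throughput can be estimated from the theoretical maximum. The one-function sketch below is illustrative only; the function name is invented, and the one-third to one-half range comes from the preceding paragraph.

    def estimated_throughput_mbps(theoretical_mbps, low=1/3, high=1/2):
        """Rule-of-thumb range for real-world 802.11 throughput (illustrative only)."""
        return theoretical_mbps * low, theoretical_mbps * high

    print(estimated_throughput_mbps(54))   # roughly (18.0, 27.0) Mbps, bracketing the
                                           # 20-25 Mbps typically seen on 802.11g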
One way to ensure that packets are not inhibited by other transmissions is to reserve the medium for one station’s use. In 802.11 this can be accomplished through the optional RTS/CTS (Request to Send/Clear to Send) protocol. RTS/CTS enables a source node to issue an RTS signal to an access point requesting the exclusive opportunity to transmit. If the access point agrees by responding with a CTS signal, the access point temporarily suspends communication with all stations in its range and waits for the source node to complete its transmission. RTS/CTS is not routinely used by wireless stations, but for transmissions involving large packets (those more subject to damage by interference), it can prove more efficient. On the other hand, using RTS/CTS further decreases the overall efficiency of the 802.11 network.
Association
Suppose you have just purchased a new laptop with a wireless NIC and support for one of the 802.11 wireless standards. When you bring your laptop to a local Internet café and turn it on, your laptop soon prompts you to log on to the café’s wireless network to gain access to the Internet. This seemingly simple process, known as association, involves a number of packet exchanges between the café’s access point and your computer. Association is another function of the MAC sublayer described in the 802.11 standard.
As long as a station is on and has its wireless protocols running, it periodically surveys its surroundings for evidence of an access point, a task known as scanning. A station can use either active scanning or passive scanning. In active scanning, the station transmits a special frame,