Addressing
This  section describes physical and network layer addressing and how routers  use these addresses. The section concludes with a brief introduction to  IP addressing.
 Physical Addresses
 MAC  addresses were discussed earlier; recall that these are at the data  link layer and are considered physical addresses. When a network  interface card is manufactured, it is assigned an address—called a burned-in address  (BIA)—that doesn’t change when the network card is installed in a  device and is moved from one network to another. Typically, this BIA is  copied to interface memory and is used as the interface’s MAC address.  MAC addresses are analogous to Social Insurance numbers or Social  Security numbers—one is assigned to each person, and the numbers  don’t change when that person moves to a new house. These numbers are  associated with the physical person, not where the person lives.
Knowing  the MAC address assigned to a PC or to a router’s interface doesn’t  tell you anything about where it is or what network it is attached to—it  can’t help a router determine the best way to send data to it. For that  you need logical network layer addresses; they are assigned when a  device is installed on a network and should be changed when the device  is moved.
 Logical Addresses
 When  you send a letter to someone, you have to know that person’s postal  address. Because every postal address in the world is unique, you can  potentially send a letter to anyone in the world. Postal addresses are  logical and hierarchical—for example, they include the country,  province/state, street, and building/house number. The top portion of Figure 1-14  illustrates Main Street with various houses. All these houses have one  portion of their address in common—Main Street—and one portion that is  unique—their house number.
The lower portion of Figure 1-14  illustrates a network, 17, with various PCs on it. All these PCs have  one portion of their address in common—17—and one part that is  unique—their device number. Devices on the same logical network must  share the same network portion of their address and have different  device portions.
 Routing and Network Layer Addresses
 A  router typically looks at only the network portion of a destination  address. It compares the network portion to its routing table, and if it  finds a match, it sends the packet out the appropriate interface,  toward its destination.
A  router needs to concern itself only with the device portion of a  destination address if it is directly connected to the same network as  the destination. In this case, the router must send the packet directly  to the appropriate device, and it needs to use the entire destination  address for this. A router  on a LAN uses ARP to determine the MAC address of the device with that  IP address and then creates an appropriate frame with that MAC address  as the destination MAC address.
 IP Addresses
 IP addresses are network layer addresses. As you saw earlier, IP addresses are 32-bit numbers. As shown in Figure 1-15, the 32 bits are usually written in dotted-decimal notation—they  are grouped into 4 octets (8 bits each), separated by dots, and each  octet is represented in decimal format. Each bit in the octet has a  binary weight (the highest is 128 and the next is 64, followed by 32,  16, 8, 4, 2, and 1). Thus, the minimum value for an octet is 0, and the  maximum decimal value for an octet is 255.
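The conversion between dotted-decimal notation and binary weights can be sketched in Python. This is a minimal illustration; the function names are invented for this example, not taken from any library:

```python
def octets_to_binary(address: str) -> str:
    """Show each octet of a dotted-decimal IPv4 address as 8 bits."""
    return ".".join(f"{int(octet):08b}" for octet in address.split("."))

def octet_value(bits: str) -> int:
    """Sum the binary weights (128, 64, 32, 16, 8, 4, 2, 1) of an octet."""
    weights = (128, 64, 32, 16, 8, 4, 2, 1)
    return sum(w for w, bit in zip(weights, bits) if bit == "1")

print(octets_to_binary("192.168.5.1"))  # 11000000.10101000.00000101.00000001
print(octet_value("11111111"))          # 255 (the maximum octet value)
print(octet_value("00000000"))          # 0   (the minimum octet value)
```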
IP Address Classes
IPv4  addresses are categorized into five classes: A, B, C, D, and E. Only  Class A, B, and C addresses are used for addressing devices; Class D is  used for multicast groups, and Class E is reserved for experimental use.
The first octet of an IPv4 address defines which class it is in, as illustrated in Table 1-1 for Class A, B, and C addresses. The address class determines which part of the address represents the network  bits (N) and which part represents the host bits (H), as shown in this  table. The number of networks available in each class and the number of  hosts per network are also shown.
| Class | Format | High-Order Bits | First Octet Range | Number of Networks | Number of Hosts per Network |
|---|---|---|---|---|---|
| A | N.H.H.H | 0 | 1–126 | 126 | 16,777,214 |
| B | N.N.H.H | 10 | 128–191 | 16,384 | 65,534 |
| C | N.N.N.H | 110 | 192–223 | 2,097,152 | 254 |
For  example, 192.168.5.1 is a Class C address. Therefore, it is in the  format N.N.N.H—the network part is 192.168.5 and the host part is 1.
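The classful rules above can be expressed as a short sketch: classify an address by the high-order bits of its first octet, then split it into its network and host parts. This is illustrative only; 0 and 127 in the first octet are reserved (127 is the loopback range) but fall within the Class A bit pattern here:

```python
def address_class(address: str) -> str:
    """Classify an IPv4 address by the high-order bits of its first octet."""
    first = int(address.split(".")[0])
    if first < 128:   # 0xxxxxxx -> Class A (0 and 127 are reserved values)
        return "A"
    if first < 192:   # 10xxxxxx -> Class B
        return "B"
    if first < 224:   # 110xxxxx -> Class C
        return "C"
    if first < 240:   # 1110xxxx -> Class D (multicast)
        return "D"
    return "E"        # 1111xxxx -> Class E (experimental)

def split_classful(address: str):
    """Split a Class A, B, or C address into its network and host parts."""
    network_octets = {"A": 1, "B": 2, "C": 3}[address_class(address)]
    octets = address.split(".")
    return ".".join(octets[:network_octets]), ".".join(octets[network_octets:])

print(address_class("192.168.5.1"))   # C
print(split_classful("192.168.5.1"))  # ('192.168.5', '1')
```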
Private and Public IP Addresses
The  IPv4 address space is divided into public and private sections. Private  addresses are reserved addresses to be used only internally within a  company’s network, not on the Internet. When you want  to send anything on the Internet, private addresses must be mapped to a  company’s external registered address. Public IPv4 addresses are  provided for external communication.
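The private ranges reserved by RFC 1918 are 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16. A quick check against those blocks can be sketched with Python's standard `ipaddress` module (which also offers an `is_private` attribute covering these and other reserved ranges):

```python
import ipaddress

# The three RFC 1918 private IPv4 address blocks.
PRIVATE_BLOCKS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(address: str) -> bool:
    """Return True if the address falls in one of the RFC 1918 blocks."""
    addr = ipaddress.ip_address(address)
    return any(addr in block for block in PRIVATE_BLOCKS)

print(is_rfc1918("192.168.5.1"))  # True  (private; used throughout this book)
print(is_rfc1918("8.8.8.8"))      # False (public, routable on the Internet)
```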
Note that all the IP addresses used in this book are private addresses, to avoid publishing anyone’s registered address.
Subnets
As illustrated in Table 1-1,  Class A addresses have little use in a normal organization—most  companies would not want one network with more than 16 million PCs on  it! This would not be physically possible or desirable. Because of this  limitation on addresses when only their class is considered (called classful addressing) and the finite number of such addresses, subnets were introduced by RFC 950, Internet Standard Subnetting Procedure.
Class A, B, and C addresses can be divided into smaller networks, called subnetworks or subnets, resulting in a larger number of possible networks, each with fewer host addresses available than the original network.
The  addresses used for the subnets are created by borrowing bits from the  host field and using them as subnet bits; a subnet mask indicates which  bits have been borrowed. A subnet mask is a  32-bit value associated with an IP address to specify which bits in the  address represent network and subnet bits and which represent host bits.  Using subnet masks creates a three-level hierarchy: network, subnet,  and host.
The default subnet masks for Class A, B, and C addresses are shown in Table 1-2.
| Class | Default Subnet Mask (Binary) | Default Subnet Mask (Decimal) |
|---|---|---|
| A | 11111111.00000000.00000000.00000000 | 255.0.0.0 |
| B | 11111111.11111111.00000000.00000000 | 255.255.0.0 |
| C | 11111111.11111111.11111111.00000000 | 255.255.255.0 |
When all of an address’s host bits are 0, the address is for the subnet itself (sometimes called the wire).  When all of an address’s host bits are 1, the address is the directed  broadcast address for that subnet (in other words, for all the devices  on that subnet). 
 For  example, 10.0.0.0 is a Class A address with a default subnet mask of  255.0.0.0, indicating 8 network bits and 24 host bits. If you want to  use 8 of the host bits as subnet bits instead, you would use a subnet  mask of 11111111.11111111.00000000.00000000, which is 255.255.0.0 in  decimal format. You could then use the 8 subnet bits to address 256  subnets. Each of these subnets could support up to 65,534 hosts. The  address of one of the subnets is 10.1.0.0; the broadcast address on this  subnet is 10.1.255.255.
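This worked example can be verified with Python's standard `ipaddress` module, which accepts a network written with an explicit mask:

```python
import ipaddress

# The 10.1.0.0 subnet of network 10.0.0.0, using mask 255.255.0.0
# (8 network bits + 8 borrowed subnet bits, leaving 16 host bits).
subnet = ipaddress.ip_network("10.1.0.0/255.255.0.0")

print(subnet.network_address)    # 10.1.0.0     (all host bits 0: the subnet itself)
print(subnet.broadcast_address)  # 10.1.255.255 (all host bits 1: directed broadcast)
print(subnet.num_addresses - 2)  # 65534 usable host addresses
```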
Another way of indicating the subnet mask is to use a prefix. A prefix  is a slash (/) followed by a numeral that is the number of bits in the  network and subnet portion of the address—in other words, the number of  contiguous 1s that would be in the subnet mask. For example, the subnet  mask of 255.255.240.0 is 11111111.11111111.11110000.00000000 in binary  format, which is 20 1s followed by 12 0s. Therefore, the prefix would be  /20 for the 20 bits of network and subnet information, the number of 1s  in the mask.
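The relationship between a prefix and a dotted-decimal mask can be sketched in a few lines (illustrative helper functions; `mask_to_prefix` assumes the mask's 1 bits are contiguous, as valid masks require):

```python
def prefix_to_mask(prefix: int) -> str:
    """Convert a prefix length (e.g. 20) to a dotted-decimal subnet mask."""
    bits = ("1" * prefix).ljust(32, "0")   # prefix 1s followed by 0s
    return ".".join(str(int(bits[i:i + 8], 2)) for i in range(0, 32, 8))

def mask_to_prefix(mask: str) -> int:
    """Count the 1 bits in a dotted-decimal mask to get the prefix length."""
    return sum(bin(int(octet)).count("1") for octet in mask.split("."))

print(prefix_to_mask(20))               # 255.255.240.0
print(mask_to_prefix("255.255.240.0"))  # 20
```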
IP addressing is further explored in Appendix B; IP address planning is discussed in Chapter 6.
Switching Types
Switches  were initially introduced to provide higher-performance connectivity  than hubs because switches define multiple collision domains. Switches  have always been able to process data at a faster rate than routers  because the switching functionality is implemented in hardware—in  Application-Specific Integrated Circuits (ASIC)—rather than in software,  which is how routing has traditionally been implemented. However,  switching was initially restricted to the examination of Layer 2 frames.  With the advent of more powerful ASICs, switches can now process Layer 3  packets, and even the contents of those packets, at high speeds.
The  following sections first examine the operation of traditional Layer 2  switching. Layer 3 switching—which is really routing in hardware—is then  explored.
 Layer 2 Switching
  The  heart of a Layer 2 switch is its MAC address table, also known as its  content-addressable memory. This table contains a list of the MAC  addresses that are reachable through each switch port. Recall that a  physical MAC address uniquely identifies a device on a network. When a  switch is first powered up, its MAC address table is empty, as shown in Figure 1-16.
 In  this sample network, consider what happens when device A sends a frame  destined for device D. The switch receives the frame on port 1 (from  device A). Recall that a frame includes the MAC address of the source  device and the MAC address of the destination device. Because the switch  does not yet know where device D is, the switch must flood  the frame out of all the other ports; therefore, the switch sends the  frame out of ports 2, 3, and 4. This means that devices B, C, and D all  receive the frame. Only device D, however, recognizes its MAC address as  the destination address in the frame; it is the only device on which  the CPU is interrupted to further process the frame. 
In  the meantime, the switch now knows that device A can be reached on port  1 because the switch received a frame from device A on port 1; the  switch therefore puts the MAC address of device A in its MAC address table for port 1. This process is called learning—the switch is learning all the MAC addresses it can reach. 
At  some point, device D is likely to reply to device A. At that time, the  switch receives a frame from device D on port 4; the switch records this  information in its MAC address table as part of its learning process.  This time, the switch knows where the destination, device A, is; the  switch therefore forwards the frame only out of port 1. This process is  called filtering—the switch sends the frames out  of only the port through which they need to go, when the switch knows  which port that is, rather than flooding them out of every port. This  reduces the traffic on the other ports and reduces the interruptions  that the other devices experience. Over time, the switch learns where  all the devices are, and the MAC address table is fully populated, as  shown in Figure 1-17.
 The  filtering process also means that multiple simultaneous conversations  can occur between different devices. For example, if device A and device  B want to communicate, the switch sends their data between ports 1 and  2; no traffic goes on ports 3 or 4. At the same time, devices C and D  can communicate on ports 3 and 4 without interfering with the traffic on  ports 1 and 2. Consequently, the network’s overall throughput has  increased dramatically.
The  MAC address table is kept in the switch’s memory and has a finite size  (depending on the specific switch used). If many devices are attached to  the switch, the switch might not have room for an entry for every one,  so the table entries time out after a period of not being used. As a  result, the most active devices are always in the table.
MAC  addresses can also be statically configured in the MAC address table,  and you can specify a maximum number of addresses allowed per port. One  advantage of static addresses is that less flooding occurs, both when  the switch first comes up and because of not aging out the addresses.  However, this also means that if a device is moved, the switch  configuration must be changed. A related feature available in some  switches is the capability to sticky-learn  addresses—the address is dynamically learned, as described earlier, but  is then automatically entered as a static command in the switch  configuration. Limiting the number of addresses per port to one and  statically configuring those addresses can ensure that only specific  devices are permitted access to the network; this feature is  particularly useful when addresses are sticky-learned.
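The learning, flooding, and filtering behavior described above can be sketched as a table keyed by MAC address. The port numbers and single-letter MAC addresses below are simplified stand-ins, mirroring the devices in Figures 1-16 and 1-17:

```python
class Layer2Switch:
    """A minimal sketch of Layer 2 learning, flooding, and filtering."""

    def __init__(self, ports):
        self.ports = ports
        self.mac_table = {}   # learned MAC address -> port

    def receive(self, in_port, src_mac, dst_mac):
        """Return the list of ports the frame is sent out of."""
        self.mac_table[src_mac] = in_port                # learning
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]             # filtering: one port
        return [p for p in self.ports if p != in_port]   # flooding: all others

switch = Layer2Switch(ports=[1, 2, 3, 4])
print(switch.receive(1, "A", "D"))  # [2, 3, 4]  D is unknown, so flood
print(switch.receive(4, "D", "A"))  # [1]        A was learned, so filter
print(switch.mac_table)             # {'A': 1, 'D': 4}
```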
 Layer 3 Switching
  The functions performed by routers (as described in the earlier “Routing”  section) can be CPU-intensive. Offloading the switching of the packet  to hardware can result in a significant increase in performance.
A  Layer 3 switch performs all the same functions as a router; the  differences are in the physical implementation of the device rather than  in the functions it performs. Therefore, functionally, the terms router and Layer 3 switch are synonymous.
Layer  4 switching is an extension of Layer 3 switching that includes  examination of the contents of the Layer 3 packet. For example, the  protocol number in the IP packet header (as described in the “IP Datagrams”  section) indicates which transport layer protocol (for example, TCP or  UDP) is being used, and the port number in the TCP or UDP segment  indicates the application being used (as described in the “TCP/IP Transport Layer Protocols” section). Switching based on the protocol  and port numbers can ensure, for example, that certain types of traffic  get higher priority on the network or take a specific path.
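A Layer 4 classification decision of this kind can be sketched as follows. The IANA protocol numbers (6 for TCP, 17 for UDP) and well-known ports (22 for SSH, 53 for DNS) are real; the priority policy itself is invented for illustration:

```python
IP_PROTOCOLS = {6: "TCP", 17: "UDP"}   # IANA protocol numbers

def classify(protocol: int, dst_port: int) -> str:
    """Assign a hypothetical priority class from protocol and port numbers."""
    if IP_PROTOCOLS.get(protocol) == "TCP" and dst_port == 22:
        return "high"    # e.g., prioritize SSH management traffic
    if IP_PROTOCOLS.get(protocol) == "UDP" and dst_port == 53:
        return "high"    # e.g., prioritize DNS lookups
    return "normal"

print(classify(6, 22))   # high
print(classify(17, 53))  # high
print(classify(6, 80))   # normal
```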
Within  Cisco switches, Layer 3 switching can be implemented in two different  ways—through multilayer switching or through Cisco Express Forwarding,  as described in Chapter 4.
Spanning Tree Protocol
The  following sections examine why such a protocol is needed in Layer 2  networks. STP terminology and operation are then introduced.
 Redundancy in Layer 2 Switched Networks
 Redundancy in a network, such as that shown in Figure 1-18,  is desirable so that communication can still take place if a link or  device fails. For example, if switch X in this figure stopped  functioning, devices A and B could still communicate through switch Y.  However, in a switched network, redundancy can cause problems.
The  first type of problem occurs if a broadcast frame is sent on the  network. For example, consider what happens when device A in Figure 1-18 sends an ARP request to find the MAC address of device  B. The ARP request is sent as a broadcast. Both switch X and switch Y  receive the broadcast; for now, consider just the one received by switch  X, on its port 1. Switch X floods the broadcast to all its other  connected ports; in this case, it floods it to port 2. Device B can see  the broadcast, but so can switch Y, on its port 2; switch Y floods the  broadcast to its port 1. This broadcast is received by switch X on its  port 1; switch X floods it to its port 2, and so forth. The broadcast  continues to loop around the network, consuming bandwidth and processing  power. This situation is called a broadcast storm. 
The  second problem that can occur in redundant topologies is that devices  can receive multiple copies of the same frame. For example, assume that  neither of the switches in Figure 1-18  has learned where device B is located. When device A sends data  destined for device B, switch X and switch Y both flood the data to the  lower LAN, and device B receives two copies of the same frame. This  might be a problem for device B, depending on what it is and how it is  programmed to handle such a situation.
The  third difficulty that can occur in a redundant situation is within the  switch itself—the MAC address table can change rapidly and contain wrong  information. Again referring to Figure 1-18,  consider what happens when neither switch has learned where device A or  B is located, and device A sends data to device B. Each switch learns  that device A is on its port 1, and each records this in its MAC address  table. Because the switches don’t yet know where device B is, they  flood the frame—in this case, on their port 2. Each switch then receives  the frame from the other switch on its port 2. This frame has device  A’s MAC address in the source address field; therefore, both switches  now learn that device A is on their port 2. As a result, the MAC address  table is overwritten. Not only does the MAC address table have  incorrect information (device A is actually connected to port 1, not  port 2, of both switches), but because the table changes rapidly, it  might be considered unstable.
To  overcome these problems, you must have a way to logically disable part  of the redundant network for regular traffic while maintaining  redundancy for the case when an error occurs. STP does just that.
 STP Terminology and Operation
 The following sections introduce the IEEE 802.1d STP terminology and operation.
STP Terminology
STP terminology can best be explained by examining how a sample network, such as the one shown in Figure 1-19, operates.
Within an STP network, one switch is elected as the root bridge—it  is at the root of the spanning tree. All other switches calculate their  best path to the root bridge. Their alternative paths are put in the  blocking state. These alternative paths are logically disabled from the  perspective of regular traffic, but the switches still communicate with  each other on these paths so that the alternative paths can be unblocked  in case an error occurs on the best path.
All  switches running STP (it is turned on by default in Cisco switches)  send out Bridge Protocol Data Units (BPDU). Switches running STP use  BPDUs to exchange information with neighboring switches. One of the  fields in the BPDU is the bridge identifier (ID); it comprises a 2-octet  bridge priority and a 6-octet MAC address. STP uses the bridge ID to  elect the root bridge—the switch with the lowest bridge ID is the root  bridge. If all bridge priorities are left at their default values, the  switch with the lowest MAC address therefore becomes the root bridge. In  Figure 1-19, switch Y is elected as the root bridge.
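The election can be sketched as picking the lowest (priority, MAC) pair. The bridge IDs below are hypothetical; 32768 is the default 802.1d bridge priority, and comparing the MAC addresses as equally formatted lowercase strings matches their numeric order:

```python
# Hypothetical bridge IDs: (priority, MAC address) per switch.
bridges = {
    "X": (32768, "00:00:0c:11:11:11"),
    "Y": (32768, "00:00:0c:00:00:01"),
}

# Tuples compare element by element: lowest priority wins, and ties are
# broken by the lowest MAC address -- so switch Y becomes the root bridge.
root = min(bridges, key=lambda name: bridges[name])
print(root)  # Y
```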
All the ports on the root bridge are called designated ports, and they are all in the forwarding state—that is, they can send and receive data. The STP states are described in the next section.
On all nonroot bridges, one port becomes the root port, and it is also in the forwarding state. The root port  is the one with the lowest cost to the root. The cost of each link is  by default inversely proportional to the link’s bandwidth, so the port  with the fastest total path from the switch to the root bridge is  selected as the root port on that switch. In Figure 1-19, port 1 on switch X is the root port for that switch because it is the fastest way to the root bridge.
Each  LAN segment must have one designated port. It is on the switch that has  the lowest cost to the root bridge (or, if the costs are equal, the  port on the switch with the lowest bridge ID is chosen), and it is in  the forwarding state. In Figure 1-19, the root bridge has designated ports on both segments, so no more are required.
All ports on a LAN segment that are not root ports or designated ports are called nondesignated ports and transition to the blocking state—they do not send data, so the redundant topology is logically disabled. In Figure 1-19, port 2 on switch X is the nondesignated port, and it is in the blocking state. Blocking ports do, however, listen for BPDUs.
If  a failure happens—for example, if a designated port or a root bridge  fails—the switches send topology change BPDUs and recalculate the  spanning tree. The new spanning tree does not include the failed port or  switch, and the ports that were previously blocking might now be in the  forwarding state. This is how STP supports the redundancy in a switched  network.
STP States
 Figure 1-20 illustrates the various STP port states.
 When  a port initially comes up, it is put in the blocking state, in which it  listens for BPDUs and then transitions to the listening state. A  blocking port in an operational network can also transition to the  listening state if it does not hear any BPDUs for the max-age time  (a default of 20 seconds). While in the listening state, the switch can  send and receive BPDUs but not data. The root bridge and the various  final states of all the ports are determined in this state.
If  the port is chosen as the root port on a switch, or as a designated  port on a segment, that port transitions to the learning state after the  listening state. In the learning state, the port still cannot send  data, but it can start to populate its MAC address table if any data is  received. The length of time spent in each of the listening and learning  states is dictated by the value of the forward-delay  parameter, which is 15 seconds by default. After the learning state,  the port transitions to the forwarding state, in which it can operate  normally. Alternatively, if in the listening state the port is not  chosen as a root port or designated port, it becomes a nondesignated  port and transitions back to the blocking state. 
 Several  features and enhancements to STP are implemented on Cisco switches to  help to reduce the convergence time—the time it takes for all the  switches in a network to agree on the network’s topology after that  topology has changed.
Rapid STP
Rapid STP (RSTP) is defined by IEEE 802.1w. RSTP incorporates many of the Cisco enhancements to STP, resulting in faster convergence. Switches in an RSTP environment converge quickly by communicating with each other and determining which links can forward, rather than just waiting for the timers to transition the ports among the various states. RSTP ports take on different roles than STP ports; the RSTP roles are root, designated, alternate, backup, and disabled. RSTP port states are also different from STP port states; the RSTP states are discarding, learning, and forwarding. RSTP is compatible with STP. For example, ports in the 802.1w alternate and backup roles are held in the discarding state, which corresponds to the 802.1d blocking state.
Virtual LANs
As noted earlier, a broadcast domain includes all devices that receive each other's broadcasts (and multicasts). All the devices connected to one router port are in the same broadcast domain. Routers block broadcasts (destined for all devices) and multicasts by default; routers forward only unicast packets (destined for a specific device) and packets of a special type called directed broadcasts. Typically, you think of a broadcast domain as being a physical wire, a LAN. But a broadcast domain can also be a VLAN, a logical construct that can include multiple physical LAN segments.
 Figure 1-21  illustrates the VLAN concept. On the left side of the figure, three  individual physical LANs are shown, one each for Engineering,  Accounting, and Marketing. These LANs contain workstations—E1, E2, A1,  A2, M1, and M2—and servers—ES, AS, and MS. Instead of physical LANs, an  enterprise can use VLANs, as shown on the right side of the figure. With  VLANs, members of each department can be physically located anywhere,  yet still be logically connected with their own workgroup. Therefore, in  the VLAN configuration, all the devices attached to VLAN E  (Engineering) share the same broadcast domain, the devices attached to  VLAN A (Accounting) share a separate broadcast domain, and the devices  attached to VLAN M (Marketing) share a third broadcast domain. Figure 1-21  also illustrates how VLANs can span multiple switches; the link between  the two switches in the figure carries traffic from all three of the  VLANs and is called a trunk. 
 VLAN Membership
 Static  port membership means that the network administrator configures which  VLAN the port belongs to, regardless of the devices attached to it. This  means that after you have configured the ports, you must ensure that  the devices attaching to the switch are plugged into the correct port,  and if they move, you must reconfigure the switch.
Alternatively,  you can configure dynamic VLAN membership. Some static configuration is  still required, but this time, it is on a separate device called a VLAN Membership Policy Server (VMPS).  The VMPS could be a separate server, or it could be a higher-end switch  that contains the VMPS information. VMPS information consists of a MAC  address–to–VLAN map. As a result, ports are assigned to VLANs based on  the MAC address of the device connected to the port. When you move a  device from one port to another port (either on the same switch or on  another switch in the network), the switch dynamically assigns the new  port to the proper VLAN for that device by consulting the VMPS.
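The VMPS lookup amounts to a MAC address-to-VLAN map consulted whenever a device appears on a port. A minimal sketch, with hypothetical MAC addresses and VLAN names:

```python
# Hypothetical VMPS database: MAC address -> VLAN.
vmps_map = {
    "00:11:22:33:44:55": "Engineering",
    "00:11:22:33:44:66": "Accounting",
}

def assign_vlan(mac: str, fallback: str = "default") -> str:
    """Return the VLAN for a device, regardless of which port it uses."""
    return vmps_map.get(mac, fallback)

print(assign_vlan("00:11:22:33:44:55"))  # Engineering
print(assign_vlan("aa:bb:cc:dd:ee:ff"))  # default (unknown device)
```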
 Trunks
  As mentioned earlier, a port that carries data from multiple VLANs is called a trunk.  A trunk port can be on a switch, a router, or a server. A trunk port  can use one of two protocols: Inter-Switch Link (ISL) or IEEE 802.1Q.
ISL  is a Cisco-proprietary trunking protocol that involves encapsulating  the data frame between an ISL header and trailer. The header is 26 bytes  long; the trailer is a 4-byte cyclic redundancy check that is added  after the data frame. A 15-bit VLAN ID field is included in the header  to identify the VLAN that the traffic is for. (Only the lower 10 bits of  this field are used, thus supporting 1024 VLANs.)
The  802.1Q protocol is an IEEE standard protocol in which the trunking  information is encoded within a Tag field inserted inside the frame  header itself. Trunks using the 802.1Q protocol define a native VLAN.  Traffic for the native VLAN is not tagged; it is carried across the  trunk unchanged. Consequently, end-user stations that don’t understand  trunking can communicate with other devices directly over an 802.1Q  trunk as long as they are on the native VLAN. The native VLAN must be  defined to be the same VLAN on both sides of the trunk. Within the Tag  field, the 802.1Q VLAN ID field is 12 bits long, allowing up to 4096  VLANs to be defined. The Tag field also includes a 3-bit 802.1p user  priority field; these bits are used as class of service (CoS) bits for  QoS marking. (Chapter 4 describes QoS.)
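The layout of the 16-bit Tag Control Information portion of the 802.1Q tag (3 priority bits, 1 CFI/drop-eligible bit, 12-bit VLAN ID) can be sketched with bit shifts; the helper function name is illustrative:

```python
def build_tci(priority: int, cfi: int, vlan_id: int) -> int:
    """Pack 802.1p priority (3 bits), CFI (1 bit), and VLAN ID (12 bits)."""
    assert 0 <= priority <= 7 and cfi in (0, 1) and 0 <= vlan_id <= 4095
    return (priority << 13) | (cfi << 12) | vlan_id

tci = build_tci(priority=5, cfi=0, vlan_id=100)
print(f"{tci:016b}")  # 1010000001100100
print(tci >> 13)      # 5   (recover the CoS priority bits)
print(tci & 0xFFF)    # 100 (recover the VLAN ID)
```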
The two types of trunks are not compatible with each other, so both ends of a trunk must be defined with the same trunk type.
 STP and VLANs
 Cisco  developed per-VLAN spanning tree (PVST) so that switches can have one  instance of STP running per VLAN, allowing redundant physical links  within the network to be used for different VLANs and thus reducing the  load on individual links. PVST is illustrated in Figure 1-22.
The top diagram in Figure 1-22 shows the physical topology of the network, with switches X and Y redundantly connected. In the lower-left diagram, switch Y has been selected as the root bridge for VLAN A, leaving port 2 on switch X in the blocking state. In contrast, the lower-right diagram shows that switch X has been selected as the root bridge for VLAN B, leaving port 2 on switch Y in the blocking state. With this configuration, traffic is shared across all links: traffic for VLAN A travels to the lower LAN through switch Y's port 2, while traffic for VLAN B travels to the lower LAN through switch X's port 2.
PVST  works only over ISL trunks. However, Cisco extended this functionality  for 802.1Q trunks with the PVST+ protocol. Before this became available,  802.1Q trunks supported only Common Spanning Tree, with one instance of  STP running for all VLANs.
Multiple-Instance  STP (MISTP) is an IEEE standard (802.1s) that uses RSTP and allows  several VLANs to be grouped into a single spanning-tree instance. Each  instance is independent of the other instances so that a link can  forward for one group of VLANs while blocking for other VLANs. MISTP  therefore allows traffic to be shared across all the links in the  network, but it reduces the number of STP instances that would be  required if PVST/PVST+ were implemented.
Rapid per-VLAN Spanning Tree Plus (RPVST+) is a Cisco enhancement of RSTP, using PVST+.
 Inter-VLAN Routing
 A  Layer 3 device can be connected to a switched network in two ways: by  using multiple physical interfaces or through a single interface  configured as a trunk. These two connection methods are shown in Figure 1-23.  The diagram on the left illustrates a router with three physical  connections to the switch; each physical connection carries traffic from  only one VLAN.
The  diagram on the right illustrates a router with one physical connection  to the switch. The interfaces on the switch and the router have been  configured as trunks; therefore, multiple logical connections exist  between the two devices. When a router is connected to a switch through a  trunk, it is sometimes called a “router on a stick,” because it has  only one physical interface (a stick) to the switch.
Each  interface between the switch and the Layer 3 device, whether physical  interfaces or logical interfaces within a trunk, is in a separate VLAN  and therefore in a separate subnet for IP networks.