
Developing an Optimum Design for Layer 3

To achieve high availability and fast convergence in the Cisco enterprise campus network, the designer needs to manage multiple objectives, including the following:

  • Managing oversubscription and bandwidth

  • Supporting link load balancing

  • Routing protocol design

  • First-hop redundancy protocols (FHRPs)

This section reviews design models and recommended practices for high availability and fast convergence in Layer 3 of the Cisco enterprise campus network.


Managing Oversubscription and Bandwidth

Typical campus networks are designed with oversubscription, as illustrated in Figure 2-10. The rule-of-thumb recommendation for data oversubscription is 20:1 for access ports on the access-to-distribution uplink. The recommendation is 4:1 for the distribution-to-core links. With these oversubscription ratios, congestion may occur infrequently on the uplinks; QoS is needed for these occasions. If congestion occurs frequently, the design does not have sufficient uplink bandwidth.
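
As a rough illustration of these ratios, a 48-port Gigabit Ethernet access switch served by two Gigabit Ethernet uplinks offers roughly 48 Gb/s of potential access traffic over 2 Gb/s of uplink capacity, or about 24:1, which is close to the 20:1 guideline.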

Figure 2-10: Managing Oversubscription and Bandwidth

As access layer bandwidth capacity increases to 1 Gb/s, multiples of 1 Gb/s, and even 10 Gb/s, the bandwidth aggregation on the distribution-to-core uplinks can be supported by multiple Gigabit Ethernet EtherChannels, by 10 Gigabit Ethernet links, or by 10 Gigabit EtherChannels.

Bandwidth Management with EtherChannel

As bandwidth from the distribution layer to the core increases, oversubscription to the access layer must be managed, and some design decisions must be made.

Just adding more uplinks between the distribution and core layers leads to more peer relationships, with an increase in associated overhead.

EtherChannels can reduce the number of peers by creating a single logical interface. However, you must consider how routing protocols react when a single link in the bundle fails:

  • OSPF running on a Cisco IOS Software-based switch will notice a failed link and will increase the link cost. Traffic is rerouted, and a convergence event occurs.

  • OSPF running on a Cisco Hybrid-based switch will not change the link cost. Because it continues to use the EtherChannel, the remaining links in the bundle may become overloaded, because OSPF continues to divide traffic equally across channels with different bandwidths.

  • EIGRP might not change the link cost, because the protocol looks at the end-to-end cost. This design might also overload the remaining links.

The EtherChannel Min-Links feature is supported on LACP EtherChannels. This feature allows you to configure the minimum number of member ports that must be in the link-up state and bundled in the EtherChannel for the port channel interface to transition to the link-up state. You can use the EtherChannel Min-Links feature to prevent low-bandwidth LACP EtherChannels from becoming active.
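
The following sketch shows how these pieces fit together on a Catalyst IOS switch; the hostname, module/port numbers, and addressing are illustrative assumptions, not part of the reference design. A routed LACP EtherChannel is built from the distribution switch toward the core, and Min-Links is set so that the bundle (and the route across it) is taken down if fewer than three members remain up, forcing a deterministic reroute rather than overloading a diminished bundle:

    ! Illustrative example only - hostname, interface numbers, and addressing are assumptions
    Dist1(config)#interface Port-channel1
    Dist1(config-if)#no switchport
    Dist1(config-if)#ip address 10.122.0.1 255.255.255.252
    Dist1(config-if)#port-channel min-links 3
    Dist1(config-if)#exit
    Dist1(config)#interface range GigabitEthernet1/1 - 4
    Dist1(config-if-range)#no switchport
    Dist1(config-if-range)#channel-group 1 mode active

With this configuration, losing two of the four members takes the port channel down, and the routing protocol reroutes all traffic to the remaining equal-cost path instead of continuing to send a full share of traffic over the reduced bundle.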

Bandwidth Management with 10 Gigabit Interfaces

Upgrading the uplinks between the distribution and core layers to 10 Gigabit Ethernet links is an alternative design for managing bandwidth. The 10 Gigabit Ethernet links can also support the increased bandwidth requirements.

This design is recommended for the following reasons:

  • Unlike the multiple-link solution, 10 Gigabit Ethernet links do not increase routing complexity. The number of routing peers is not increased.

  • Unlike the EtherChannel solution, the routing protocols can deterministically select the best path between the distribution and core layers.


Link Load Balancing

In Figure 2-11, many equal-cost, redundant paths are provided in the recommended network topology from one access switch to the other across the distribution and core switches. From the perspective of the access layer, there are at least three sets of equal-cost, redundant links to cross to reach another building block, such as the data center.

Figure 2-11: CEF Load Balancing (Default Behavior)

Cisco Express Forwarding (CEF) load balancing is based on a deterministic hashing algorithm. As shown in the figure, when the packets traversing the network all present the same input values to the CEF hash, the same “go to the right” or “go to the left” decision is made at each redundant path. When this results in some redundant links being ignored or underutilized, the network is said to be experiencing CEF polarization.

To avoid CEF polarization, you can tune the input to the CEF hashing algorithm differently across the layers of the network. The default hash input is the Layer 3 source and destination addresses. If you change this input to Layer 3 plus Layer 4 information, the resulting hash output also changes.

As a recommendation, use alternating hashes in the core and distribution layer switches:

  • In the core layer, continue to use the default, which is based only on Layer 3 information.

  • In the distribution layer, use Layer 3 plus Layer 4 information as input to the CEF hashing algorithm with the command Dist2-6500(config)#mls ip cef load-sharing full.

This alternating approach helps eliminate the always-right or always-left biased decisions and helps balance the traffic over the equal-cost, redundant links in the network.
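
Applied to the topology in the figure, the tuning amounts to a single global command on each distribution switch (the hostname below is illustrative), while the core switches are simply left at the default Layer 3 hash:

    ! Distribution layer: add Layer 4 port information to the CEF hash input
    Dist2-6500(config)#mls ip cef load-sharing full
    ! Core layer: no command is required; the default Layer 3 source/destination hash is retained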

EtherChannel Load Balancing

EtherChannel allows load sharing of traffic among the links in the channel and redundancy in the event that one or more links in the channel fail.

You can tune the hashing algorithm used to select the specific EtherChannel link on which a packet is transmitted. You can use the default Layer 3 source and destination information, or you can add a further level of load balancing by including the Layer 4 port information as an input to the algorithm.

Figure 2-12 illustrates some results from experiments at Cisco in a test environment using a typical IP addressing scheme of one subnet per VLAN, two VLANs per access switch, and the RFC 1918 private address space. The default Layer 3 hash algorithm provided about one-third to two-thirds utilization. When the algorithm was changed to include Layer 4 information, nearly full utilization was achieved with the same topology and traffic pattern.

Figure 2-12: EtherChannel Load Balancing

The recommended practice is to use Layer 3 plus Layer 4 load balancing to provide as much information as possible as input to the EtherChannel hashing algorithm and achieve the best and most uniform utilization of EtherChannel members. The port-channel load-balance command is used to present these more unique values to the hashing algorithm; for example, dist1-6500(config)#port-channel load-balance src-dst-port.

To achieve the best load balancing, use two, four, or eight ports in the port channel.
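
A minimal sketch of this tuning on a Catalyst 6500 follows (the hostname is assumed for illustration); the load-balancing method is configured globally and can be verified afterward:

    ! Include Layer 4 source and destination ports in the EtherChannel hash (applies switch-wide)
    dist1-6500(config)#port-channel load-balance src-dst-port
    ! Confirm the configured load-balancing method
    dist1-6500#show etherchannel load-balance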

Routing Protocol Design

This section reviews design recommendations for routing protocols in the enterprise campus.

Routing protocols are typically deployed across the distribution-to-core and core-to-core interconnections.

Layer 3 routing design can also be used in the access layer, but this design is currently not as common.

Layer 3 routing protocols are used to quickly reroute around failed nodes or links while providing load balancing over redundant paths.

Build Redundant Triangles

For optimum distribution-to-core layer convergence, build redundant triangles, not squares, to take advantage of equal-cost, redundant paths for the best deterministic convergence.

The topology connecting the distribution and core switches should be built using triangles, with equal-cost paths to all redundant nodes. The triangle design, shown as Model A in Figure 2-13, uses dual equal-cost paths to avoid timer-based, nondeterministic convergence. Instead of indirect neighbor or route-loss detection using hellos and dead timers, the triangle design failover is hardware based and relies on physical link loss to mark a path as unusable and reroute all traffic to the alternate equal-cost path. There is no need for OSPF or EIGRP to recalculate a new path.

Figure 2-13: Build Redundant Triangles

In contrast, the square topology, shown as Model B in Figure 2-13, requires routing protocol convergence to fail over to an alternate path in the event of a link or node failure. It is possible to build a topology that does not rely on equal-cost, redundant paths to compensate for limited physical fiber connectivity or to reduce cost. However, with this design, it is not possible to achieve the same deterministic convergence in the event of a link or node failure, and for this reason the design will not be optimized for high availability.

Figure 2-14: Use Passive Interfaces at the Access Layer

Peer Only on Transit Links

Another recommended practice is to limit unnecessary peering across the access layer by peering only on transit links.

By default, the distribution layer switches send routing updates and attempt to peer across the uplinks from the access switches to the remote distribution switches on every VLAN. This is unnecessary and wastes CPU processing time.

Figure 2-14 shows an example network where, with four VLANs per access switch and three access switches, 12 unnecessary adjacencies are formed. Only the adjacency on the link between the distribution switches is needed. This redundant Layer 3 peering has no benefit from a high-availability perspective and only adds load in terms of memory, routing protocol update overhead, and complexity. In addition, in the event of a link failure, it is possible for traffic to transit through a neighboring access layer switch, which is not desirable.

As a recommended practice, limit unnecessary routing peer adjacencies by configuring the ports toward Layer 2 access switches as passive, which will suppress the advertising of routing updates. If a distribution switch does not receive routing updates from a potential peer on a specific interface, it will not need to process these updates, and it will not form a neighbor adjacency with the potential peer across that interface.

There are two approaches to configuring passive interfaces for the access switches:

  • Use the passive-interface default command, and selectively use the no passive-interface command to enable a neighbor relationship where peering is desired.

  • Use the passive-interface command to selectively make specific interfaces passive.

    Passive interface configuration example for OSPF:

    AGG1(config)#router ospf 1
    AGG1(config-router)#passive-interface Vlan99
    ! Or
    AGG1(config)#router ospf 1
    AGG1(config-router)#passive-interface default
    AGG1(config-router)#no passive-interface Vlan99

    Passive interface configuration example for EIGRP:

    AGG1(config)#router eigrp 1
    AGG1(config-router)#passive-interface Vlan99
    ! Or
    AGG1(config)#router eigrp 1
    AGG1(config-router)#passive-interface default
    AGG1(config-router)#no passive-interface Vlan99

You should use whichever technique requires the fewest lines of configuration or is the easiest for you to manage.

Summarize at the Distribution Layer

A hierarchical routing design reduces routing update traffic and avoids unnecessary routing computations. Such a hierarchy is achieved through allocating IP networks in contiguous blocks that can be easily summarized by a dynamic routing protocol.

It is a recommended practice to configure route summarization at the distribution layer to advertise a single summary route that represents multiple IP networks within the building (switch block). As a result, fewer routes are advertised through the core layer and subsequently to the distribution layer switches in other buildings (switch blocks). If the routing information is not summarized toward the core, EIGRP and OSPF require interaction with a potentially large number of peers to converge around a failed node.

Summarization at the distribution layer optimizes the rerouting process. If a link to an access layer device goes down, return traffic at the distribution layer to that device is dropped until the IGP converges. When summarization is used and the distribution nodes send summarized information toward the core, an individual distribution node does not advertise the loss of connectivity to a single VLAN or subnet. This means that the core does not know that it cannot send traffic to the distribution switch where the access link has failed. Summaries limit the number of peers that an EIGRP router must query and the number of link-state advertisements (LSAs) that OSPF must process, and thereby speed the rerouting process.

Summarization should be performed at the boundary where the distribution layer of each building connects to the core. The method for configuring route summarization varies, depending on the IGP being used. Route summarization is covered in detail in Chapter 3, “Developing an Optimum Design for Layer 3.” These designs require a Layer 3 link between the distribution switches, as shown in Figure 2-15, to allow the distribution node that loses connectivity to a given VLAN or subnet to reroute traffic across the distribution-to-distribution link. To be effective, the address space selected for the distribution-to-distribution link must be within the address space being summarized.

Figure 2-15: Summarize at the Distribution Layer

Summarization relies on a solid network addressing design.
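
As a sketch of what this configuration might look like (the addresses, process and AS numbers, and interface names below are assumptions chosen for illustration), a distribution switch could advertise a single summary for its building's 10.1.0.0/16 block toward the core with EIGRP, or with an OSPF area range if it is the area border router for the building's area:

    ! EIGRP: advertise one summary on each uplink toward the core (values illustrative)
    Dist1(config)#interface TenGigabitEthernet4/1
    Dist1(config-if)#ip summary-address eigrp 100 10.1.0.0 255.255.0.0
    !
    ! OSPF: summarize the building's area at the ABR (values illustrative)
    Dist1(config)#router ospf 1
    Dist1(config-router)#area 10 range 10.1.0.0 255.255.0.0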

