Overview of SONA and Borderless Networks
Proper network architecture helps ensure that business strategies and IT investments are aligned. As the backbone for IT communications, the network element of enterprise architecture is increasingly critical. Service-Oriented Network Architecture (SONA) is the Cisco architectural approach to designing advanced network capabilities.
Figure 1-11 illustrates SONA pictorially from a marketing perspective.
SONA provides guidance, best practices, and blueprints for connecting network services and applications to enable business solutions. The SONA framework illustrates the concept that the network is the common element that connects and enables all components of the IT infrastructure. SONA outlines these three layers of intelligence in the enterprise network:
- The Networked Infrastructure Layer: Where all the IT resources are interconnected across a converged network foundation. The IT resources include servers, storage, and clients. The network infrastructure layer represents how these resources exist in different places in the network, including the campus, branch, data center, WAN, metropolitan-area network (MAN), and telecommuter. The objective for customers in this layer is to have anywhere and anytime connectivity.
- The Interactive Services Layer: Enables efficient allocation of resources to applications and business processes delivered through the networked infrastructure.
- The Application Layer: Includes business applications and collaboration applications. The objective for customers in this layer is to meet business requirements and achieve efficiencies by leveraging the interactive services layer.
The common thread that links the layers is that SONA embeds application-level intelligence into the network infrastructure elements so that the network can recognize and better support applications and services.
Deploying a campus design based on the Cisco SONA framework yields several benefits:
- Convergence, virtualization, intelligence, security, and integration in all areas of the network infrastructure: The Cisco converged network encompasses all IT technologies, including computing, data, voice, video, and storage. The entire network now provides more intelligence for delivering all applications, including voice and video. Employees are more productive because they can use a consistent set of Unified Communications tools from almost anywhere in the world.
- Cost savings: With the Cisco SONA model, the network offers the power and flexibility to implement new applications easily, which reduces development and implementation costs. Common network services are used on an as-needed basis by voice, data, and video applications.
- Increased productivity: Collaboration services and product features enable employees to share multiple information types on a rich-media conferencing system. For example, agents in contact centers can share a web browser with a customer during a voice call to speed up problem resolution and increase customer knowledge using a tool such as Cisco WebEx. Collaboration has enabled contact center agents to reduce the average time spent on each call, yet receive higher customer satisfaction ratings. Another example is the cost savings associated with hosting virtual meetings using Cisco WebEx.
- Faster deployment of new services and applications: Organizations can better deploy services for interactive communications through virtualization of storage, cloud computing, and other network resources. Automated processes for provisioning, monitoring, managing, and upgrading voice products and services help Cisco IT achieve greater network reliability and maximize the use of IT resources. Cloud computing is the next wave of new technology to be utilized in enterprise environments.
- Enhanced business processes: With SONA, IT departments can better support and enhance business processes and resilience through integrated applications and intelligent network services. Examples include change-control processes that enable 99.999 percent network uptime.
Keep in mind that SONA is strictly a model to guide network designs. When designing the campus portion of the enterprise network, you need to understand SONA only at a high level, because most of the campus design focus centers on the features and functions of Cisco switching.
Cisco.com contains additional information and readings on SONA for readers seeking more details.
In October 2009, Cisco launched a new enterprise architecture called Borderless Networks. As with SONA, the model behind Borderless Networks enables businesses to transcend borders, access resources anywhere, embrace business productivity, and lower business and IT costs. One enhancement of Borderless Networks over SONA is that the framework focuses more on growing enterprises into global companies, as noted in the term "borderless." In terms of CCNP SWITCH, focus on a high-level understanding of SONA because Borderless Networks is a newer framework. Consult Cisco.com for additional information on Borderless Networks.
In review, SONA and Borderless Networks are marketing architectures that form high-level frameworks for designing networks. For the purpose of designing a campus network, focus on building requirements around traffic flow, scale, and general connectivity needs. The next section applies a life-cycle approach to campus design and delves into more specific details about campus designs.
Enterprise Campus Design
The next subsections detail key enterprise campus design concepts. The access, distribution, and core layers introduced earlier in this chapter are expanded on with applied examples. Later subsections of this chapter define a model for implementing and operating a network.
The tasks of implementing and operating a network are two components of the Cisco Lifecycle model. In this model, the life of the network and its components is approached in a structured manner, starting from the preparation of the network design and continuing through the optimization of the implemented network. This structured approach is key to ensuring that the network always meets the requirements of the end users. This section describes the Cisco Lifecycle approach and its impact on network implementation.
The enterprise campus architecture can be applied at the campus scale, or at the building scale, to allow flexibility in network design and facilitate ease of implementation and troubleshooting. When applied to a building, the Cisco Campus Architecture naturally divides networks into the building access, building distribution, and building core layers, as follows:
- Building access layer: This layer is used to grant user access to network devices. In a network campus, the building access layer generally incorporates switched LAN devices with ports that provide connectivity to workstations and servers. In the WAN environment, the building access layer at remote sites can provide access to the corporate network across WAN technology.
- Building distribution layer: Aggregates the wiring closets and uses switches to segment workgroups and isolate network problems.
- Building core layer: Also known as the campus backbone, this is a high-speed backbone designed to switch packets as fast as possible. Because the core is critical for connectivity, it must provide a high level of availability and adapt to changes quickly.
 Figure 1-12 illustrates a sample enterprise network topology that spans multiple buildings.
The enterprise campus architecture divides the enterprise network into physical, logical, and functional areas. These areas enable network designers and engineers to associate specific network functionality with equipment based upon its placement and function in the model.
Access Layer In-Depth
The building access layer aggregates end users and provides uplinks to the distribution layer. With the proper use of Cisco switches, the access layer may provide the following benefits:
- High availability: The access layer is supported by many hardware and software features. System-level redundancy using redundant supervisor engines and redundant power supplies for critical user groups is an available option within the Cisco switch portfolio. Moreover, additional software features of Cisco switches offer default gateway redundancy using dual connections from access switches to redundant distribution layer switches that run a first-hop redundancy protocol (FHRP) such as the Hot Standby Router Protocol (HSRP). Of note, FHRPs such as HSRP run only on Layer 3 switches; Layer 2 access switches do not participate in the protocol and simply forward the respective frames.
- Convergence: Cisco switches deployed in an access layer optionally support inline Power over Ethernet (PoE) for IP telephony and wireless access points, enabling customers to converge voice onto their data network and providing roaming WLAN access for users.
- Security: Cisco switches used in an access layer optionally provide services for additional security against unauthorized access to the network through the use of tools such as port security, DHCP snooping, Dynamic ARP Inspection (DAI), and IP Source Guard. These features are discussed in later chapters of this book; a brief configuration sketch follows Figure 1-13.
Figure 1-13 illustrates the use of an access layer deploying redundant upstream connections to the distribution layer.
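As a rough illustration of the access layer security tools listed above, the following Cisco IOS sketch enables DHCP snooping and port security for a single access port. The interface, VLAN number, and limits are hypothetical placeholders rather than recommended values, and exact syntax varies by platform and software release; later chapters cover these features in depth.

! Hypothetical access port hardened with port security and DHCP snooping
ip dhcp snooping
ip dhcp snooping vlan 10
!
interface GigabitEthernet0/1
 switchport mode access
 switchport access vlan 10
 switchport port-security
 switchport port-security maximum 2
 switchport port-security violation restrict
 spanning-tree portfast

In a real deployment, the uplinks toward the distribution layer would also be marked as trusted for DHCP snooping.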
 Distribution Layer
Availability, fast path recovery, load balancing, and QoS are the important considerations at the distribution layer. High availability is typically provided through dual paths from the distribution layer to the core, and from the access layer to the distribution layer. Layer 3 equal-cost load sharing enables both uplinks from the distribution to the core layer to be utilized.
The distribution layer is the place where routing and packet manipulation are performed and can be a routing boundary between the access and core layers. The distribution layer represents a redistribution point between routing domains or the demarcation between static and dynamic routing protocols. The distribution layer performs tasks such as controlled routing decision making and filtering to implement policy-based connectivity and QoS. To improve routing protocol performance further, the distribution layer summarizes routes from the access layer. For some networks, the distribution layer offers a default route to access layer routers and runs dynamic routing protocols when communicating with core routers.
The distribution layer uses a combination of Layer 2 and multilayer switching to segment workgroups and isolate network problems, preventing them from affecting the core layer. The distribution layer is commonly used to terminate VLANs from access layer switches. The distribution layer connects network services to the access layer and implements policies for QoS, security, traffic loading, and routing. The distribution layer provides default gateway redundancy by using an FHRP such as HSRP, Gateway Load Balancing Protocol (GLBP), or Virtual Router Redundancy Protocol (VRRP) to allow for the failure or removal of one of the distribution nodes without affecting endpoint connectivity to the default gateway.
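As a minimal sketch of the default gateway redundancy just described, the following Cisco IOS configuration runs HSRP on a distribution-layer switched virtual interface. The VLAN, addresses, and group number are hypothetical placeholders; GLBP and VRRP follow a similar pattern with different keywords.

! Hypothetical HSRP configuration on one of two distribution switches
interface Vlan10
 ip address 10.1.10.2 255.255.255.0
 standby 10 ip 10.1.10.1
 standby 10 priority 110
 standby 10 preempt

The peer distribution switch would carry the same standby group and virtual IP address with a lower priority, so that it takes over the default gateway role if this switch or its uplink fails.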
In review, the distribution layer provides the following enhancements to the campus network design:
- Aggregates access layer switches
- Segments the access layer for simplicity
- Summarizes routing to the access layer
- Always dual-connected to the upstream core layer
- Optionally applies packet filtering, security features, and QoS features
 Figure 1-14 illustrates the distribution layer interconnecting several access layer switches.
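To illustrate the route summarization role mentioned earlier, the following sketch advertises a single summary for the access-layer subnets toward the core. The EIGRP autonomous system number, interface, and summary range are hypothetical placeholders and assume EIGRP is already running; OSPF designs would summarize with area range statements instead.

! Hypothetical summary of access-layer subnets sent out a core-facing uplink
interface TenGigabitEthernet1/1
 ip summary-address eigrp 100 10.1.0.0 255.255.0.0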
 Core Layer
The core layer is the backbone for campus connectivity and is the aggregation point for the other layers and modules in the enterprise network. The core must provide a high level of redundancy and adapt to changes quickly. Core devices are most reliable when they can accommodate failures by rerouting traffic and can respond quickly to changes in the network topology. The core devices must be able to implement scalable protocols and technologies, alternative paths, and load balancing. The core layer helps in scalability during future growth.
The core should be a high-speed, Layer 3 switching environment that utilizes hardware-accelerated services such as 10 Gigabit Ethernet. For fast convergence around a link or node failure, the core uses redundant point-to-point Layer 3 interconnections, because this design yields the fastest and most deterministic convergence results. The core layer should not perform any packet manipulation in software, such as checking access lists and filtering, which would slow down the switching of packets. Catalyst and Nexus switches support access lists and filtering without affecting switching performance by supporting these features in the hardware switch path.
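A minimal sketch of such a redundant point-to-point Layer 3 interconnection is shown below; the interface, description, and /30 addressing are hypothetical placeholders.

! Hypothetical routed point-to-point core link on a Catalyst multilayer switch
interface TenGigabitEthernet1/1
 description Link to distribution switch A
 no switchport
 ip address 10.0.0.1 255.255.255.252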
 Figure 1-15 depicts the core layer aggregating multiple distribution layer switches and subsequently access layer switches.
In review, the core layer provides the following functions to the campus and enterprise network:
- Aggregates multiple distribution switches in the distribution layer with the remainder of the enterprise network
- Provides the aggregation points with redundancy through fast convergence and high availability
- Designed to scale as the distribution, and consequently the access, layer scales with future growth
The Need for a Core Layer
Without a core layer, the distribution layer switches need to be fully meshed. This design is difficult to scale and increases the cabling requirements because each new building distribution switch needs full-mesh connectivity to all the distribution switches. This full-mesh connectivity requires a significant amount of cabling for each distribution switch. The routing complexity of a full-mesh design also increases as you add new neighbors.
In Figure 1-16, the distribution module in the second building of two interconnected switches requires four additional links for full-mesh connectivity to the first module. A third distribution module to support the third building would require eight additional links to support connections to all the distribution switches, or a total of 12 links. A fourth module supporting the fourth building would require 12 new links for a total of 24 links between the distribution switches. Four distribution modules impose eight interior gateway protocol (IGP) neighbors on each distribution switch.
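To generalize the arithmetic (an observation added here, not part of the original figure discussion): with m distribution modules of two switches each, every pair of modules needs 2 × 2 = 4 cross links, so a full mesh requires 4 × m(m − 1)/2 = 2m(m − 1) inter-module links. That works out to 4 links for two modules, 12 for three, and 24 for four, matching the totals above.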
As a recommended practice, deploy a dedicated campus core layer to connect three or more physical segments, such as buildings in the enterprise campus, or four or more pairs of building distribution switches in a large campus. The campus core helps make scaling the network easier when using Cisco switches with the following properties:
- 10-Gigabit and 1-Gigabit density to scale
- Seamless data, voice, and video integration
- LAN convergence, optionally with additional WAN and MAN convergence
Campus Core Layer as the Enterprise Network Backbone
The core layer is the backbone for campus connectivity and optionally the aggregation point for the other layers and modules in the enterprise campus architecture. The core provides a high level of redundancy and can adapt to changes quickly. Core devices are most reliable when they can accommodate failures by rerouting traffic and can respond quickly to changes in the network topology. The core devices implement scalable protocols and technologies, alternative paths, and load balancing. The core layer helps in scalability during future growth. The core layer simplifies the organization of network device interconnections. This simplification also reduces the complexity of routing between physical segments such as floors and between buildings.
Figure 1-17 illustrates the core layer as a backbone interconnecting the data center and Internet edge portions of the enterprise network. Beyond its logical position in the enterprise network architecture, the core layer constituents and functions depend on the size and type of the network. Not all campus implementations require a campus core. Optionally, campus designs can combine the core and distribution layer functions at the distribution layer for a smaller topology. The next section discusses one such example.
 Small Campus Network Example
A small campus network or large branch network is defined as a network of fewer than 200 end devices, where the network servers and workstations might be physically connected to the same wiring closet. Switches in a small campus network design might not require high-end switching performance or future scaling capability.
In many cases with a network of fewer than 200 end devices, the core and distribution layers can be combined into a single layer. This design limits scale to a few access layer switches for cost purposes. Low-end multilayer switches such as the Cisco Catalyst 3560E optionally provide routing services closer to the end user when there are multiple VLANs. For a small office, one low-end Layer 2 switch such as the Cisco Catalyst 2960G might support the Layer 2 LAN access requirements for the entire office, whereas a router such as the Cisco 1900 or 2900 might interconnect the office to the branch/WAN portion of a larger enterprise network.
Figure 1-17 depicts a sample small campus network with a campus backbone that interconnects the data center. In this example, the backbone could be deployed with Catalyst 3560E switches, and the access layer and data center could utilize Catalyst 2960G switches, with limited future scalability and limited high availability.
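A minimal sketch of what such a collapsed core/distribution multilayer switch might look like follows, using a 3560E-class switch as the inter-VLAN router. The VLAN IDs, names, and addressing are hypothetical placeholders.

! Hypothetical inter-VLAN routing on a small-office multilayer switch
ip routing
!
vlan 10
 name DATA
vlan 20
 name VOICE
!
interface Vlan10
 ip address 10.10.10.1 255.255.255.0
!
interface Vlan20
 ip address 10.10.20.1 255.255.255.0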
 Medium Campus Network Example
For a medium-sized campus with 200 to 1000 end devices, the network infrastructure typically uses access layer switches with uplinks to distribution multilayer switches that can support the performance requirements of a medium-sized campus network. If redundancy is required, you can attach redundant multilayer switches to the building access switches to provide full link redundancy. In a medium-sized campus network, it is best practice to use at least the Catalyst 4500 series or Catalyst 6500 family of switches because they offer high availability, security, and performance characteristics not found in the Catalyst 3000 and 2000 families of switches.
Figure 1-18 shows a sample medium campus network topology. The example depicts physical distribution segments as buildings. However, physical distribution segments might be floors, racks, and so on.
 Large Campus Network Design
Large campus networks are any installation of more than 2000 end users. Because there is no upper bound to the size of a large campus, the design might incorporate many scaling technologies throughout the enterprise. Specifically, in the campus network, the designs generally adhere to the access, distribution, and core layers discussed in earlier sections. Figure 1-17 illustrates a sample large campus network, scaled down to fit this publication.
Large campus networks strictly follow Cisco best practices for design. The best practices listed in this chapter, such as following the hierarchical model, deploying Layer 3 switches, and utilizing the Catalyst 6500 and Nexus 7000 switches in the design, scratch only the surface of the features required to support such a scale. Many of these features are still used in small and medium-sized campus networks, but not to the scale of large campus networks.
Moreover, because large campus networks require more persons to design, implement, and maintain the environment, the distribution of work is generally segmented. The sections of the enterprise network previously mentioned in this chapter (campus, data center, branch/WAN, and Internet edge) are the first-level division of work among network engineers in large campus networks. Later chapters discuss many of the features that might be optional for smaller campuses but become requirements for larger networks. In addition, large campus networks require sound design and implementation plans. Design and implementation plans are discussed in upcoming sections of this chapter.
 Data Center Infrastructure
The data center design as part of the enterprise network is based on a layered approach to improve scalability, performance, flexibility, resiliency, and maintenance. There are three layers of the data center design:
- Core layer: Provides a high-speed packet switching backplane for all flows going in and out of the data center.
- Aggregation layer: Provides important functions, such as service module integration, Layer 2 domain definitions, spanning-tree processing, and default gateway redundancy.
- Access layer: Connects servers physically to the network.
Multitier HTTP-based applications supporting web, application, and database tiers of servers dominate the multitier data center model. The access layer network infrastructure can support both Layer 2 and Layer 3 topologies, and Layer 2 adjacency requirements that fulfill the various server broadcast domain or administrative requirements. Layer 2 in the access layer is more prevalent in the data center because some applications achieve low latency via Layer 2 domains. Most servers in the data center consist of single- and dual-attached one-rack-unit (1RU) servers, blade servers with integrated switches, blade servers with pass-through cabling, clustered servers, and mainframes with a mix of oversubscription requirements. Figure 1-19 illustrates a sample data center topology at a high level.
Multiple aggregation modules in the aggregation layer support connectivity scaling from the access layer. The aggregation layer supports integrated service modules providing services such as security, load balancing, content switching, firewall, SSL offload, intrusion detection, and network analysis.
As previously noted, this book focuses on the campus network design of the enterprise network, exclusive of data center design. However, many of the topics presented in this text, such as the use of VLANs, overlap with topics applicable to data center design. Data center designs differ in approach and requirements. For the purpose of CCNP SWITCH, focus primarily on campus network design concepts.
The next section discusses a lifecycle approach to network design. This section does not cover specific campus or switching technologies but rather a best-practice approach to design. Some readers might opt to skip this section because of its lack of technical content; however, it is an important section for CCNP SWITCH and practical deployments.
PPDIOO Lifecycle Approach to Network Design and Implementation
PPDIOO stands for Prepare, Plan, Design, Implement, Operate, and Optimize. PPDIOO is a Cisco methodology that defines the continuous life cycle of services required for a network.
 PPDIOO Phases
 The PPDIOO phases are as follows:
- Prepare: Involves establishing the organizational requirements, developing a network strategy, and proposing a high-level conceptual architecture identifying technologies that can best support the architecture. The prepare phase can establish a financial justification for the network strategy by assessing the business case for the proposed architecture.
- Plan: Involves identifying initial network requirements based on goals, facilities, user needs, and so on. The plan phase involves characterizing sites, assessing any existing networks, and performing a gap analysis to determine whether the existing system infrastructure, sites, and operational environment can support the proposed system. A project plan is useful for helping manage the tasks, responsibilities, critical milestones, and resources required to implement changes to the network. The project plan should align with the scope, cost, and resource parameters established in the original business requirements.
- Design: The initial requirements that were derived in the planning phase drive the activities of the network design specialists. The network design specification is a comprehensive detailed design that meets current business and technical requirements and incorporates specifications to support availability, reliability, security, scalability, and performance. The design specification is the basis for the implementation activities.
- Implement: The network is built or additional components are incorporated according to the design specifications, with the goal of integrating devices without disrupting the existing network or creating points of vulnerability.
- Operate: Operation is the final test of the appropriateness of the design. The operational phase involves maintaining network health through day-to-day operations, including maintaining high availability and reducing expenses. The fault detection, correction, and performance monitoring that occur in daily operations provide the initial data for the optimization phase.
- Optimize: Involves proactive management of the network. The goal of proactive management is to identify and resolve issues before they affect the organization. Reactive fault detection and correction (troubleshooting) is needed when proactive management cannot predict and mitigate failures. In the PPDIOO process, the optimization phase can prompt a network redesign if too many network problems and errors arise, if performance does not meet expectations, or if new applications are identified to support organizational and technical requirements.
Benefits of a Lifecycle Approach
The network lifecycle approach provides several key benefits aside from keeping the design process organized. The main documented reasons for applying a lifecycle approach to campus design are as follows:
- Lowering the total cost of network ownership
- Increasing network availability
- Improving business agility
- Speeding access to applications and services
The total cost of network ownership is especially important in today's business climate. Lower costs associated with IT expenses are being aggressively pursued by enterprise executives. A proper network lifecycle approach aids in lowering costs through these actions:
- Identifying and validating technology requirements
- Planning for infrastructure changes and resource requirements
- Developing a sound network design aligned with technical requirements and business goals
- Accelerating successful implementation
- Improving the efficiency of your network and of the staff supporting it
- Reducing operating expenses by improving the efficiency of operational processes and tools
Network availability has always been a top priority of enterprises, because network downtime can result in a loss of revenue. Examples of downtime causing a loss of revenue include network outages that prevent market trading during a surprise interest rate cut, or the inability to process credit card transactions on Black Friday, the shopping day following Thanksgiving. The network lifecycle improves the high availability of networks through these actions:
- Assessing the network's security state and its capability to support the proposed design
- Specifying the correct set of hardware and software releases, and keeping them operational and current
- Producing a sound operations design and validating network operations
- Staging and testing the proposed system before deployment
- Improving staff skills
- Proactively monitoring the system and assessing availability trends and alerts
- Proactively identifying security breaches and defining remediation plans
Enterprises need to react quickly to changes in the economy. Enterprises that execute quickly gain competitive advantages over other businesses. The network lifecycle approach improves business agility through the following actions:
- Establishing business requirements and technology strategies
- Readying sites to support the system that you want to implement
- Integrating technical requirements and business goals into a detailed design and demonstrating that the network is functioning as specified
- Expertly installing, configuring, and integrating system components
- Continually enhancing performance
Accessibility to network applications and services is critical to a productive environment. As such, the network lifecycle accelerates access to network applications and services by the following actions:
- Assessing and improving operational preparedness to support current and planned network technologies and services
- Improving service-delivery efficiency and effectiveness by increasing availability, resource capacity, and performance
- Improving the availability, reliability, and stability of the network and the applications running on it
- Managing and resolving problems affecting your system and keeping software applications current
 Planning a Network Implementation
The more detailed the implementation plan documentation is, the more likely the implementation will be a success. Although complex implementation steps usually require the designer to carry out the implementation, other staff members can complete well-documented detailed implementation steps without the direct involvement of the designer. In practical terms, most large-enterprise design engineers rarely perform the hands-on steps of deploying a new design. Instead, network operations or implementation engineers are often the persons deploying a new design based on an implementation plan.
Moreover, when implementing a design, you must consider the possibility of a failure, even after a successful pilot or prototype network test. You need a well-defined, but simple, process to test at every step and a procedure to revert to the original setup in case there is a problem.
Implementation Components
Implementation of a network design consists of several phases (install hardware, configure systems, launch into production, and so on). Each phase consists of several steps, and each step should contain, but not be limited to, the following documentation:
- Description of the step
- Reference to design documents
- Detailed implementation guidelines
- Detailed rollback guidelines in case of failure
- Estimated time needed for implementation
Summary Implementation Plan
Table 1-3 provides an example of an implementation plan for migrating users to new campus switches. Implementations can vary significantly between enterprises. The look and feel of your actual implementation plan can vary to meet the requirements of your organization.
Each step of each phase in the implementation is described briefly, with references to the detailed implementation plan for further details. The detailed implementation plan section should describe the precise steps necessary to complete the phase.
Detailed Implementation Plan
A detailed implementation plan describes the exact steps necessary to complete the implementation phase. It is necessary to include steps to verify and check the work of the engineers implementing the plan. The following list illustrates a sample network implementation plan:
Section 6.2.4.6, “Configure Layer 2 features such as VLAN, STP, and QoS on new campus switches”
- Number of switches involved: 8
- Refer to Section 1.1 for physical port mapping to VLANs
- Use the configuration template from Section 4.2.3 for VLAN configuration (a generic sketch of such a template appears after this list)
- Refer to Section 1.2 for physical port mapping to spanning-tree configuration
- Use the configuration template from Section 4.2.4 for spanning-tree configuration
- Refer to Section 1.3 for physical port mapping to QoS configuration
- Use the configuration template from Section 4.2.5 for QoS configuration
- Estimate configuration time to be 30 minutes per switch
- Verify the configuration, preferably by another engineer
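To make the idea of a configuration template concrete, the following sketch shows what a generic VLAN and spanning-tree template referenced by such a plan might contain. The VLAN IDs, names, and interface range are hypothetical placeholders, not the actual templates from Sections 4.2.3 and 4.2.4, which are not reproduced here.

! Hypothetical VLAN and spanning-tree template for a new access switch
vlan 110
 name USERS
vlan 120
 name VOICE
!
spanning-tree mode rapid-pvst
!
interface range GigabitEthernet1/0/1 - 24
 switchport mode access
 switchport access vlan 110
 switchport voice vlan 120
 spanning-tree portfast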
 
This section highlighted the key concepts around PPDIOO. Although this topic is not a technical one, the best practices highlighted will go a long way with any network design and implementation plan. Poor plans will always yield poor results. Today's networks are too critical to business operations not to plan effectively. As such, reviewing and utilizing the Cisco Lifecycle approach will increase the likelihood of success for any network implementation.