Examining Operations Security
Operations security is concerned with the day-to-day practices necessary to first deploy and later maintain a secure system. This section examines these principles.
Secure Network Life Cycle Management
The responsibilities of the operations team pertain to everything that takes place to keep a network, computer systems, applications, and the environment up and running in a secure and protected manner. After the network is set up, the operation tasks begin, including the continual day-to-day maintenance of the environment. These activities are regular in nature and enable the environment, systems, and applications to continue to run correctly and securely.
Operations within a computing environment can pertain to software, personnel, and hardware, but an operations department often focuses only on the hardware and software aspects. Management is responsible for the behavior and responsibilities of employees. The people within operations are responsible for ensuring that systems are protected and that they continue to run in a predictable manner.
The operations team usually has the following objectives:
-
Preventing recurring problems
-
Reducing hardware failures to an acceptable level
-
Reducing the impact of hardware failure or disruption
This group should investigate any unusual or unexplained occurrences, unscheduled initial program loads, deviations from standards, or other odd or abnormal conditions that take place on the network.
Including security early in the information system development process, that is, in the system development life cycle (SDLC), usually results in less-expensive and more-effective security than adding it to a system that is already operational.
A general SDLC includes five phases:
-
Initiation
-
Acquisition and development
-
Implementation
-
Operations and maintenance
-
Disposition
Each of these five phases includes a minimum set of security steps that you need to effectively incorporate security into a system during its development. An organization either uses the general SDLC or develops a tailored SDLC that meets its specific needs. In either case, the National Institute of Standards and Technology (NIST) recommends that organizations incorporate the associated IT security steps of this general SDLC into their development process.
Initiation Phase
The initiation phase of the SDLC includes the following:
-
Security categorization: This step defines three levels, such as low, moderate, and high, of potential impact on organizations or individuals should a breach of security occur (a loss of confidentiality, integrity, or availability). Security categorization standards help organizations make the appropriate selection of security controls for their information systems.
-
Preliminary risk assessment: This step results in an initial description of the basic security needs of the system. A preliminary risk assessment should define the threat environment in which the system will operate.
Acquisition and Development Phase
The acquisition and development phase of the SDLC includes the following:
-
Risk assessment: This step is an analysis that identifies the protection requirements for the system through a formal risk-assessment process. This analysis builds on the initial risk assessment that was performed during the initiation phase, but is more in depth and specific.
-
Security functional requirements analysis: This step is an analysis of requirements and can include the following components: system security environment, such as the enterprise information security policy and enterprise security architecture, and security functional requirements.
-
Security assurance requirements analysis: This step is an analysis of the requirements that address the developmental activities required and the assurance evidence needed to produce the desired level of confidence that the information security will work correctly and effectively. The analysis, based on legal and functional security requirements, is used as the basis for determining how much and what kinds of assurance are required.
-
Cost considerations and reporting: This step determines how much of the development cost you can attribute to information security over the life cycle of the system. These costs include hardware, software, personnel, and training.
-
Security planning: This step ensures that you fully document any agreed-upon security controls, whether they are planned or already in place. The security plan also provides a complete characterization or description of the information system and attachments or references to key documents that support the information security program of the agency. Examples of documents that support the information security program include a configuration management plan, a contingency plan, an incident response plan, a security awareness and training plan, rules of behavior, a risk assessment, security test and evaluation results, system interconnection agreements, security authorizations and accreditations, and a plan of action and milestones.
-
Security control development: This step ensures that the security controls that the respective security plans describe are designed, developed, and implemented. The security plans for information systems that are currently in operation may call for the development of additional security controls to supplement the controls that are already in place or the modification of selected controls that are deemed less than effective.
-
Developmental security test and evaluation: This ensures that security controls that you develop for a new information system are working properly and are effective. Some types of security controls, primarily those controls of a nontechnical nature, cannot be tested and evaluated until the information system is deployed. These controls are typically management and operational controls.
-
Other planning components: This step ensures that you consider all the necessary components of the development process when you incorporate security into the network life cycle. These components include the selection of the appropriate contract type, the participation by all the necessary functional groups within an organization, the participation by the certifier and accreditor, and the development and execution of the necessary contracting plans and processes.
Implementation Phase
The implementation phase of the SDLC includes the following:
-
Inspection and acceptance: This step ensures that the organization validates and verifies that the functionality that the specification describes is included in the deliverables.
-
System integration: This step ensures that the system is integrated at the operational site where you will deploy the information system for operation. You enable the security control settings and switches in accordance with the vendor instructions and the available security implementation guidance.
-
Security certification: This step ensures that you effectively implement the controls through established verification techniques and procedures. This step gives organization officials confidence that the appropriate safeguards and countermeasures are in place to protect the information system of the organization. Security certification also uncovers and describes the known vulnerabilities in the information system.
-
Security accreditation: This step provides the necessary security authorization for an information system to process, store, or transmit the required information. This authorization is granted by a senior organization official and is based on the verified effectiveness of security controls to some agreed-upon level of assurance and an identified residual risk to agency assets or operations.
Operations and Maintenance Phase
The operations and maintenance phase of the SDLC includes the following:
-
Configuration management and control: This step ensures that there is adequate consideration of the potential security impacts due to specific changes to an information system or its surrounding environment. Configuration management and configuration control procedures are critical to establishing an initial baseline of hardware, software, and firmware components for the information system and subsequently controlling and maintaining an accurate inventory of any changes to the system.
-
Continuous monitoring: This step ensures that controls continue to be effective in their application through periodic testing and evaluation. Security control monitoring (that is, verifying the continued effectiveness of those controls over time) and reporting the security status of the information system to appropriate agency officials are essential activities of a comprehensive information security program.
Disposition Phase
The disposition phase of the SDLC includes the following:
-
Information preservation: This step ensures that you retain information, as necessary, to conform to current legal requirements and to accommodate future technology changes that can render the retrieval method of the information obsolete.
-
Media sanitization: This step ensures that you delete, erase, and write over data as necessary.
-
Hardware and software disposal: This step ensures that you dispose of hardware and software as directed by the information system security officer.
Principles of Operations Security
Certain core principles underpin secure operations in information systems security (infosec). The following are among these principles:
-
Separation of duties
-
Two-man control
-
Dual operator
-
Rotation of duties
-
Trusted recovery, which includes the following:
-
Failure preparation
-
System recovery
-
Change and configuration controls
Separation of Duties
Separation of duties (SoD) is one of the key concepts of internal control and is the most difficult and sometimes the most costly control to achieve. SoD states that no single individual should have control over two or more phases of a transaction or operation, which makes deliberate fraud more difficult to perpetrate because it requires the collusion of two or more individuals or parties.
The term SoD is already well known in financial systems. Companies understand that roles such as receiving checks, approving discounts, depositing cash, reconciling bank statements, and approving time cards should not be combined in a single person.
In information systems, segregation of duties helps to reduce the potential impact from the actions/inactions of one person. You should organize IT in a way that achieves adequate SoD.
Note | SoD is also known as segregation of duties. |
The two-man control principle requires two individuals to review and approve each other's work. This principle provides accountability and reduces the opportunity for fraud. Because of the obvious overhead involved, this practice is usually limited to sensitive duties that are considered potential security risks.
The dual-operator principle differs from two-man control in that the task itself actually requires two people. An example of the dual-operator principle is a check that requires two signatures before the bank accepts it, or a safety deposit box for which you hold one key and the bank clerk holds the second.
Note | The dual-operator principle is a technical requirement, whereas two-man control is an administrative or policy decision. |
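To make the two-man control principle concrete, the following minimal Python sketch shows one way an application could require a second, distinct individual to approve a sensitive action before it runs. The class, method, and user names (SensitiveAction, approve, alice, bob) are illustrative assumptions, not part of any specific product.

```python
# Minimal sketch of a two-man control check (illustrative names only).
# A sensitive action is requested by one user and must be approved by a
# *different* user before it can execute.

class TwoManControlError(Exception):
    pass

class SensitiveAction:
    def __init__(self, description, requested_by):
        self.description = description
        self.requested_by = requested_by
        self.approved_by = None

    def approve(self, approver):
        if approver == self.requested_by:
            # The requester cannot approve his or her own work.
            raise TwoManControlError("approver must differ from requester")
        self.approved_by = approver

    def execute(self):
        if self.approved_by is None:
            raise TwoManControlError("action requires second-person approval")
        print(f"Executing: {self.description} "
              f"(requested by {self.requested_by}, approved by {self.approved_by})")

# Usage: destroying expired backup tapes requires two people.
action = SensitiveAction("destroy expired backup tapes", requested_by="alice")
action.approve("bob")      # a different individual signs off
action.execute()
```

The dual-operator case could be modeled the same way by requiring two recorded approvals instead of one.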
Rotation of Duties
Rotation of duties is sometimes called job rotation. To successfully implement this principle, it is important that individuals have the training necessary to complete more than one job. Peer review is usually included in the practice of this principle.
For example, suppose that a job-rotation scheme has five people rotating through five different roles during the course of a week. Peer review of work occurs whether or not it was intended. When five people do one job in the course of the week, each person is effectively reviewing the work of the others.
The most obvious benefit of this practice is the great strength and flexibility that would exist within a department because everyone is capable of doing all the jobs. Although the purpose for the practice is rooted in security, you gain an additional business benefit from this breadth of experience of the personnel.
Trusted Recovery
One of the easiest ways to compromise a system is to force it to restart and attack it before all of its defenses have reloaded. For this reason, trusted recovery is a principle of operations security. The trusted recovery principle states that you must expect that systems and individuals will fail at some point and that you must prepare for this failure. Because you anticipate the failure, you can have a recovery plan for both systems and personnel ready and implemented. The most common way to prepare for failure is to back up data on a regular basis.
Backing up data is a normal occurrence in most IT departments and is commonly performed by junior-level staff. However, this is not a very secure operation because backup software uses an account that can bypass file security to back up the files. Therefore, junior-level staff members have access to files that they would ordinarily not be able to access. The same is true if these same junior staff members have the right to restore data.
Security professionals propose that a secure backup program contain some of the following practices:
-
A junior staff member is responsible for loading blank media.
-
Backup software uses an account that is unknown to individuals to bypass file security.
-
A different staff member removes the backup media and securely stores it on site while being assisted by another member of the staff.
-
A separate copy of the backup is stored off site and is handled by a third staff member who is accompanied by another staff member.
Note | One of the easiest ways for attackers to get their hands on a password file (or any other data) is to get a copy of the backup tape, because the backup tape is not always handled or stored securely. |
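As a small illustration of the secure backup practices listed above, the following Python sketch checks that each backup duty is assigned to a different person before a backup job proceeds. The duty descriptions and account names are hypothetical; the point is only that separation of duties in the backup workflow can be verified programmatically.

```python
# Sketch: enforcing that the backup duties listed above are held by
# different people. Duty and account names are illustrative.

def check_backup_roles(assignments):
    """assignments maps a duty to a person; every duty needs a distinct person."""
    people = list(assignments.values())
    duplicates = {person for person in people if people.count(person) > 1}
    if duplicates:
        raise ValueError(f"separation of duties violated by: {sorted(duplicates)}")
    return True

roles = {
    "loads blank media":          "junior.operator",
    "removes and stores on site": "media.custodian",
    "escorts on-site custodian":  "second.staffer",
    "handles off-site copy":      "offsite.courier",
    "escorts off-site handler":   "fourth.staffer",
}

print(check_backup_roles(roles))   # True: no person holds two duties
```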
Being prepared for system failure is an important part of operations security. The following are examples of things that help provide system recovery:
-
Operating systems and applications that have single-user or safe mode help with system recovery.
-
The ability to recover files that were open at the time of the problem helps ensure a smooth system recovery. The autosave process in many desktop applications is an example of this ability. A memory dump that many operating systems perform upon a system failure is also an example of this ability.
-
The ability to retain the security settings of a file after a system crash is critical so that the security is not bypassed by forcing a crash.
-
The ability to recover and retain security settings for critical key system files, such as the Registry, configuration files, password files, and so on, is critical for providing system recovery.
Change and Configuration Control
The goal of change and configuration controls is to ensure that you use standardized methods and procedures to handle all changes efficiently, minimizing the impact of change-related incidents and improving day-to-day operations.
A change is defined as an event that results in a new status of one or more configuration items. A change should be approved by management, be cost-effective, and be an enhancement to business processes with a minimum risk to the IT infrastructure and security.
The three major goals of change and configuration management are as follows:
-
Minimal system and network disruption
-
Preparation to reverse changes
-
More economic utilization of resources
To accomplish configuration changes in an effective and safe manner, adhere to the following suggestions:
-
Ensure that the change is implemented in an orderly manner with formalized testing.
-
Ensure that the end users are aware of the coming change (when necessary).
-
Analyze the effects of the change after it is implemented.
-
Reduce the potential negative impact on performance or security, or both.
Although the change control process differs from organization to organization, certain patterns emerge in change management. The following are steps in a typical change control process:
Step 1 | Apply to introduce the change. |
Step 2 | Catalogue the proposed change. |
Step 3 | Schedule the change. |
Step 4 | Implement the change. |
Step 5 | Report the change to relevant parties. |
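The five steps in this typical change control process can be modeled as a simple state machine. The following Python sketch, with illustrative class and state names, enforces that a change request moves through the steps in order and cannot skip any of them.

```python
# Illustrative sketch of the change control steps as a state machine.
# The state names mirror the five steps listed above; nothing here is
# tied to a specific change-management product.

STEPS = ["applied", "catalogued", "scheduled", "implemented", "reported"]

class ChangeRequest:
    def __init__(self, summary):
        self.summary = summary
        self.state = None  # no step completed yet

    def advance(self, next_state):
        if self.state == STEPS[-1]:
            raise ValueError("change is already complete")
        expected = STEPS[0] if self.state is None else STEPS[STEPS.index(self.state) + 1]
        if next_state != expected:
            raise ValueError(f"cannot move to '{next_state}'; "
                             f"next required step is '{expected}'")
        self.state = next_state
        print(f"Change '{self.summary}' -> {next_state}")

change = ChangeRequest("upgrade edge router software image")
for step in STEPS:          # walk the request through all five steps, in order
    change.advance(step)
```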
Network Security Testing
Security testing provides insight into other SDLC activities, such as risk analysis and contingency planning. You should document security testing results and make them available to staff involved in other IT and security-related areas. Typically, you conduct network security testing during the implementation and operational stages, after the system has been developed, installed, and integrated.
During the implementation stage, you should conduct security testing and evaluation on specific parts of the system and on the entire system as a whole. Security test and evaluation (ST&E) is an examination or analysis of the protective measures that are placed on an information system after it is fully integrated and operational. The following are the objectives of the ST&E:
-
Uncover design, implementation, and operational flaws that could allow the violation of the security policy
-
Determine the adequacy of security mechanisms, assurances, and other properties to enforce the security policy
-
Assess the degree of consistency between the system documentation and its implementation
Once a system is operational, it is important to ascertain its operational status. You can conduct many tests to assess the operational status of the system. The types of tests you use and the frequency in which you conduct them depend on the importance of the system and the resources available for testing. You should repeat these tests periodically and whenever you make a major change to the system. For systems that are exposed to constant threat, such as web servers, or systems that protect critical information, such as firewalls, you should conduct tests more frequently.
Security Testing Techniques
You can use security testing results in the following ways:
-
As a reference point for corrective action
-
To define mitigation activities to address identified vulnerabilities
-
As a benchmark to trace the progress of an organization in meeting security requirements
-
To assess the implementation status of system security requirements
-
To conduct cost and benefit analysis for improvements to system security
-
To enhance other life cycle activities, such as risk assessments, certification and authorization (C&A), and performance-improvement efforts
There are several different types of security testing. Some testing techniques are predominantly manual, and other tests are highly automated. Regardless of the type of testing, the staff that sets up and conducts the security testing should have significant security and networking knowledge, including significant expertise in the following areas: network security, firewalls, IPSs, operating systems, programming, and networking protocols, such as TCP/IP.
Many testing techniques are available, including the following:
-
Network scanning
-
Vulnerability scanning
-
Password cracking
-
Log review
-
Integrity checkers
-
Virus detection
-
War dialing
-
War driving (802.11 or wireless LAN testing)
-
Penetration testing
Common Testing Tools
Many testing tools that you can use to test the security of your systems and networks are available in the modern marketplace; some of the more popular tools are freeware, and some are not.
Note | Many other excellent tools exist. The tools discussed here are only a representative sampling. |
Some testing tools are actually hacking tools. Why not try these tools on your network before a hacker does? Find the weaknesses in your network before an attacker finds and exploits them. With that in mind, look at two tools that are commonly used: Nmap and SuperScan.
Nmap is the best-known low-level scanner available to the public. It is simple to use and has an array of excellent features that you can use for network mapping and reconnaissance. The basic functionality of Nmap enables the user to do the following:
-
Perform classic TCP and UDP port scanning (looking for different services on one host) and sweeping (looking for the same service on multiple systems)
-
Perform stealth port scans and sweeps, which are hard to detect by the target host or IPSs
-
Identify remote operating systems, known as OS fingerprinting, through its TCP idiosyncrasies
Advanced features of Nmap include protocol scanning (known as Layer 3 port scanning), which can identify Layer 3 protocol support on a host, such as generic routing encapsulation (GRE) and Open Shortest Path First (OSPF) support, and the use of decoy hosts on the same LAN to mask the identity of the scanning host.
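As a quick illustration of the scan types described above, the following Python sketch shells out to the Nmap binary. It assumes Nmap is installed, that you are authorized to scan the target, and that you run with sufficient privileges, because SYN, UDP, OS, and protocol scans typically require administrator or root rights. The target address 192.0.2.10 is a placeholder from the documentation range; the flags -sS, -sU, -O, and -sO are standard Nmap options for SYN scanning, UDP scanning, OS fingerprinting, and Layer 3 protocol scanning.

```python
# Sketch: driving a few of the Nmap scan types described above from Python.
# Assumes the nmap binary is on the PATH and that scanning is authorized;
# 192.0.2.10 is a placeholder (documentation) address.
import subprocess

TARGET = "192.0.2.10"

scans = {
    "TCP SYN port scan":     ["nmap", "-sS", "-p", "1-1024", TARGET],
    "UDP port scan":         ["nmap", "-sU", "-p", "53,161", TARGET],
    "OS fingerprinting":     ["nmap", "-O", TARGET],
    "Layer 3 protocol scan": ["nmap", "-sO", TARGET],
}

for name, cmd in scans.items():
    print(f"--- {name}: {' '.join(cmd)}")
    # capture_output requires Python 3.7+; most of these scans need root/admin rights
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout)
```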
Figure 1-18 shows a screen output of ZENMAP, the GUI for Nmap Security Scanner.
Tip | You can download the Nmap program from http://www.insecure.org/nmap. |
SuperScan Version 4 is an update of the highly popular Microsoft Windows port scanning tool, SuperScan. It runs only on Windows XP and Windows 2000 and requires administrator privileges to run. Windows XP SP2 has removed support for raw sockets, which limits the capability of SuperScan and other scanning tools. Some functionality can be restored by entering the net stop SharedAccess command at the Windows command prompt.
The following are some of the features of SuperScan Version 4:
-
Adjustable scanning speed
-
Support for unlimited IP ranges
-
Improved host detection using multiple ICMP methods
-
TCP SYN scanning
-
UDP scanning (two methods)
-
Source port scanning
-
Fast hostname resolving
-
Extensive banner grabbing
-
Massive built-in port list description database
-
IP and port scan order randomization
-
A selection of useful tools (ping, traceroute, Whois, and so on)
-
Extensive Windows host enumeration capability
Figure 1-19 shows a screen capture of SuperScan results.
Note | Visit http://www.remote-exploit.org/backtrack.html to download Backtrack 3, released in June 2008. Backtrack 3, a live CD, is packed with more than 300 security tools to test and secure your network. |
Disaster Recovery and Business Continuity Planning
Business continuity planning and disaster recovery procedures address the continuing operations of an organization in the event of a disaster or prolonged service interruption that affects the mission of the organization. Such plans should address an emergency response phase, a recovery phase, and a return to normal operation phase. You should identify the responsibilities of personnel and the available resources during an incident. In reality, contingency and disaster recovery plans do not address every possible scenario or assumption. Rather, they focus on the events most likely to occur and identify an acceptable method of recovery. Periodically, you should exercise the plans and procedures to ensure that they are effective and well understood.
Business continuity planning provides a short- to medium-term framework to continue the organizational operations. The following are objectives of business continuity planning:
-
Moving or relocating critical business components and people to a remote location while the original location is being repaired
-
Using different channels of communication to deal with customers, shareholders, and partners until operations return to normal
Disaster recovery is the process of regaining access to the data, hardware, and software necessary to resume critical business operations after a natural or human-induced disaster. A disaster recovery plan should also include plans for coping with the unexpected or sudden loss of key personnel. A disaster recovery plan is part of a larger process known as business continuity planning.
After the events of September 11, 2001, when many companies lost irreplaceable data, the effort put into protecting that data has changed. It is believed that some companies spend up to 25 percent of their IT budget on disaster recovery planning to avoid larger losses. Research indicates that of companies that had a major loss of computerized records, 43 percent never reopen, 51 percent close within two years, and only 6 percent survive long term.
Not all disruptions to business operations are equal. Whether the disruption is natural or human, intentional or unintentional, the effect is the same. A good disaster recovery plan takes into account the magnitude of the disruption, recognizing that there are differences between catastrophes, disasters, and nondisasters. In each case, a disruption occurs, but the scale of that disruption can dramatically differ.
Generally, a nondisaster is a situation in which business operations are interrupted for a relatively short period of time. Disasters cause interruptions of at least a day, sometimes longer. The significant detail in a disaster is that the facilities are not 100 percent destroyed. In a catastrophe, the facilities are destroyed, and all operations must be moved.
The only way to deal with destruction is redundancy. When a component is destroyed, it must be replaced with a redundant component. When service is disrupted, it must be insured with a service level agreement (SLA) wherein some compensation is acquired for the disruption in service. And when a facility is destroyed, there must be a redundant facility. Without redundancy, it is impossible to recover from destruction.
Redundant facilities are referred to as hot, warm, and cold sites. Each of these is available for a different price, with different resulting downtimes.
In the case of a hot site, a completely redundant facility is acquired with almost identical equipment. The copying of data to this redundant facility is part of normal operations, so that in the case of a catastrophe, only the latest changes of data must be applied so that full operations are restored. With enough money spent in preparation for a catastrophe, this recovery can take as little as a few minutes or even seconds.
Tip | Organizations that need to respond in seconds often employ global load balancing (GLB) and distributed storage-area networks (SAN) to respond quickly. |
Warm sites are physically redundant facilities without the software and data standing by. Overnight replication would not occur in these instances, necessitating a disaster recovery team to physically go to the redundant facility and bring it up. Depending on how much software and how much data is involved, it can take days to resume operations.
A cold site is usually an empty data center with racks, power, WAN links, and heating, ventilation, and air conditioning (HVAC) already present, but no equipment. In this case, an organization would have to first acquire routers, switches, firewalls, servers, and so on to rebuild everything. Once you restore the backups to the new machines, operations can continue. This option is the least expensive in terms of money spent annually, but would usually take weeks to resume full operations.
Understanding and Developing a Comprehensive Network Security Policy
It is important to know that the security policy developed in your organization drives all the steps taken to secure network resources. The development of a comprehensive security policy prepares you for the rest of this course.
To create an effective security policy, it is necessary to also do a risk analysis to maximize the effectiveness of the policy. Also, it is essential that everyone be aware of the policy; otherwise, it is doomed to fail.
Security Policy Overview
Every organization has something that someone else wants. Someone might want that something for himself, or he might want the satisfaction of denying something to its rightful owner. Your assets are what need the protection of a security policy.
Determine what your assets are by asking (and answering) the following questions:
-
What do you have that others want?
-
What processes, data, or information systems are critical to you, your company, or your organization?
-
What would stop your company or organization from doing business or fulfilling its mission?
The answers identify assets such as critical databases, vital applications, customer and employee information, classified commercial information, shared drives, email servers, and web servers.
A security policy is a set of objectives for the company, rules of behavior for users and administrators, and requirements for system and management that collectively ensure the security of network and computer systems in an organization. A security policy is a “living document,” meaning that the document is never finished and is continuously updated as technology and employee requirements change.
The security policy translates, clarifies, and communicates the management position on security as defined in high-level security principles. The security policies act as a bridge between these management objectives and specific security requirements. The security policy informs users, staff, and managers of their obligatory requirements for protecting technology and information assets. The security policy should specify the mechanisms that you need to meet these requirements. The security policy also provides a baseline from which to acquire, configure, and audit computer systems and networks for compliance with the security policy. Therefore, an attempt to use a set of security tools in the absence of at least an implied security policy is meaningless.
Key Topic | The three reasons for having a security policy are as follows: to inform users, staff, and managers of their obligatory requirements for protecting technology and information assets; to specify the mechanisms through which these requirements can be met; and to provide a baseline from which to acquire, configure, and audit computer systems and networks for compliance. |
One of the most common security policy components is an acceptable use policy (AUP). This component defines what users are allowed and not allowed to do on the various components of the system, including the type of traffic that is allowed on the networks. The AUP should be as explicit as possible to avoid ambiguity or misunderstanding. For example, an AUP might list the prohibited website categories.
Note | Some sites refer to an acceptable use policy as an appropriate use policy. |
The audience for the security policy should be anyone who might have access to your network, including employees, contractors, suppliers, and customers. However, the security policy should treat each of these groups differently.
The audience determines the content of the policy. For example, you probably do not need to include a description of why something is necessary in a policy that is intended for the technical staff. You can assume that the technical staff already knows why a particular requirement is included. Managers are also not likely to be interested in the technical aspects of why a particular requirement is needed. However, they might want the high-level overview or the principles supporting the requirement. When end users know why a particular security control has been included, they are more likely to comply with the policy.
One document will not likely meet the needs of the entire audience of a large organization. The goal is to ensure that the information security policy documents are consistent with the needs of their audience.
Security Policy Components
Figure 1-20 shows the hierarchy of a corporate policy structure that is aimed at effectively meeting the needs of all audiences.
Most corporations should use a suite of policy documents to meet their wide and varied needs:
-
Governing policy: This policy is a high-level treatment of security concepts that are important to the company. Managers and technical custodians are the intended audience. The governing policy controls all security-related interaction among business units and supporting departments in the company. In terms of detail, the governing policy answers the “what” security policy questions.
-
End-user policies: This document covers all security topics important to end users. In terms of detail level, end-user policies answer the “what,” “who,” “when,” and “where” security policy questions at an appropriate level of detail for an end user.
-
Technical policies: Security staff members use technical policies as they carry out their security responsibilities for the system. These policies are more detailed than the governing policy and are system or issue specific (for example, access control or physical security issues). In terms of detail, technical policies answer the “what,” the “who,” the “when,” and the “where” security policy questions. The “why” is left to the owner of the information.
Note | Cisco has created a tool to help you create customized security policies for your organization. Visit http://www.ciscowebtools.com/spb/ to find out more about Cisco Security Policy Builder. Also, consider SANS security policies repository at http://www.sans.org/resources/policies. |
Governing Policy
The governing policy outlines the security concepts that are important to the company for managers and technical custodians:
-
The governing policy controls all security-related interactions among business units and supporting departments in the company.
-
The governing policy aligns closely with existing company policies, especially human resource policies, but also any other policy that mentions security-related issues such as email, computer use, or related IT subjects.
-
The governing policy is placed at the same level as all companywide policies.
-
The governing policy supports the technical and end-user policies.
A governing policy includes the following key components:
-
A statement of the issue that the policy addresses
-
A statement about your position as IT manager on the policy
-
How the policy applies in the environment
-
The roles and responsibilities of those affected by the policy
-
What level of compliance to the policy is necessary
-
Which actions, activities, and processes are allowed and which are not
-
What the consequences of noncompliance are
End-User Policies
The end-user policy is a single policy document that covers all the policy topics pertaining to information security that end users should know about, comply with, and implement. This policy may overlap with the technical policies and is at the same level as a technical policy. Grouping all the end-user policies together means that users have to go to only one place and read one document to learn everything that they need to do to ensure compliance with the company security policy.
Technical Policies
Security staff members use the technical policies in the conduct of their daily security responsibilities. These policies are more detailed than the governing policy and are system or issue specific (for example, router security or physical security issues). These policies are essentially security handbooks that describe what the security staff does, but not how the security staff performs its functions.
The following are typical policy categories for technical policies:
-
General policies
-
AUP: Defines the acceptable use of equipment and computing services, and the appropriate security measures that employees should take to protect the corporate resources and proprietary information.
-
Account access request policy: Formalizes the account and access request process within the organization. Users and system administrators who bypass the standard processes for account and access requests can expose the organization to legal action.
-
Acquisition assessment policy: Defines the responsibilities regarding corporate acquisitions and defines the minimum requirements that the information security group must complete for an acquisition assessment.
-
Audit policy: Defines the requirements for conducting audits and risk assessments to ensure the integrity of information and resources, investigate incidents, ensure conformance to security policies, and monitor user and system activity where appropriate.
-
Information sensitivity policy: Defines the requirements for classifying and securing information in a manner appropriate to its sensitivity level.
-
Password policy: Defines the standards for creating, protecting, and changing strong passwords.
-
Risk-assessment policy: Defines the requirements and provides the authority for the information security team to identify, assess, and remediate risks to the information infrastructure that is associated with conducting business.
-
Global web server policy: Defines the standards that are required by all web hosts.
-
-
Email policies
-
Automatically forwarded email policy: Documents the policy restricting automatic email forwarding to an external destination without prior approval from the appropriate manager or director.
-
Email policy: Defines the standards to prevent tarnishing the public image of the organization.
-
Spam policy: The AUP covers spam.
-
-
Remote-access policies
-
Dial-in access policy: Defines the appropriate dial-in access and its use by authorized personnel.
-
Remote-access policy: Defines the standards for connecting to the organization network from any host or network external to the organization.
-
VPN security policy: Defines the requirements for remote-access IP Security (IPsec) or Layer 2 Tunneling Protocol (L2TP) VPN connections to the organization network.
-
-
Telephony policies
-
Analog and ISDN line policy: Defines the standards to use analog and ISDN lines for sending and receiving faxes and for connection to computers.
-
Personal communication device policy: Defines the information security’s requirements for personal communication devices, such as voicemail, IP phones, softphones, and so on.
-
-
Application policies
-
Acceptable encryption policy: Defines the requirements for encryption algorithms that are used within the organization.
-
Application service provider (ASP) policy: Defines the minimum security criteria that an ASP must execute before the organization uses them on a project.
-
Database credentials coding policy: Defines the requirements for securely storing and retrieving database usernames and passwords.
-
Interprocess communications policy: Defines the security requirements that any two or more processes must meet when they communicate with each other using a network socket or operating system socket.
-
Project security policy: Defines requirements for project managers to review all projects for possible security requirements.
-
Source code protection policy: Establishes minimum information security requirements for managing product source code.
-
-
Network policies
-
Extranet policy: Defines the requirement that third-party organizations that need access to the organization networks must sign a third-party connection agreement.
-
Minimum requirements for network access policy: Defines the standards and requirements for any device that requires connectivity to the internal network.
-
Network access standards: Defines the standards for secure physical port access for all wired and wireless network data ports.
-
Router and switch security policy: Defines the minimal security configuration standards for routers and switches inside a company production network or used in a production capacity.
-
Server security policy: Defines the minimal security configuration standards for servers inside a company production network or used in a production capacity.
-
-
Wireless communication policy: Defines standards for wireless systems that are used to connect to the organization networks.
-
Document retention policy: Defines the minimal systematic review, retention, and destruction of documents received or created during the course of business. The categories of retention policy include, among others, the following:
-
Electronic communication retention policy: Defines standards for the retention of email and instant messaging.
-
Financial retention policy: Defines standards for the retention of bank statements, annual reports, pay records, accounts payable and receivable, and so on.
-
Employee records retention policy: Defines standards for the retention of employee personal records.
-
Operation records retention policy: Defines standards for the retention of past inventories information, training manuals, suppliers lists, and so forth.
Standards, Guidelines, and Procedures
Security policies establish a framework within which to work, but they are too general to be of much use to the individuals responsible for implementing them. Because of this, other, more detailed documents exist. Among the more important of these are the standards, guidelines, and procedures documents.
Whereas policy documents are very much high-level overview documents, the standards, guidelines, and procedure documents are documents that the security staff will use regularly to implement the security policies.
Standards
Standards allow an IT staff to be consistent. They specify the use of specific technologies so that staff members can focus their expertise, because no one can know everything. Standards also promote consistency in the network, because it is unreasonable to support multiple versions of hardware and software unless it is necessary. The most successful IT organizations use standards to improve efficiency and to keep things as simple as possible.
Standardization also applies to security. One of the most important security principles is consistency. If you support 100 routers, it is important that you configure all 100 routers as similarly as possible. If you do not do this, it is difficult to maintain security. When you do not strive for the simplest of solutions, you usually fail in being secure.
Guidelines
Guidelines help provide a list of suggestions on how you can do things better. Guidelines are similar to standards, but are more flexible and are not usually mandatory. You will find some of the best guidelines available in repositories known as “best practices.” The following is a list of widely available guidelines:
-
National Institute of Standards and Technology (NIST) Computer Security Resource Center
-
National Security Agency (NSA) Security Configuration Guides
-
The Common Criteria Standard
-
Rainbow Series
Procedures
Procedure documents are longer and more detailed than the standards and guidelines documents. Procedure documents include the details of implementation, usually with step-by-step instructions and graphics. Procedure documents are extremely important for large organizations to have the consistency of deployment that is necessary to have a secure environment. Inconsistency is the enemy of security.
Security Policy Roles and Responsibilities
In any organization, senior management, such as the CEO, is always ultimately responsible for everything. Typically, senior management only oversees the development of a security policy. The creation and maintenance of a security policy is usually delegated to the people in charge of IT or security operations.
Sometimes the senior security or IT management personnel, such as the chief security officer (CSO), the chief information officer (CIO), or the chief information security officer (CISO), will have the expertise to create the policy, sometimes they will delegate it, and sometimes it will be a bit of both strategies. But the senior security person is always intimately involved in the development and maintenance of security policy. Guidelines can provide a framework for policy decision making.
Senior security staff is often consulted for input on a proposed policy project. They might even be responsible for the development and maintenance of portions of the policy. It is more likely that senior staff will be responsible for the development of standards and procedures.
Everyone else who is involved in the security policy has the duty to abide by it. Many of the policy statements will include language that refers to a potential loss of employment for violation of the policy. IT staff and end users alike are responsible to know the policy and follow it.
Risk Analysis and Management
Every process of security should first address the following questions:
-
What threats does the system face?
-
Which threats are most probable, and what would the consequences be if they materialized?
The threat-identification process provides an organization with a list of the threats to which a system is subject in a particular environment.
Note | An interesting method of modeling security threats is the Attack Trees method by Bruce Schneier. You can find more information about this method at http://en.wikipedia.org/wiki/Attack_tree. |
Risk analysis is a process that estimates the probability and severity of the threats facing a system that needs protection. It provides the organization with a prioritized list of risks to mitigate and allows the organization to focus on the most important threats first.
Risk Analysis
Risk analysis is the systematic study of uncertainties and risks. Risk analysts seek to identify the risks that a company faces, understand how and when they arise, and estimate the impact (financial or otherwise) of adverse outcomes. Risk managers start with risk analysis, and then seek to take actions that will mitigate these risks.
Two types of risk analysis are of interest in information security:
-
Quantitative: Quantitative risk analysis uses a mathematical model that puts numbers to the value of assets, the cost of threats being realized, and so on. Quantitative risk analysis provides an actual monetary figure of expected losses, which is typically based on an annual cost. You can then use this number to justify proposed countermeasures. For example, if you can establish that you will lose $1,000,000 by doing nothing, you can justify spending $300,000 to reduce that risk by 50 percent to 75 percent.
-
Qualitative: Qualitative risk analysis uses a scenario model. This approach is best for large cities, states, and countries to use because it is impractical to try to list all the assets, which is the starting point for any quantitative risk analysis. By the time a typical national government lists all of its assets, the list would have hundreds or thousands of changes and would no longer be accurate.
Quantitative Risk-Analysis Formula
Quantitative analysis relies on specific formulas to determine the value of the risk decision variables. These include formulas that calculate the asset value (AV), exposure factor (EF), single loss expectancy (SLE), annualized rate of occurrence (ARO), and annualized loss expectancy (ALE). The ALE formula is as follows:

ALE = SLE * ARO, where SLE = AV * EF
The AV is the value of an asset. This would include the purchase price, the cost of deployment, and the cost of maintenance. In the case of a database or a web server, the AV should also include the cost of development. AV is not an easy number to calculate.
The EF is an estimate of the degree of destruction that will occur. For example, suppose that you consider flood a threat. Could it destroy your data center? Would the destruction be 60 percent, 80 percent, or 100 percent? The risk-assessment team would have to evaluate everything possible and then make a judgment call. For this example, assume that a flood will have a 60 percent destruction factor, because you store a backup copy of all media and data offsite. Your only losses would be the hardware and productivity.
As another example, consider data entry errors, which are much less damaging than a flood. A single data entry error would hardly be more than a fraction of a percent in exposure. The exposure factor of a data entry error might be as small as .001 percent.
Caution | One of the ironies of risk analysis is how much estimating (guessing) is involved. |
The SLE calculation is a number that represents the expected loss from a single occurrence of the threat. The SLE is defined as the AV * EF.
To use our previous examples, you would come up with the following results for the SLE calculations:
-
Flood threat
Exposure factor: 60 percent
AV of the enterprise: US$10,000,000
$10,000,000 * .60 = $6,000,000 SLE
-
Data entry error
Exposure factor: .001 percent
AV of data and databases: $1,000,000
$1,000,000 * .00001 = $10 SLE
The ARO is a value that estimates the frequency of an event and is used to calculate the ALE.
Continuing the preceding example, the type of flood that you expect could reach your data center would be a “flood of the century” type of event. Therefore, you give it a 1/100 chance of occurring this year, making the ARO for the flood 1/100.
Furthermore, you expect the data entry error to occur 500 times a day. Because the organization is open for business 250 days per year, you estimate the ARO for the data entry error to be 500 * 250, or 125,000 times.
Risk analysts calculate the ALE in annualized terms to address the cost to the organization if the organization does nothing to counter existing threats. The ALE is derived from multiplying the SLE by the ARO. The following ALE calculations continue with the two previous examples.
-
Flood threat
SLE: $6,000,000
ARO: .01
$6,000,000 * .01 = $60,000 ALE
-
Data input error
SLE: $10
ARO: 125,000
$10 * 125,000 = $1,250,000 ALE
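The arithmetic in these examples is straightforward to encode. The following Python sketch reproduces the SLE and ALE calculations for the flood and data entry error examples; the figures are the ones used in the text, and the helper function names are illustrative.

```python
# Quantitative risk formulas from the text:
#   SLE = AV * EF          (single loss expectancy)
#   ALE = SLE * ARO        (annualized loss expectancy)

def sle(asset_value, exposure_factor):
    return asset_value * exposure_factor

def ale(single_loss_expectancy, annual_rate_of_occurrence):
    return single_loss_expectancy * annual_rate_of_occurrence

# Flood: AV = $10,000,000, EF = 60%, ARO = 1/100 ("flood of the century")
flood_sle = sle(10_000_000, 0.60)        # $6,000,000
flood_ale = ale(flood_sle, 0.01)         # $60,000

# Data entry error: AV = $1,000,000, EF = .001% , ARO = 500/day * 250 days
entry_sle = sle(1_000_000, 0.00001)      # $10
entry_ale = ale(entry_sle, 500 * 250)    # $1,250,000

print(f"Flood:            SLE=${flood_sle:,.0f}  ALE=${flood_ale:,.0f}")
print(f"Data entry error: SLE=${entry_sle:,.0f}  ALE=${entry_ale:,.0f}")
```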
Deciding to spend $50,000 to enhance the security of the database applications and reduce data entry errors by 90 percent is now easy. It is equally easy to reject a $3,000,000 proposal to enhance the flood defenses.
When you perform a quantitative risk analysis, you identify clear costs, as long as the existing conditions remain the same. You obtain a list of expected issues, the relative cost of each event, and the total cost if all expected threats are realized. These numbers are put into annual terms to coincide with the annual budgets of most organizations.
You then use these numbers in decision making. If an organization has a list of 10 expected threats, it can prioritize them and address the most serious threats first. This prioritization enables management to focus resources where they will do the most good.
For example, suppose an organization has the following list of threats and costs as a result of a quantitative risk analysis:
-
Insider network abuse: $1,000,000 in lost productivity
-
Data input error: $500,000
-
Worm outbreak: $100,000
-
Viruses: $10,000
-
Laptop theft: $10,000
Decision makers could easily decide that it is of greatest benefit to address insider network abuse and leave the antivirus solution alone. They could also find it easy to support a $200,000 URL filtering solution to address insider network abuse and reject a $40,000 solution designed to enhance laptop safety. Without these numbers from a risk analysis, the decisions made would likely differ.
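Continuing the example, a short Python sketch shows how easily such annualized figures support prioritization and a rough cost-benefit check. The threat figures come from the list above; the assumed 50 percent reduction for the URL filtering solution and the helper function name are illustrative assumptions.

```python
# Rank threats by annualized loss and sanity-check a proposed countermeasure.
# Figures come from the example list above; the helper is illustrative.

threats = {
    "Insider network abuse": 1_000_000,
    "Data input error":        500_000,
    "Worm outbreak":           100_000,
    "Viruses":                  10_000,
    "Laptop theft":             10_000,
}

for name, annual_loss in sorted(threats.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:<25} ${annual_loss:>12,}")

def worth_buying(ale, reduction, annual_cost):
    """Crude test: does the expected loss avoided exceed the control's annual cost?"""
    return ale * reduction > annual_cost

# $200,000 URL filter reducing insider abuse by an assumed 50 percent: worthwhile.
print(worth_buying(1_000_000, 0.50, 200_000))   # True
# $40,000 laptop-safety solution against a $10,000 annual loss: rejected.
print(worth_buying(10_000, 1.00, 40_000))       # False
```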
Tip | In cases that involve national security, it is not advisable to base decisions on cost. |
Table 1-2 provides an example of threat identification for connecting an e-banking system to the Internet. It enumerates the following threats to the system, and the probability and severity of the impact on the bank should the threat materialize. The list of potential threats is by no means comprehensive, but includes the most obvious ones.
Threat | Description | Severity and Likelihood |
---|---|---|
Internal system compromise | The attacker could use the exposed e-banking servers to break into an internal bank system, causing substantial damage. | Extremely severe and likely, if untrusted software is used to pass data to the inside network. |
Stolen customer data from an external server | The attacker could, by breaking into the exposed application server, steal all, or a substantial amount of, personal and financial data of the bank customers from the customer database. | Severe and likely, if the external server is vulnerable to intrusions, which could compromise the operating system or the application. |
Phony transactions from an external server | The attacker could, by breaking into the external server, alter the code of the e-banking application, and run arbitrary transactions impersonating any legitimate user. | Severe and likely, if the external server is vulnerable to intrusions, which could compromise the operating system or the application. |
Phony transactions if the customer PIN or smart card is stolen | The attacker could steal the identity of a customer and run malicious transactions from the compromised account. | Limited severity, because individual accounts are compromised; likely only if the stolen credentials are not detected quickly. |
Insider attack on the system | A bank employee might find a flaw in the system to mount an attack. | Extremely severe and likely, because the bank has had its share of insider attacks on company data. |
DoS attacks | DoS attacks could interrupt the service, because the attacker might compromise the availability of the application, cutting off legitimate users and potentially causing a public relations nightmare. | Severe and likely, because tools to perform such attacks are easy to find, and defense against such attacks is limited. |
All the threats in Table 1-2 can lead to loss of reputation and customer trust.
After you identify threats and assess the risks, you must deploy a protection strategy to protect against the risks. There are two very different methods to handle risks:
-
Risk management: This method uses the deployment of protection mechanisms to reduce risks to acceptable levels. Risk management is perhaps the most basic and the most difficult aspect of building secure systems, because it requires good knowledge of risks, risk environments, and mitigation methods.
-
Risk avoidance: This method eliminates risk by avoiding the threats altogether, which is usually not an option in the commercial world, where controlled (managed) risk enables profits.
Risk Management
Continuing the example of a bank that wants to provide e-banking services and has identified threats and performed a risk analysis, risk management can be illustrated by high-level strategy decisions, which describe how to mitigate each risk to an acceptable level. Table 1-3 provides a list of the threats, the associated risk analysis, and the risk mitigation.
Threat | Risk Analysis | Risk Mitigation |
---|---|---|
Internal system compromise | Extremely severe and likely, if untrusted software is used to pass data to the inside network | Provide the least amount of privilege access possible to the inside, and utilize a secure multitiered application which minimizes inside access |
Stolen customer data from an external server | Severe and likely, if the external server is vulnerable to intrusions, which could compromise the operating system or the application | Keep all the customer data on inside servers, and only transfer data to the outside on demand |
Phony transactions from an external server | Severe and likely, if the external server is vulnerable to intrusions, which could compromise the operating system or the application | Design the external server application so that it does not allow arbitrary transactions to be called for any customer account |
Phony transactions if the customer PIN or smart card is stolen | Limited severity, because only individual accounts are compromised; likely only if the stolen credentials are not detected quickly | Use a quick refresh of revocation lists and have a contract with the user which forces the user to assume responsibility for stolen token cards |
Insider attack on the system | Extremely severe and likely, because the bank has had its share of insider attacks on company data | Strictly limit inside access to the application and provide strict auditing of all accesses from the inside |
DoS attacks | Severe and likely, because tools to perform such attacks are easy to find, and defense against such attacks is limited | Provide high-performance connectivity to the Internet, deploy quality of service (QoS), implement high availability by using multiple Internet connections, and protect the server by hardening it and implementing firewall DoS defense methods |
Using the risk-avoidance approach, the company would decide not to offer the e-banking service at all because they deem it too risky. Such an attitude might be valid for most military organizations, but is usually not an option in the commercial world. Organizations that can manage the risks, and not avoid them, are traditionally the most profitable.
A different way of thinking about security might be this: “If we can figure out a way to provide a service securely, we will earn a lot of money.” This attitude moves away from the paranoid thinking of many risk analysts, who might try to find a reason not to deploy a certain service. Sometimes it can help to take a fresh look at what risk management is all about.
Principles of Secure Network Design
Business goals and risk analysis drive the need for network security. Regardless of the security implications, business needs must come first. If your business cannot function because of security concerns, you have a problem. The security system design must accommodate the goals of the business, not hinder them. Risk analysis includes two key elements:
-
What does the cost-benefit analysis of your security system tell you?
-
How will the latest attack techniques play out in your network environment?
Figure 1-22 illustrates the key factors you should consider when designing a secure network:
-
Business needs: What does your organization want to do with the network?
-
Risk analysis: What is the risk and cost balance?
-
Security policy: What are the policies, standards, and guidelines that you need to address business needs and risks?
-
Industry best practices: What are the reliable, well-understood, and recommended security best practices?
-
Security operations: These operations include incident response, monitoring, maintenance, and auditing the system for compliance.
Realistic Assumptions
Historically, a huge percentage of security mechanisms have been broken, misconfigured, or bypassed because the designer or implementer made unfounded assumptions about how and where the system would be used; for example, wrong assumptions were made about the users of the system, the attackers and threats, and the technology used to build the system.
A wrong assumption ends up being used as a bad axiom in all further design work; it might influence one design decision, and then propagate to other decisions that might depend on it. Wrong decisions are especially dangerous in early stages of secure system design, when threats are modeled and when risks are assessed. It is often easy to correct or enhance an implementation aspect of a system; however, design errors are either extremely hard or impossible to correct without substantial investments in time and technology.
The following is a summary of recommendations you should follow to avoid making wrong assumptions:
-
First, expect that any aspect of a system might fail, and evaluate how this failure affects the security of a system. It is possible for every single element of a system to fail; only the probability of failure might differ for different elements. When designing a system, perform “what-if” analysis for failures of every element, assess the probability of failure, and analyze all possible consequences of an element failure, taking into account consequent cascading failures of other elements.
-
As a part of the “anything can fail” mindset, identify any elements that “fail open.” Fail open occurs when a failure results in a complete bypass of the security function of the element. Ideally, any security element should be fail-safe; if the element fails, it should default to a secure state, such as blocking all traffic across it.
-
Try to identify all attack possibilities. The Attack Tree method is one successful technique for top-down analysis of possible system failures; it involves evaluating the simplicity and probability of every attack (a minimal sketch follows this list).
-
Realistically evaluate the probability of exploitation. An often-encountered philosophy is “if there is no exploit code available for a particular vulnerability, no one will be able to exploit it.” This philosophy is true only for script kiddie attacks, and a sounder stance must be taken, such as “if a vulnerability exists, any skilled and focused attacker will easily write a tool to exploit it.” The focus should be on the resources needed to create an attack tool, not on the obscurity of the vulnerability.
-
Always account for technological advances, even if an attack is currently unlikely because the attacker would need many resources. As computing power increases, the probability of attacks might increase at an alarming rate. Many systems have been compromised because of unrealistic assumptions about how much computing power was necessary to mount successful attacks (the recommended lengths of cryptographic keys are a prime example).
-
Assume that people will make mistakes. For example, end users might use a system improperly, compromising its security unintentionally. Likewise, attackers will not limit themselves to common and well-established techniques to compromise a system; they might hammer the system with seemingly random attacks, looking for information about how the system behaves under unexpected conditions.
-
Always check your assumptions with other people, who might have a fresh perspective on potential threats and their probability. The more people who question your assumptions, the more likely you can identify a bad assumption.
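To make the attack tree idea more concrete, the following is a minimal Python sketch under stated assumptions: the goals, costs, and probabilities are hypothetical and are not taken from this chapter. An OR node is achieved through its easiest child, an AND node requires all of its children, and each leaf carries a rough estimate of attacker cost and success probability.

```python
from dataclasses import dataclass, field
from math import prod
from typing import List

@dataclass
class AttackNode:
    """A node in an attack tree: an 'OR'/'AND' goal or a leaf attack step."""
    name: str
    node_type: str = "LEAF"          # "LEAF", "OR", or "AND"
    cost: float = 0.0                # estimated attacker effort for a leaf
    probability: float = 0.0         # estimated success probability for a leaf
    children: List["AttackNode"] = field(default_factory=list)

def cheapest_cost(node: AttackNode) -> float:
    """Lowest attacker cost that achieves this goal (OR = min, AND = sum)."""
    if node.node_type == "LEAF":
        return node.cost
    costs = [cheapest_cost(c) for c in node.children]
    return min(costs) if node.node_type == "OR" else sum(costs)

def success_probability(node: AttackNode) -> float:
    """Rough success probability (OR = best child, AND = all children succeed)."""
    if node.node_type == "LEAF":
        return node.probability
    probs = [success_probability(c) for c in node.children]
    return max(probs) if node.node_type == "OR" else prod(probs)

# Hypothetical tree for the goal "steal customer data from the external server".
root = AttackNode("Steal customer data", "OR", children=[
    AttackNode("Exploit a web application flaw", cost=5_000, probability=0.4),
    AttackNode("Bribe an insider", "AND", children=[
        AttackNode("Identify a privileged employee", cost=1_000, probability=0.7),
        AttackNode("Buy the credentials", cost=20_000, probability=0.3),
    ]),
])

print(f"Cheapest attack path costs roughly ${cheapest_cost(root):,.0f}")
print(f"Estimated probability of success: {success_probability(root):.0%}")
```

In a real analysis the tree would be far larger and the leaf estimates would come from the risk assessment, but even this toy evaluation shows how the method directs attention to the cheapest, most probable path rather than the most exotic one.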
Incorrect Assumptions: Cautionary Tales
Three examples of wrong assumptions come from areas not directly related to network security.
The encryption of DVD movies, which uses a weak algorithm called Content Scrambling System (CSS), is an example of bad assumptions made about the scope of system use. The original assumption was that DVDs would be played only on hardware players, where the decryption keys could be stored in a tamper-resistant chip inside the player, making it extremely hard for even skilled attackers to compromise the DVDs. However, when software DVD players appeared, the DVDs were quickly reverse engineered, because making software tamper resistant is next to impossible against a determined attacker. The keys were recovered from one of the well-known players, and an algorithm was published on the Internet, together with the keys.
The response strategy of the DVD industry was to try to ban the publication of the CSS algorithm and keys, but a court decision that the CSS algorithm source code was essentially free speech halted much of that effort.
Another example of a wrong or poor assumption was the lack of encryption of U.S. cellular traffic. When cellular phones were first introduced, the assumption was that scanners, which could intercept cellular traffic, were too expensive to mount any large-scale attacks against call confidentiality in cellular networks. In a couple of years, the price of these scanners dropped to the point that the scanners were available to almost anyone. Thus, bad assumptions compromised the protection policy of the cellular network.
The next-generation U.S. cellular service uses digital transmission, but the same assumption was made: that digital scanners are too expensive. As technology has advanced, the same story has unfolded for digital transmissions.
Concept of Least Privilege
The least privilege concept is a philosophy in which each subject (user, program, host, and so on) should have only the minimum privileges necessary to perform a given task.
The rationale behind the concept is that having too many privileges for a task can result in doing more damage than would be otherwise possible, whether the damage is intentional or unintentional. Using the least privileges always narrows down the window of vulnerability, because it reduces the number of possible side effects of a task. Least privilege also simplifies a system when you analyze it for possible flaws, because if you allow only a very limited number of prescribed actions and system states, the potential for unwanted interactions within a system is limited.
In practice, the least privilege concept is often not followed, because a person or process must perform multiple tasks that require different privileges. Because the configuration of privileges in such an environment is often cumbersome, a person or process is given high (or even worse, the highest possible) privileges, which automatically enables them to perform a variety of tasks, including the tasks originally required. This configuration of privileges opens up a system to additional threats and interactions, which might not be expected.
Figure 1-23 shows an example of proper least privilege enforcement. A web server is located inside a firewall system and must be accessed by inside and outside users. No other access to the system is necessary, and the system does not need to open any connections itself (it is a simple static web server).
In Figure 1-23, the firewall is configured to permit only HTTP connectivity to the server from the inside interface to the outside interface. The firewall denies all other connections to the server because they are not necessary. Also, the firewall prevents the web server from sourcing any connections because they are not required. An attacker who compromises the web server would be isolated on it because no connectivity is allowed from the web server.
In such a situation, many organizations would permit all access to the web server from the inside. This level of access opens up the server for insider attacks, or enables an attacker who manages to enter the protected network to also attack any service running on the web server.
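The following Python sketch, offered only as an illustration of the rule set described for Figure 1-22/1-23, models the firewall policy as data with a default-deny stance; the zone and host names are hypothetical placeholders. Only HTTP to the web server is permitted, and anything else, including connections sourced by the server itself, falls through to the implicit deny.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    src: str       # source host or zone
    dst: str       # destination host or zone
    dst_port: int  # destination TCP port

WEB_SERVER = "dmz-web-server"   # hypothetical name for the protected server

# Least privilege: permit only what the business requires; everything else is denied.
PERMITTED = {
    Flow("inside", WEB_SERVER, 80),
    Flow("outside", WEB_SERVER, 80),
}

def is_permitted(flow: Flow) -> bool:
    """Default-deny check: unknown flows (including flows sourced by the
    web server itself) fall through to the implicit deny."""
    return flow in PERMITTED

print(is_permitted(Flow("outside", WEB_SERVER, 80)))   # True  - permitted HTTP
print(is_permitted(Flow("outside", WEB_SERVER, 22)))   # False - SSH denied
print(is_permitted(Flow(WEB_SERVER, "inside", 3306)))  # False - server cannot source connections
```

Expressing the policy as an explicit allow-list makes the least privilege intent easy to review: anything not listed is denied by default.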
You can see another example of least privilege enforcement by looking at the web server host itself. The host runs an exposed web server program, which is expected to be attacked by external crackers. Therefore, the web server program must be protected, and at the same time, other processes and data on the host must be protected from the attacker, who can potentially compromise the web server program. To protect the rest of the operating system, you can use several well-known techniques, all of which implement the least privilege concept:
-
Run the web server program under a special username, which has minimal rights in the host operating system (it can listen on port 80 and it can access its data on disk).
-
Set the file permissions in such a way that the web server program can access only its executable code (which is not owned by it, so it cannot be changed by it) and the documents it is serving (HTML, multimedia files).
-
Configure the operating system to confine the web server program to one part of the file system, disallowing access to any other directories (for example, by using the UNIX chroot system call). A minimal sketch combining these techniques follows this list.
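The following is a minimal sketch of these three techniques on a UNIX-like host, assuming a hypothetical unprivileged account named wwwrun and a hypothetical document root of /srv/www; it is not a hardened production configuration. The program is started as root so that it can bind port 80 and call chroot, and it then gives up those privileges before serving any requests.

```python
# Sketch only: assumes a UNIX-like host, a hypothetical unprivileged account
# "wwwrun", and a hypothetical document root /srv/www. Must be started as root.
import http.server
import os
import pwd
import socketserver

DOC_ROOT = "/srv/www"   # only the served documents live here
RUN_AS   = "wwwrun"     # unprivileged account with minimal rights
PORT     = 80

def main() -> None:
    user = pwd.getpwnam(RUN_AS)

    # Bind the listening socket while still root (port 80 is privileged).
    httpd = socketserver.TCPServer(("", PORT),
                                   http.server.SimpleHTTPRequestHandler)

    # Confine the process to the document tree (UNIX chroot), then give up
    # root for good: group first, then user.
    os.chroot(DOC_ROOT)
    os.chdir("/")
    os.setgid(user.pw_gid)
    os.setuid(user.pw_uid)

    httpd.serve_forever()

if __name__ == "__main__":
    main()
```

If the web server program is later compromised, the attacker inherits only the rights of the wwwrun account inside the chrooted document tree, which is exactly the window of vulnerability that least privilege is meant to narrow.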
Concept of Simplicity
Complexity is one of the biggest enemies of security. Complexity makes it hard for the designer or implementer to predict how parts of the system will interact, and makes the system extremely difficult to analyze from the security perspective. Simplicity of design and implementation should therefore be one of the main goals of the designer.
When you must implement a security mechanism, it is always recommended to use the simplest possible solution, which still provides an adequate level of security. When you need to put a very complex mechanism in place, consider replacing it with multiple simpler and easier-to-verify mechanisms, as long as the resulting protection strength is comparable to the original idea.
Also, simplicity is beneficial for the end users of the system. If the end user does not understand the system adequately, the system can be compromised through unintentional misuse. It is important to note that end users do not need to be aware of the internal workings of the system, but the usage instructions should be simple and concise, as far as security is concerned.
You can find an example of design and implementation simplicity in the formulation of a user security policy.
Two ways to formulate the same security policy that relates to the end-user responsibilities are as follows:
-
Complex rule: All end users will participate in risk mitigation by enforcing discretionary access control on file system objects in such a way as to prevent external subjects from violating the integrity of the properties or contents of an object.
-
Simple rule: When changing file permissions, ensure that only Cisco employees will have “write” access to that file.
An overly technical, confusing formulation alienates users, whereas a simple and concise formulation enables the user to easily comprehend the required procedures and understand why such protection must be put in place.
In short, simplicity in design often makes the implementation of security simpler.
You can also achieve simplicity by intentionally removing functionality from existing systems. This concept underlies the well-known practice of disabling all unnecessary services that a system offers. Disabling these services removes many potential attack possibilities. You could view this as the enforcement of least privilege (running only the minimal necessary set of services), and it also makes the system easier to analyze.
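One way to check this practice on a host is to compare its listening ports against the short list of services the host is actually supposed to offer. The following Python sketch assumes the third-party psutil package and a hypothetical allow-list of ports; it simply flags any listener that is not on the list.

```python
# Sketch of a listening-port audit, assuming the third-party "psutil" package
# is installed and that the allow-list below reflects the services this
# particular host is supposed to offer (hypothetical values).
import psutil

ALLOWED_PORTS = {22, 80}   # e.g., SSH for administration, HTTP for the web server

def unexpected_listeners() -> set:
    """Return listening TCP ports that are not on the allow-list."""
    listening = {
        conn.laddr.port
        for conn in psutil.net_connections(kind="tcp")
        if conn.status == psutil.CONN_LISTEN
    }
    return listening - ALLOWED_PORTS

if __name__ == "__main__":
    extra = unexpected_listeners()
    if extra:
        print(f"Unnecessary services listening on ports: {sorted(extra)}")
    else:
        print("Only the expected services are listening.")
```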
Another way to simplify security is to help simplify end-user functions. For example, if email needs to be encrypted when it goes to external business partners, a solution that would be the simplest for end users is to take the end users out of the equation and use technology to perform automated encryption of the email. A mail gateway can be configured to automatically encrypt all outgoing mail.
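As a rough sketch of that gateway logic, and not a production mail design, the following Python fragment assumes the third-party python-gnupg package, a hypothetical list of external partner domains, and partner public keys that have already been imported into the gateway keyring. Mail addressed to anyone on the partner list is encrypted automatically before it is relayed; the end user does nothing.

```python
# Gateway-side automatic encryption, sketched under stated assumptions:
# the third-party "python-gnupg" package, a hypothetical partner-domain list,
# and partner public keys already imported into the gateway keyring.
import gnupg

EXTERNAL_PARTNERS = {"partnerbank.example", "supplier.example"}  # hypothetical
gpg = gnupg.GPG(gnupghome="/var/lib/mailgw/gnupg")               # hypothetical path

def prepare_outgoing(body: str, recipient: str) -> str:
    """Encrypt the message body if the recipient is an external partner;
    otherwise pass it through unchanged."""
    domain = recipient.rsplit("@", 1)[-1].lower()
    if domain not in EXTERNAL_PARTNERS:
        return body
    encrypted = gpg.encrypt(body, [recipient])
    if not encrypted.ok:
        # Fail safe: refuse to relay in the clear if encryption fails.
        raise RuntimeError(f"Encryption failed: {encrypted.status}")
    return str(encrypted)
```

Note the fail-safe behavior: if encryption fails, the gateway refuses to relay the message in the clear rather than silently weakening the policy.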
Security Awareness
Technical, administrative, and physical controls can all be defeated without the participation of the end-user community. To get accountants and secretaries to think about information security, you must regularly remind staff members about security. The technical staff also needs regular reminders because their jobs tend to emphasize performance rather than secure performance. Therefore, leadership must develop a nonintrusive program that keeps everyone aware of security and how to work together to maintain the security of their data. The three key components used to implement this type of program are awareness, training, and education.
An effective computer security-awareness and -training program requires proper planning, implementation, maintenance, and periodic evaluation. In general, a computer security-awareness and -training program should encompass the following seven steps:
Step 1. Identify program scope, goals, and objectives. The scope of the program should provide training to all types of people who interact with IT systems. Because users need training that relates directly to their use of particular systems, you need to supplement a large organizationwide program with more system-specific programs.
Step 2. Identify training staff. It is important that trainers have sufficient knowledge of computer security issues, principles, and techniques. It is also vital that they know how to communicate information and ideas effectively.
Step 3. Identify target audiences. Not everyone needs the same degree or type of computer security information to do his or her job. A computer security-awareness and -training program that distinguishes between groups of people, presents only the information needed by the particular audience, and omits irrelevant information will have the best results.
Step 4. Motivate management and employees. To successfully implement an awareness and training program, it is important to gain the support of management and employees. Consider using motivational techniques to show management and employees how their participation in a computer security-awareness and -training program will benefit the organization.
Step 5. Administer the program. Important considerations for administering the program include visibility and the selection of appropriate training methods, topics, materials, and presentation techniques.
Step 6. Maintain the program. You should make an effort to keep abreast of changes in computer technology and security requirements. A training program that meets the needs of an organization today may become ineffective when the organization starts to use a new application or changes its environment, such as by connecting to the Internet.
Step 7. Evaluate the program. An evaluation should attempt to ascertain how much information is retained, to what extent computer security procedures are being followed, and the general attitudes toward computer security.
A successful IT security program consists of the following:
-
Developing IT security policy that reflects business needs tempered by known risks
-
Informing users of their IT security responsibilities, as documented in agency security policy and procedures
-
Establishing processes for monitoring and reviewing the program
You should focus security awareness and training on the entire user population of the organization. Management should set the example for proper IT security behavior within the organization. An awareness program should begin with an effort that can be deployed and implemented in various ways and that is aimed at all levels of the organization, including senior and executive managers. The effectiveness of this effort usually determines the effectiveness of the awareness and training program and, ultimately, how successful the overall IT security program will be.
An awareness and training program is crucial because it is the vehicle for disseminating information that users, including managers, need to do their jobs. An IT security program is the vehicle that you use to communicate security requirements across the enterprise.
An effective IT security-awareness and -training program explains proper rules of behavior for the use of the IT systems and information of a company. The program communicates IT security policies and procedures that must be followed. This program must precede and lay the foundation for any sanctions that your company will impose for noncompliance. You should first inform the users of the expectations. You must derive accountability from a fully informed, well-trained, and aware workforce.
Security awareness efforts are designed to change behavior or reinforce good security practices. Awareness is defined in NIST Special Publication 800-16 as follows:
Awareness is not training. The purpose of awareness presentations is simply to focus attention on security. Awareness presentations are intended to allow individuals to recognize IT security concerns and respond accordingly. In awareness activities, the learner is the recipient of information, whereas the learner in a training environment has a more active role. Awareness relies on reaching broad audiences with attractive packaging techniques. Training is more formal, having a goal of building knowledge and skills to facilitate the job performance.
An example of a topic for an awareness session (or awareness material to be distributed) is virus protection. You can briefly address the subject by describing what a virus is, what can happen if a virus infects a user system, what the user should do to protect the system, and what users should do if they discover a virus.
Training strives to produce relevant and needed security skills and competencies by practitioners of functional specialties other than IT security (for example, management, systems design and development, acquisition, and auditing). The most significant difference between training and awareness is that training tries to teach skills that allow a person to perform a specific function, whereas awareness focuses the attention of an individual on an issue or set of issues. The skills that users acquire during training build on the awareness foundation (in particular, on the security basics and literacy material). A training curriculum does not necessarily lead to a formal degree from an institution of higher learning; however, a training course might contain much of the same material found in a course that a college or university includes in a certificate or degree program.
An example of training is an IT security course for system administrators, which should address in detail the management controls, operational controls, and technical controls that should be implemented. Management controls include policy, IT security program management, risk management, and life cycle security. Operational controls include personnel and user issues, contingency planning, incident handling, awareness and training, computer support and operations, and physical and environmental security issues. Technical controls include identification and authentication, logical access controls, audit trails, and cryptography.
Education integrates all the security skills and competencies of the various functional specialties into a common body of knowledge; adds a multidisciplinary study of concepts, issues, and principles (technological and social); and strives to produce IT security specialists and professionals capable of vision and proactive response.
An example of education is a degree program at a college or university. Some people take a course or several courses to develop or enhance their skills in a particular discipline; this is training as opposed to education. Many colleges and universities offer certificate programs, wherein a student may take two, six, or eight classes in a related discipline, for example, and is then awarded a certificate upon completion. Often, these certificate programs are conducted as a joint effort between schools and software or hardware vendors. These programs are more characteristic of training than education. Those responsible for security training must assess both types of programs and decide which one better addresses their identified needs.
A successfully implemented training and awareness program, in conjunction with good security operations practice, should result in many benefits to an organization. The technical staff should be better at implementing the technical controls, and end users, executives, and everyone else should do a better job of implementing the remaining administrative and physical controls. The resulting, more thorough implementation of a well-designed set of controls will significantly increase security.