The Smith Company* recently started using a new enterprise resource planning (ERP) system to better integrate its human resources (HR) and finance information. Unbeknownst to the IT and HR departments, a bug in one of the modules allows confidential employee and client information to be accessed by all staff. Two weeks after the system is implemented, the IT department discovers the issue and quickly resolves the problem. However, important company information has been exposed for nearly 14 days. How could the organization have prevented this from happening in the first place? (*Note: This is a fictional company.)
As seen in the example above, a security breach that results from a system's programming error can place important business information into the hands of an unauthorized user, severely damaging the organization's credibility and placing the company in breach of a privacy regulation. To reduce these operational risks (i.e., the risks of loss resulting from inadequate or failed internal processes, people or systems, and external events), internal auditors can help organizations implement a risk analysis process that classifies software vulnerabilities and measures the benefits of removing software defects. Implementing such a process will enable organizations to reduce defects in enterprise software systems, identify the most cost-effective countermeasures to mitigate risks, and help auditors to perform better assessments of information security initiatives.
Table 1: Aggregated Vulnerability Distribution by Type

| Vulnerability Type | Count | Percentage |
| --- | --- | --- |
| Accidental disclosure by email | 5 | 3.0% |
| System user and operator weaknesses | 12 | 7.8% |
| Unprotected computers and backup media | 67 | 40.1% |
| Exploitation of software defects | 82 | 49.1% |
Software defects are a major factor leading to fraud, a fact that is exemplified in a 2005 study that classified 167 data breaches based on their attack method, source, and exploited vulnerability. The study, based on information provided by the nonprofit consumer information group Privacy Rights Clearinghouse, found that nearly half of the data breaches took advantage of known software defects (see Table 1). In addition, Carnegie Mellon University's Software Engineering Institute (SEI) reports that 90 percent of all software vulnerabilities are due to well-known defect types (e.g., using a hard-coded server password), and all of the SANS Top 20 Internet security vulnerabilities are the result of poor coding and testing.
In many organizations, the only defense strategy against a security breach is to deploy multiple tools at the network perimeter, such as firewalls, intrusion prevention software, and malicious content filtering applications. Although these tools protect organizations from external threats, according to the Computer Security Institute and the U.S. Federal Bureau of Investigation's 2006 Computer Crime and Security Survey (PDF, 1.55 MB), the majority of attacks on customer data and intellectual property exploit internal vulnerabilities. Therefore, perimeter security products are unable to defend organizations from operational risks that are the result of faulty system processes. For instance, a firewall cannot protect the organization from ERP downtime due to a software defect, and vulnerability checklists, which are commonly used to identify threats to IT assets, are not a replacement for the in-depth understanding needed to properly identify specific source code weaknesses. If many data security breaches are the result of software defects, how can organizations identify them in the first place?
Figure 1: Risk Analysis Diagram.
While software security assessment activities focus primarily on new code development, reduction of vulnerabilities in enterprise software (i.e., software that manages sales, inventory, purchasing, and manufacturing transactions) can be a highly effective approach for assessing and reducing operational risks due to data breaches and critical system downtime. However, not many organizations have a systematic way to reduce enterprise software defects; if this were an easy process, every organization would be doing it. So, what makes these defects difficult to identify?
- The typical top-down creation process used by developers is inappropriate for conducting risk analyses of enterprise software systems. This is because unlike a software implementation process that can be abstracted (i.e., designers can take their own ideas and user feedback to synthesize and construct a design starting from a blank sheet of paper), enterprise software applications become integrated with operational business processes. Therefore, after the software is implemented, changes can be made and new ways of using the application can be found that the designers did not foresee.
- The cost of finding and fixing a bug in a deployed system is often 100 times more expensive than finding and fixing the problem during the application's design phase (Boehm and Basili, IEEE Computer, 2001). For example, a software design specification might be 50 pages, but the finished program might have 50,000 lines of code and depend on hundreds of libraries and modules.
- In many organizations, application developers — who work in a fairly structured environment — and IT security teams — which often report to different managers and must resolve multiple security problems at the same time — usually don't have the time to talk to each other. Oftentimes, the larger the organization, the more information gets lost during the project development process.
Despite these challenges, all is not lost. To surpass these obstacles, organizations can implement a risk analysis process that collects data from all business units and departments affected by the software. More specifically, risk analyses of enterprise software need to identify, classify, and evaluate application vulnerabilities and recommend cost-effective countermeasures.
In addition, auditors should recommend that this risk analysis be conducted by a risk assessment team of four to eight members with relevant knowledge of the company and software, chosen at a planning meeting between the company's lead analyst and project sponsor (i.e., a senior manager, such as a finance controller or vice president of operations). Following is a more detailed explanation of the seven steps that form part of the risk analysis process for enterprise security software.
1. Set the Scope
The first thing the team needs to do is to determine the scope of the risk analysis in terms of the particular business units that use the software and the IT assets that depend on its functionality. The process also needs to have the support of senior management so that policies and procedures resulting from the risk analysis and risk mitigation plan are implemented throughout the entire organization and are adhered to by all company personnel.
2. Identify Business Assets
Figure 2: Wall Chart That Identifies Business Assets.
During this stage, the team identifies operational business functions and their key assets. This part of the process can be done using wall charts as shown in Figure 2. The graphic format of wall charts helps the team to visualize the scope of IT assets and estimate the potential impact of threats on all assets. In Figure 2, business functions, which are represented by white boxes, are placed on a diagonal from top left to bottom right. Assets, which are colored in green, flow clockwise around the business functions.
3. Identify Software Components
After identifying business functions, the team should make a list of all the application functions that serve each business unit and deconstruct each function into specific software components. To help build a consistent and high-level view of the system, this part of the process can be done using the wall charts shown in Figure 3. In this illustration, application functions, which are depicted in the white boxes, are placed on a diagonal from top left to bottom right and the software components, which are colored in tan, flow clockwise around the diagonal of application functions.
4. Classify Vulnerabilities
Figure 3: Wall Chart Identifying Software Components.
To classify software vulnerabilities, common vulnerability scoring system (CVSS) numbers — which are a standard way of conveying vulnerability severity — can be calculated for each component identified in the previous steps. In addition to the CVSS score, the team can collect information using the Comprehensive, Lightweight Application Security Process (CLASP), which provides a well-organized and structured approach for identifying security concerns in the early stages of the software development life cycle. The knowledge acquired by classifying software vulnerabilities gives the team a baseline of existing software threats that evolves over time as the team continues to identify new vulnerabilities. Various source code scanners also can be used when classifying vulnerabilities, such as FindBugs, which finds defects in Java code.
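To make the scoring concrete, the sketch below implements the published CVSS version 2 base-score equations in Python. The metric weights come from the CVSS v2 specification; this is a minimal illustration of the arithmetic, not a substitute for a full scoring tool.

```python
# Minimal CVSS v2 base-score calculator (weights from the CVSS v2 spec).
AV = {"N": 1.0, "A": 0.646, "L": 0.395}    # Access Vector
AC = {"L": 0.71, "M": 0.61, "H": 0.35}     # Access Complexity
AU = {"N": 0.704, "S": 0.56, "M": 0.45}    # Authentication
CIA = {"N": 0.0, "P": 0.275, "C": 0.660}   # Conf./Integ./Avail. impact

def cvss2_base(av, ac, au, c, i, a):
    impact = 10.41 * (1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a]))
    exploitability = 20 * AV[av] * AC[ac] * AU[au]
    f_impact = 0.0 if impact == 0 else 1.176
    return round(((0.6 * impact) + (0.4 * exploitability) - 1.5) * f_impact, 1)

# A remotely exploitable defect with complete C/I/A impact scores 10.0:
print(cvss2_base("N", "L", "N", "C", "C", "C"))  # 10.0
# The same access vector with only partial impact scores 7.5:
print(cvss2_base("N", "L", "N", "P", "P", "P"))  # 7.5
```

Recording scores this way gives the team a repeatable, comparable number for each component rather than an ad hoc severity label.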
5. Build the Threat Model
At this stage, the team can build a threat model based on the information collected earlier or use an existing one, such as the Practical Threat Analysis tool, a free data threat modeling methodology that helps organizations manage risks in their systems and build appropriate risk mitigation policies.
Figure 4: How to Build a Threat Model.
During this stage, collected assets are assigned a financial value, threats are named and classified based on their occurrence probability and damage levels, and vulnerabilities are linked to actual software threats.
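The arithmetic behind this linkage can be sketched in a few lines: each threat combines an asset's financial value, a damage level, and an annual occurrence probability into an annualized loss expectancy (ALE). All asset values, damage levels, and rates below are hypothetical placeholders, not figures from the risk analysis process itself.

```python
# Sketch: compute annualized loss expectancy for each modeled threat
# (ALE = asset value x damage level x expected occurrences per year).
# Every figure below is a hypothetical example.
threats = [
    # (threat name, asset value ($), damage level 0-1, occurrences/year)
    ("Customer data disclosure via ERP defect", 500_000, 0.60, 0.5),
    ("ERP downtime from unpatched module",      200_000, 0.30, 2.0),
    ("Backup media theft",                      150_000, 0.80, 0.1),
]

def ale(value, damage, rate):
    return value * damage * rate

# Rank threats by annualized exposure, highest first.
for name, value, damage, rate in sorted(
        threats, key=lambda t: ale(*t[1:]), reverse=True):
    print(f"{name}: ${ale(value, damage, rate):,.0f}/year")
```

Ranking threats by ALE gives the team an economic, rather than purely technical, ordering of what to mitigate first.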
6. Build the Risk Mitigation Plan
After the threat model is created, the team identifies countermeasures for each vulnerability and records them in the data threat modeling tool. While the best countermeasure for a problem is fixing it, in reality, proper documentation may not exist because the programmer who wrote the code may not be employed with the organization. This means that other countermeasures may be required, such as the use of code wrappers or application proxies (i.e., an application that reduces the risk of Web-based attacks by restricting requests to Port 80 and enforcing rules for well-formed HTTP requests).
Regardless of the countermeasure chosen, each one performs one of the following actions: retain the existing component (e.g., leave the defect in place), modify the vulnerability (e.g., fix the component or put in a workaround), or add a component (e.g., call the global Lightweight Directory Access Protocol directory to authenticate online users instead of using a proprietary customer table). Furthermore, each countermeasure is assigned a cost and mitigation level. These costs can be fixed or variable, such as a one-time fee for fixing a problem or an ongoing maintenance fee.
Figure 5: Risk Mitigation Plan Depicting All Countermeasure Types.
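One way to compare countermeasures once each has a cost and mitigation level is to rank them by annual risk reduction per dollar spent. The sketch below assumes hypothetical costs and mitigation levels, and amortizes a one-time (fixed) fee over an assumed three-year horizon; none of these numbers come from the article.

```python
# Sketch: rank countermeasures by annualized risk reduction per dollar.
# Mitigation level = fraction of the threat's annualized loss removed.
# All figures are hypothetical; fixed (one-time) costs are amortized
# over an assumed three-year horizon, variable costs recur annually.
HORIZON_YEARS = 3

countermeasures = [
    # (name, annualized loss at risk ($), mitigation 0-1, fixed $, annual $)
    ("Fix the component",  150_000, 0.95, 30_000,     0),
    ("Application proxy",  150_000, 0.60,  5_000, 8_000),
    ("Code wrapper",       150_000, 0.40,  2_000, 1_000),
]

def benefit_per_dollar(loss, mitigation, fixed, annual):
    annual_cost = fixed / HORIZON_YEARS + annual
    return (loss * mitigation) / annual_cost

for row in sorted(countermeasures,
                  key=lambda r: benefit_per_dollar(*r[1:]), reverse=True):
    print(f"{row[0]}: {benefit_per_dollar(*row[1:]):.1f}x per dollar")
```

Under these sample numbers the cheap code wrapper delivers the most risk reduction per dollar even though the outright fix removes more total risk, which is exactly the trade-off the mitigation plan is meant to surface.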
7. Validate Findings
When validating findings, the objective is to qualify software components and vulnerabilities based on where they are in the system, which assets are involved, what actions they perform, why they perform the way they do, and when a component is activated. No limits should be placed on what questions are asked. Users may downgrade low-risk software components, escalate others for priority attention, or add or remove assets from the model. For example, a server-side order confirmation script that sends e-mails to customers may have received a low CVSS score, leading the team to eliminate the vulnerability from the list.
Implementing the steps above will help organizations construct a systematic risk assessment process that examines operational threats to enterprise software. This risk assessment information will provide executive managers with a clear picture of the top operational risks that affect the organization the most. The information also will help business decision makers choose and implement the most cost-effective controls based on the operational risks identified.
In addition to these steps, auditors can recommend that the risk assessment team provide executives with the financial benefits of reducing defects by quantifying risks in terms of the number of software vulnerabilities, the threats the organization would be exposed to if the threat is not corrected, and the IT assets that are affected by these threats. This enables executives to understand the value of correcting software vulnerabilities earlier in the application's life cycle, as well as ensures their support for the project.
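One simple way to present that financial benefit is as the drop in total annualized exposure if the identified defects are corrected. The component names, exposures, and mitigation levels below are hypothetical placeholders used only to show the arithmetic.

```python
# Sketch: quantify the benefit of removing defects as the reduction in
# total annualized exposure. All figures are hypothetical examples.
vulnerabilities = [
    # (component, annualized exposure ($), mitigation level if fixed 0-1)
    ("Payroll export module",     150_000, 0.95),
    ("Order confirmation script",   4_000, 0.50),
    ("Customer login form",        90_000, 0.90),
]

before = sum(exposure for _, exposure, _ in vulnerabilities)
after = sum(exposure * (1 - level) for _, exposure, level in vulnerabilities)
print(f"Exposure before: ${before:,.0f}; after: ${after:,.0f}; "
      f"annual benefit: ${before - after:,.0f}")
```

Expressing the assessment's output as a single before-and-after dollar figure gives executives the concise business case the team needs to secure ongoing support.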
As mentioned earlier, communication between software developers and information security staff is crucial when identifying enterprise software defects. To this end, auditors need to recommend that developers and information security staff communicate regularly during the software development and risk assessment process. This communication can take the form of regular meetings or can be facilitated by an online knowledge base and ticketing tool, such as Issue-Tracker, that provides an updated picture of well-known defects and security events. For instance, by publishing CVSS scores and countermeasure costs, the developer and security staff can respond to a particular type of event in a consistent fashion.
Finally, the complexity of enterprise systems requires that those responsible for performing the risk assessment have the necessary skills to do so. To maximize the team's efforts, auditors must recommend that expert risk analysts who can facilitate the process become part of the assessment team. These analysts can help organizations to identify fault-prone modules in companywide operations that have the most impact on system reliability and downtime, develop sustaining metrics to reduce defects, train application programmers in best security practices, and help the organization choose risk analysis processes that have a high return on investment.