Business applications run the gamut of an organization's operations. From accounting packages and intranet portals to comprehensive enterprise resource planning (ERP) systems, nearly all of an organization's mission-critical data flows through these applications. The role of IT auditors, therefore, is to determine whether proper controls are in place to protect the information residing in these systems.
Auditors can use various approaches when carrying out a comprehensive audit of an application's security controls. Learning about each of these assessment techniques enables auditors to determine ahead of time which method will yield the best results and gives them the knowledge they need to evaluate an application's security functionality more effectively.
Assessments of an application's security features can range in detail and scope. The most widely used methods for evaluating system security controls include the use of high-level design audits, black-box or penetration tests, and source code reviews. The next three sections provide a more detailed description of each assessment option.
High-level Design Audits
High-level design audits are conducted to evaluate an application's overall design as well as identify the flow of information throughout the application environment; any sensitive data that the application creates, stores, and transmits; and threats to the information in question. As part of the audit, the IT auditor should evaluate different countermeasures that can help the organization mitigate the risks posed by each threat.
A high-level design audit is best conducted at the application's design stage and not after development or testing begins. These audits also can be conducted prior to a code review so that specific functions or pieces of code can be assessed, rather than doing a more tedious line-by-line review at a later stage.
A commonly used way of conducting a high-level design audit is the threat modeling exercise, a technique that helps identify threats, attacks, vulnerabilities, and countermeasures within the application. The objectives of the threat modeling exercise are to understand where the application is most vulnerable, evaluate all threats to the application, and reduce overall security risks. These objectives usually are achieved in three steps:
- Identifying assets.
- Detecting and rating threats.
- Discovering vulnerabilities.
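The three steps above can be sketched as simple data structures the audit team fills in as the exercise progresses. This is a minimal illustration, not part of any formal methodology; the asset, threat, and vulnerability names are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Threat:
    name: str
    rating: str  # e.g., "high", "medium", "low"

@dataclass
class Asset:
    name: str
    threats: list = field(default_factory=list)
    vulnerabilities: list = field(default_factory=list)

# Step 1: identify assets.
credentials = Asset("user credentials")

# Step 2: detect and rate threats against each asset.
credentials.threats.append(Threat("identity theft", rating="high"))

# Step 3: discover vulnerabilities that make those threats exploitable.
credentials.vulnerabilities.append("weak session ID generation")

print(credentials)
```

Recording the model this way lets the team trace each proposed countermeasure back to the asset and threat that justify it.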
Following is an example that illustrates how to perform a threat modeling exercise for an instant messaging (IM) application based on these three steps.
Threat Modeling of an IM Application
IM applications are peer-to-peer software that allow text and voice communication between two or more users. Common IM applications are Yahoo! Messenger, MSN Messenger, Google Talk, and AOL Instant Messenger. Threat modeling exercises for IM applications usually include the following elements:
- An overview of the application and its security objectives.
- An identification of assets.
- A detection and rating of threats.
- An identification of vulnerabilities.
Below is a description of each element.
The application's security objectives should be stated clearly. For an IM application, these might be proper authentication of user credentials, secure communication between IM clients, availability of the messaging service, and secured session management.
IM applications typically have a client-server architecture. As a result, it is important to identify the components of the application and the communication strategy among these disparate, yet connected architecture segments. The main components of an IM application and their functions include:
- Client activities (e.g., sending and receiving messages, adding and deleting contacts, and customizing the client environment).
- Server activities (e.g., managing the database of users subscribed to the IM service, overseeing session details, and providing notification functionality).
- IM communication protocols (e.g., identifying specific message formats and sequences).
The IM software stores and transmits sensitive information, including usernames and passwords, profiles and other customized user data, and files sent and received.
The IM application's client-server architecture may be vulnerable to threats, such as:
- Identity theft, enabled by weak authentication and session management mechanisms.
- Data theft, enabled by insecure access control mechanisms.
- Privacy breaches, enabled by weak authentication or server protection mechanisms.
- Remote code execution, enabled by buffer overflows.
- Social engineering, carried out through phishing and cross-site scripting attacks.
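Once threats such as these are detected, they need to be rated so countermeasure work targets the highest risks first. One common scheme (not mandated by the source) is Microsoft's DREAD model; the sketch below averages the five DREAD factors, with scores that are purely illustrative.

```python
def dread_score(damage, reproducibility, exploitability,
                affected_users, discoverability):
    """Average the five DREAD factors (each scored 1-10) into one rating."""
    return (damage + reproducibility + exploitability +
            affected_users + discoverability) / 5

# Illustrative scores for the IM threats listed above.
threats = {
    "identity theft":        dread_score(8, 7, 6, 8, 7),
    "remote code execution": dread_score(10, 5, 4, 9, 5),
    "privacy breach":        dread_score(7, 6, 5, 8, 6),
}

# Rank threats from highest to lowest risk.
for name, score in sorted(threats.items(), key=lambda t: -t[1]):
    print(f"{name}: {score:.1f}")
```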
One of the most crucial steps in the threat modeling process is identifying the application's vulnerabilities. These may include:
- Message field overflows. The attacker could construct a message that causes the remote IM client to crash by overflowing the message field or by overflowing other IM components.
- File transfer buffer overruns. A file with an excessively long name can cause a buffer overflow when the client's IM software tries to download the file from the server.
- Cross-site scripting. HTTP-based IM components can allow malicious scripts to be injected and executed at the user's end.
- Username spoofs. An attacker can spoof a valid session ID and flood a remote user client without being identified.
For more information on threat modeling, IT auditors can visit Microsoft's Application Threat Modeling Web page. Microsoft also has developed a free threat modeling tool that can be downloaded from its Web site (Microsoft Word).
Black-box Tests
Also called penetration tests, black-box tests help IT auditors identify and exploit vulnerabilities using fault-injection techniques (i.e., randomly introducing failures at a lower architectural layer than the software being tested). These types of tests are most commonly done for Web applications and are conducted when the testing team does not have a clear idea of the application's internal workings. The objective of the exercise is to compromise the security of the data housed or used in the application by:
- Testing communication behavior.
- Identifying fault-injection points.
- Identifying the application's client-side behavior.
- Testing interactions with third-party applications.
- Interpreting file formats.
- Performing cryptanalysis.
Below is a description of each.
Testing Communication Behavior
When testing communication behavior, the overall communication among clients and between the client and the server is tested. Some of the points to address include:
- The message formats and sequences of communication exchanges among users.
- The message formats and sequences of communication exchanges between the user and the server.
- How sessions are maintained and tracked and the uniqueness of each session ID or other identifiers.
- The possibility of a malicious user impersonating another user by replaying certain messages.
Identifying Fault-injection Points
A fault injection point is the point in the application that receives input for processing. A major security problem with a number of applications is the lack of proper input validation at each fault-injection point. This can lead to a number of attacks being launched against the application, including buffer overflows (i.e., entering more data than the input buffer is designed to handle), format string bugs (i.e., entering formatting characters instead of string data), and other invalid character insertions. Possible fault-injection points include ports opened by the application, message sequences and formats for the features that are offered by the application, and insecure file access and registry permissions.
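A minimal fault-injection sketch of the idea above: send oversized inputs, format-string characters, and invalid bytes to a port opened by the application and watch whether the service keeps responding. The host and port are placeholders for the system under test, not real endpoints.

```python
import socket

PAYLOADS = [
    b"A" * 10_000,    # oversized input -> possible buffer overflow
    b"%s%s%s%n",      # format string characters instead of string data
    b"\x00\xff\xfe",  # invalid/control character insertion
]

def inject(host, port, payload, timeout=2.0):
    """Return True if the service still responds after the payload."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.sendall(payload)
            s.recv(1024)
        return True
    except OSError:
        # A refused, reset, or timed-out connection may indicate a crash.
        return False

# Placeholder target; with no listener, inject() simply returns False.
for p in PAYLOADS:
    print(len(p), inject("127.0.0.1", 1, p))
```

In a real test, each payload that makes the service stop responding marks a fault-injection point that lacks proper input validation.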
Identifying the Application's Client-side Behavior
To identify the application's client-side behavior, auditors need to understand how the client software interfaces with the underlying operating system (i.e., knowing which system files, registry keys, and other parameters the application uses and changes). When identifying client-side behavior, the auditor can use tools that scan the memory of the system, evaluate which parameters the application stores in memory, and determine which parameters have sensitive data. In addition, these tools should identify registry keys, system files and folders, and Windows application programming interfaces accessed by the application.
Interpreting File Formats
Applications often have the ability to read different file formats and interpret them. One common example is .INI files, which store configuration information the application can use. When reading and interpreting these files, the application might expect the input to be in a specific format. However, if the file can be manipulated, the format can be modified and the application could malfunction when it has to interpret nonstandard data inputs. Therefore, the application must have correct input validation to deal with nonstandard file formats.
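A sketch of the kind of input validation described above, using Python's standard `configparser` to load an .INI file and reject anything outside an expected structure. The section names, keys, and limits are invented for the example.

```python
import configparser

# Whitelist of the sections and keys the application expects.
ALLOWED = {"connection": {"server", "port"}}

def load_config(text):
    """Parse INI text, rejecting unexpected sections, keys, or values."""
    parser = configparser.ConfigParser()
    parser.read_string(text)
    for section in parser.sections():
        if section not in ALLOWED:
            raise ValueError(f"unexpected section: {section}")
        for key in parser[section]:
            if key not in ALLOWED[section]:
                raise ValueError(f"unexpected key: {key}")
    port = parser.getint("connection", "port")  # fails on non-numeric data
    if not 1 <= port <= 65535:
        raise ValueError("port out of range")
    return parser

cfg = load_config("[connection]\nserver = im.example.com\nport = 5190\n")
print(cfg["connection"]["server"])
```

An application that validates this way can reject a manipulated file instead of malfunctioning when it interprets nonstandard data.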
Performing Cryptanalysis
Cryptanalysis is the study of methods for obtaining the meaning of encrypted information without access to the secret information that normally would be required. Most applications use some form of encryption either when storing confidential data on a disk or when transmitting confidential data over a network. Cryptanalysis can then be performed to evaluate the strength of the encryption algorithms and key exchange protocols in use. This penetration testing technique consists of tests that check the format, location, and procedures for saving the client's sensitive information, as well as the algorithms and key exchange mechanisms used for encrypted communication over the network.
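One simple cryptanalysis check an auditor can automate is measuring the Shannon entropy of data the application claims to encrypt: strong ciphers produce near-random output (close to 8 bits per byte), while weak obfuscation such as a single-byte XOR leaves the plaintext's byte distribution intact. The sample data below is illustrative.

```python
import math
from collections import Counter

def entropy(data):
    """Shannon entropy of a byte string, in bits per byte."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

plaintext = b"password=secret " * 64
xored = bytes(b ^ 0x5A for b in plaintext)  # weak single-byte XOR "cipher"

# XOR with a constant permutes byte values but keeps their distribution,
# so entropy is unchanged -- a red flag for the auditor.
print(f"plaintext: {entropy(plaintext):.2f} bits/byte")
print(f"xored:     {entropy(xored):.2f} bits/byte")
```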
Different tools are available online that can help IT auditors conduct more effective black-box tests. For a list of some of the most common tools, read "List of Commonly Used Black-box Testing Tools" (Microsoft Word).
Source Code Reviews
Source code reviews involve the evaluation of the application's source code while looking for common software security problems in a systematic manner. During source code reviews, the audit team determines the application's most critical or high-risk areas and focuses its efforts on those areas. Different tools are available for auditors who wish to conduct a source code audit. For a list of some of the most commonly used free tools, read "List of Source Code Audit Tools" (Microsoft Word).
Security bugs usually are identified during source code reviews. Often semantic in nature, security bugs are the result of a coding error or bad design decision. An example of a bug is a buffer overflow where the memory allocated is not sufficient to hold the data being passed to it, which results in the writing of data to a privileged memory location. This writing of data can allow an attacker to overwrite the privileged memory location with malicious code that is then executed by the application.
Security bugs, such as a buffer overflow or format string vulnerability, tend to be language specific. Hence, each code section needs to be reviewed for threats that belong to the following vulnerability categories:
- Cryptography.
- Authentication.
- Session management.
- Data validation.
- Exception management.
- Authorization.
- Auditing and logging.
Following is a brief description of each.
Cryptography
As mentioned earlier, applications use encryption techniques when storing or transmitting sensitive information. When reviewing cryptographic vulnerabilities, auditors should identify key generation, storage, transmission, and disposal mechanisms as well as the encryption algorithms and key exchange protocols being used.
Authentication
Authentication refers to validating a user's supplied credentials, typically a username and password. When reviewing authentication methods, auditors should determine:
- How users are identified and authenticate themselves to the application.
- The protocols used to communicate the authentication information and how the information is secured.
- Mechanisms for storage of authentication information.
- The mechanisms that exist to prevent identity theft.
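When examining the storage mechanism for authentication information, one pattern the auditor should look for is passwords kept only as salted, iterated hashes rather than in plaintext. The sketch below uses PBKDF2 from Python's standard library; the parameter values are illustrative, not a recommendation from the source.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Store only a random salt and an iterated hash, never the password."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("s3cret")
print(verify_password("s3cret", salt, digest))  # True
print(verify_password("guess", salt, digest))   # False
```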
Session Management
Most Web-based applications use an identifier, called a session ID, to identify a specific user session. When reviewing session management activities, auditors should determine whether the algorithm used to generate a session ID is random, which mechanism is being used to expire a session ID, and how session IDs are verified once an authenticated user has been allocated the ID.
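A sketch of what a sound session management implementation looks like against those three criteria: IDs drawn from a cryptographically secure source, expired server-side, and verified on every request. The in-memory store and function names are illustrative.

```python
import secrets

SESSIONS = {}  # stands in for server-side session storage

def create_session(username):
    sid = secrets.token_urlsafe(32)  # unpredictable, 256 bits of entropy
    SESSIONS[sid] = username
    return sid

def verify_session(sid):
    """Look the ID up server-side; unknown or expired IDs return None."""
    return SESSIONS.get(sid)

def expire_session(sid):
    SESSIONS.pop(sid, None)  # server-side expiry invalidates the ID

sid = create_session("alice")
print(verify_session(sid))  # alice
expire_session(sid)
print(verify_session(sid))  # None
```

A reviewer would flag the opposite pattern, such as sequential or timestamp-derived IDs, which make session hijacking feasible.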
Data Validation
Input validation is the key to building a secure application. The methods used to validate data should be checked for the following:
- How user input is validated (i.e., whether all acceptable inputs are identified and everything else is rejected, which is the preferred option, or all unacceptable characters and inputs are identified and rejected while everything else is accepted).
- Whether the input validation is centralized or each module has its own input validation.
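The preferred "accept known good" approach can be sketched as a single centralized routine, addressing both points above. The field names and patterns are invented for the example.

```python
import re

# One central table of allowed patterns, shared by every module.
RULES = {
    "username": re.compile(r"[A-Za-z0-9_]{3,20}"),
    "port":     re.compile(r"[0-9]{1,5}"),
}

def validate(field, value):
    """Return True only if the value fully matches the field's whitelist."""
    rule = RULES.get(field)
    return bool(rule and rule.fullmatch(value))

print(validate("username", "alice_01"))  # True
print(validate("username", "<script>"))  # False: rejected by default
```

Centralizing validation this way means a reviewer can audit one routine instead of hunting for ad hoc checks in every module.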
Exception Management
When an application malfunctions, it should be able to handle the exception gracefully by showing the user a generic error message or by returning to a previously known stable state. When reviewing exception management activities, auditors should identify the error handling mechanism deployed throughout the application. In addition, auditors need to determine whether the application reveals any sensitive information through an error message. Often during the testing stage, error messages need to be more verbose and descriptive, but when the application is deployed and goes into production, the error messages should be far less descriptive, while still providing enough help for the user to proceed normally.
Authorization
Applications often have different privilege levels for different groups of users. The method by which a user is identified for the specific privilege levels applicable to him or her is known as authorization. When reviewing authorization activities, auditors should evaluate the different user levels created, the privileges that are provided to each user level and how they are enforced, and whether it is possible for attackers to bypass authorization mechanisms to elevate their privilege levels.
Auditing and Logging
Applications archive certain events to log files. Usually, log levels and data can be configured. When auditing and reviewing logging activities, auditors should identify the events that are being logged, whether the audit log is secured and how, and the reporting and filtering mechanisms that are provided for the audit trail.
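One way an audit log can be secured, which a reviewer might look for, is hash chaining: each entry incorporates the hash of the previous one, so any tampering breaks the chain. This is an illustrative sketch, not a claim about how any particular application implements it; the event fields are invented.

```python
import hashlib
import json

def append_event(log, event):
    """Append an event whose hash covers the previous entry's hash."""
    prev = log[-1]["hash"] if log else "0" * 64
    entry = {"event": event, "prev": prev}
    entry["hash"] = hashlib.sha256(
        (prev + json.dumps(event, sort_keys=True)).encode()
    ).hexdigest()
    log.append(entry)

def verify_log(log):
    """Recompute the chain; any edited or removed entry breaks it."""
    prev = "0" * 64
    for entry in log:
        expected = hashlib.sha256(
            (prev + json.dumps(entry["event"], sort_keys=True)).encode()
        ).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log = []
append_event(log, {"user": "alice", "action": "login"})
append_event(log, {"user": "alice", "action": "delete_contact"})
print(verify_log(log))                # True
log[0]["event"]["action"] = "logout"  # tampering with an entry...
print(verify_log(log))                # ...is detected: False
```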
Security vulnerabilities can lead to significant financial losses. What's more, the cost of fixing the vulnerability rises exponentially as an application progresses through its development life cycle. Based on the application's criticality and the sensitivity of the data it handles, internal auditors might use a high-level design audit, black-box test, or source code review to evaluate the application's security controls. These approaches also can be combined to result in a highly effective and comprehensive application security review.
For additional information about the application security audit process, auditors can visit the Open Web Application Security Project Web site. This site provides information to help organizations find and eliminate the causes leading to insecure software. The Web site also features a free comprehensive manual for designing, developing, and deploying secure Web applications. The OWASP Guide to Building Secure Web Applications is found at www.owasp.org/index.php/OWASP_Guide_Project.