Technology


Transforming Assurancehttps://iaonline.theiia.org/2019/Pages/Transforming-Assurance.aspxTransforming Assurance<p>The IIA's Core Principles for the Professional Practice of Internal Auditing use the term <em>risk-based assurance</em> instead of <em>reasonable assurance</em>, which implies that there are different levels of assurance based on multiple risk factors. That creates an opportunity for internal audit to move its work to a higher level by delivering enhanced assurance to the board and management. </p><p>Enhanced assurance does not imply reductions in risk. Instead, it refers to asking better questions about the risks that matter as well as the risks that should be automated for greater efficiency. It's about developing assurance at scale to cover the breadth of operations and strategic initiatives efficiently and cost-effectively.</p><p>Computerized fraud detection is one example of delivering assurance at scale. In 2002, WorldCom internal auditor Gene Morse discovered a $500 million debit in a property, plant, and equipment account by searching a custom data warehouse he had developed. Morse's mining of the company's financial reporting system ultimately uncovered a $1.7 billion capitalized line cost entry made in 2001, according to the <em>Journal of Accountancy</em>. </p><p>This example illustrates how fraud or intentional errors can occur in limited transactions with catastrophic outcomes. Enhanced assurance techniques such as data mining can uncover these transactions, which traditional audit techniques such as discovery, stratification, and random sampling may miss. Today's technologies can enable internal audit functions to automate their operations and provide enhanced assurance, but to do so, they must reframe their strategy. </p><h2>Better Teams</h2><p>Data analytics and audit automation platforms provide internal auditors, whether novices or experts, with the means to build assurance at scale. 
The technologies also create the opportunity to form better teams. </p><p>Small, focused teams are more productive than large, consensus-driven teams directed from the top down, author Jacob Morgan notes. Writing in <em>Forbes</em>, Morgan cites Amazon CEO Jeff Bezos' "two-pizza" rule: "If a team cannot be fed by two pizzas, then that team is too large." Morgan says adding people to a team increases the communication and bureaucracy required, which can slow the team down.</p><p>Collaboration with automation can modernize the performance of small teams. Intelligent automation can integrate oversight into operations, reduce human error, improve internal controls, and create situational awareness where risks need to be managed. Automation-enabled collaboration can help reduce redundancies in demands on IT departments, as well. However, efficiency transformations often fail when projects underestimate the impact of change on people. </p><h2>The Human Element</h2><p>Many of the biggest assurance risks are related to people, yet auditing human behavior is too often the weakest link. The 2018 IBM X-Force Threat Intelligence Index finds "a historic 424% jump in breaches related to misconfigured cloud infrastructure, largely due to human error." IBM's report assumes that decisions, big or small, contribute to risk. Moreover, the vulnerabilities at the intersection of human behavior and technology represent a growing body of risks to be addressed. </p><p>Separate studies from IBM, the International Risk Management Institute, and the U.S. Department of Defense find that human error is a key contributor to operational risk across industries and represents friction in organizational performance. The good news is automation creates an opportunity to reduce human error and to improve insights into operational performance. 
Chief audit executives (CAEs) can collaborate with the compliance, finance, operations, and risk management functions to develop automation that supports each of these key assurance providers and stakeholders. </p><h2>The Role of Technology</h2><p>Technology enables enhanced assurance by leveraging analytics to ask and answer complex questions about risk. Analytics is the key to finding new insights hidden within troves of unexplored data in enterprise resource planning systems, confidential databases, and operations. </p><p>The end goal, ideally, is technology solutions that improve situational awareness in audit assurance. Situational awareness in auditing is not a one-size-fits-all approach. In some organizations, situational awareness involves improved data analysis; in others, it may include a range of continuous monitoring and reporting in near real-time. </p><p>Intelligent automation addresses issues with audit efficiency and quality. First, auditors spend, on average, half their time on routine processes that could be automated, which would improve data consistency and reduce error rates. Data governance allows other oversight groups to leverage internal audit's work, reducing redundancy of effort. </p><p>Second, smart automation leads to business intelligence. As more key processes are automated, they provide insights into changing conditions that may have been overlooked using periodic sampling techniques at points in time. </p><p>Most events are high frequency but low impact, yet auditors, IT staff, and risk and compliance professionals spend the bulk of their time chasing down these events. That leaves little time for them to focus on the real threats to the organization. Automation works best at solving high frequency events that are routine and add little value in terms of new information on known risks. 
Instead of focusing on the shape of risk, auditors will be able to drill down into the data to understand specific causes of risk.</p><h2>Steps to Enhanced Assurance</h2><p>Before buying automation, CAEs should answer three questions: How will automation improve audit assurance? How will automation make processes more efficient? How will auditors use it to improve audit judgment?</p><p>The CAE should consider automation an opportunity to raise awareness with the board and senior executives about enhanced assurance and better risk governance. To do so, internal audit must align enhanced assurance with the strategic objectives of senior executives. </p><p>To implement enhanced assurance in the internal audit function, CAEs should follow three steps:</p><p></p><ul><li>Identify the greatest opportunities to automate routine audit processes.</li><li>Prioritize automation projects during each budget cycle in coordination with the operations, risk management, IT, and compliance functions. </li><li>Consider the questions most important to senior executives: Which risks pose the greatest threat to the organization's goals? How well do we understand risk uncertainties across the organization? Do existing controls address the risks that really matter?</li></ul><h2>Assurance and Transformation</h2><p>The World Economic Forum calls today's digital transformation the fourth Industrial Revolution and forecasts that it could generate $100 trillion for business and society by 2025. Every business revolution has been disruptive, and this one will be no exception. The difference in outcomes will depend largely on how well organizations respond to change.</p><p>Forward-looking internal audit departments already are delivering enhanced assurance by strategically focusing on the roles people, technology, and automation play in creating higher confidence in assurance. Other audit functions are in the early stage of transformation. 
Although these audit functions will make mistakes along the way, now is the time for them to build new data analysis and data mining skills, and to learn the strengths and weaknesses of automation. As these tools become more powerful and easy to use, enhanced assurance will set a new high bar in risk governance. </p>James Bone1
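The kind of assurance-at-scale screening the WorldCom example describes can be sketched in a few lines of Python. This is a minimal illustration, not the tool Morse actually built: the `account` and `amount` fields, the z-score rule, and the threshold are all assumptions for demonstration.

```python
# Hypothetical sketch: flag journal entries that are extreme outliers for
# their account, in the spirit of the WorldCom example. The data layout
# (account, amount) and the 4-standard-deviation threshold are assumptions.
from collections import defaultdict
from statistics import mean, stdev

def flag_outlier_entries(entries, threshold=4.0):
    """Return entries whose amount is more than `threshold` standard
    deviations from the mean amount for their account."""
    by_account = defaultdict(list)
    for e in entries:
        by_account[e["account"]].append(e["amount"])

    flagged = []
    for e in entries:
        amounts = by_account[e["account"]]
        if len(amounts) < 2:
            continue  # not enough history to judge this account
        mu, sigma = mean(amounts), stdev(amounts)
        if sigma > 0 and abs(e["amount"] - mu) / sigma > threshold:
            flagged.append(e)
    return flagged

# Fifty routine entries plus one enormous debit, echoing the article's example.
entries = (
    [{"account": "PP&E", "amount": 1_000_000 + i * 10_000} for i in range(50)]
    + [{"account": "PP&E", "amount": 500_000_000}]
)
print(flag_outlier_entries(entries))
```

Run against every account in the ledger, a screen like this covers the full population of entries, which is exactly what discovery or random sampling cannot do.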
Stronger Assurance Through Machine Learninghttps://iaonline.theiia.org/2019/Pages/Stronger-Assurance-Through-Machine-Learning.aspxStronger Assurance Through Machine Learning<p>By now, most internal audit functions have likely implemented rule-based analytics capabilities to evaluate controls or identify data irregularities. While these tools have served the profession well, providing useful insights and enhanced stakeholder assurance, emerging technologies can deliver even greater value and increase audit effectiveness. With the proliferation of digitization and the wealth of data generated by modern business processes, now is an opportune time to extend beyond our well-worn approaches.</p><p>In particular, machine learning (ML) algorithms represent a natural evolution beyond rule-based analysis. Internal audit functions that incorporate ML into their existing toolkit can expect to develop new capabilities to predict potential outcomes, identify patterns within data, and generate insights difficult to achieve through rudimentary data analysis. Those looking to get started should first understand common ML concepts, how ML can be applied to audit work, and the challenges likely to arise along the way. </p><h2>What Is Machine Learning?</h2><p>ML is a branch of artificial intelligence (AI) featuring algorithms that learn from past patterns and examples to perform a specific task. How does an ML algorithm "learn," and how is this different from rule-based systems? Rule-based systems generate an outcome by evaluating specific conditions — for example, "If it is raining, carry an umbrella." These systems can be automated — such as through the use of robotic process automation — but they are still considered "dumb" and incapable of processing inputs unless provided explicit instructions.</p><p>By contrast, an ML model generates probable outcomes for "Should I carry an umbrella?" 
by taking into account inputs such as temperature, humidity, and wind and combining these with data on prior outcomes from when it rained and when it did not. Machine learning can even consider the user's schedule for the day to determine if he or she will likely be outdoors when rain is predicted. With ML models, the best predictor of future behavior is past behavior. Such systems can generate useful real-world insights and predictions by inferring from past examples. </p><p>As an analogy, most people who have built objects using a Lego set, such as a car, follow a series of rules — a step-by-step instruction manual included with the construction toys. After building the same Lego car many times, even without written instructions, an individual would acquire a reasonable sense of how to build a similar car given the Lego parts. Likewise, an ML algorithm with sufficient training — prior practice assembling the Lego car — can provide useful outcomes (build the same car) and identify patterns (relationships between the Lego parts) given an unknown set of inputs (previously unseen Lego parts) even without instructions. </p><h2>Common Concepts</h2><p>The outcomes and accuracy of ML algorithms are highly dependent on the inputs provided to them. A conceptual grasp of ML processes hinges on understanding these inputs and how they impact algorithm effectiveness.</p><p> <strong>Feature</strong> Put simply, a feature is an input to a model. In an Excel table populated with data, one data column represents a single feature. The number of features, also referred to as the dimensionality of the data, varies depending on the problem and can range up to the hundreds. If a model is developed to predict the weather, data such as temperature, pressure, humidity, types of clouds, and wind conditions comprise the model's features. 
ML algorithms are well-suited to such multidimensional analysis of data.</p><p> <strong>Feature Engineering</strong> In a rule-based system, an expert will create rules to determine the outcome. In an ML model, an expert selects the specific features from which the model will learn. This selection process is known as feature engineering, and it represents an important step toward increasing the algorithm's precision and efficiency. The expert also can refine the selection of inputs by comparing the outcomes of different input combinations. Effective feature engineering should reduce the number of features within the training data to just those that are important. This process will allow the model to generalize better, with fewer assumptions and reduced bias.</p><p> <strong>Label</strong> An ML model can be trained using past outcomes from historical data. These outcomes are identified as labels. For instance, in a weather prediction model, one of the labels for a historical input date might be "rained with high humidity." The ML model will then know that it rained in the past, based on the various temperature, pressure, humidity, cloud, and wind conditions on a particular day, and it will use this as a data point to help predict the future.</p><p> <strong>Ensemble Learning</strong> One common way to improve model accuracy is to incorporate the results of multiple algorithms. This "ensemble model" combines the predicted outcomes from the selected algorithms and calculates the final outcome using the relative weight assigned to each one.</p><p> <strong>Learning Categories</strong> The way in which an ML algorithm learns can generally be separated into two broad categories — supervised and unsupervised. Which type might work best depends on the problem at hand and the availability of labels. </p><ul><li>A <em>supervised learning</em> algorithm learns by analyzing defined features and labels in what is commonly called the training dataset. 
By analyzing the training dataset, the model learns the relationship between the defined features and past outcomes (labels). The resulting supervised learning model can then be applied to new datasets to obtain predicted results. To assess its precision, the algorithm will be used to predict the outcomes from a testing dataset that is distinct from the training dataset. Based on the results of this training and testing regime, the model can be fine-tuned through feature engineering until it achieves an acceptable level of accuracy. <br><br></li><li>Unlike supervised learning, <em>unsupervised learning</em> algorithms do not have past outcomes from which to learn. Instead, an unsupervised learning algorithm tries to group inputs according to the similarities, patterns, and differences in their features without the assistance of labels. Unsupervised learning can be useful when labeled data is expensive or unavailable; it is effective at identifying patterns and outliers in multidimensional data that, to a person, may not be obvious. </li></ul><h2>Stronger Assurance</h2><p> <img src="/2019/PublishingImages/Lee-overview-of-ML-payment-analytics.jpg" class="ms-rtePosition-2" alt="" style="margin:5px;width:600px;height:305px;" />An ML model's capacity to provide stronger assurance, compared to rule-based analysis, can be illustrated using an example of the technology's ability to identify anomalies in payment transactions. "Overview of ML Payment Analytics" (right) shows the phases of this process.</p><p>Developing an ML model to analyze payment transactions will first require access to diverse data sources, such as historical payment transactions for the last three years, details of external risk events (e.g., fraudulent payments), human resource (HR) data (e.g., terminations and staff movements), and details of payment counterparties. 
Before feature engineering work can start, the data needs to be combined and then reviewed to verify it is free of errors — commonly called the extract, transform, and load phase. During this phase, data is extracted from various source systems, converted (transformed) into a format that can be analyzed, and stored (loaded) in a data warehouse.</p><p>Next, the user performs feature engineering to shortlist the critical features — such as payment date, counterparty, and amount — the model will analyze. To refine the results, specific risk weights, ranging from 0 to 1, are assigned to each feature based on its relative importance. From experience, a real-world payment analytics model may use more than 150 features. The ability to perform such multidimensional analysis of features represents a key reason to use ML algorithms instead of simple rule-based systems.</p><p>To begin the analysis, internal auditors could apply an unsupervised learning algorithm that identifies payment patterns to specific counterparties, potentially fraudulent transactions, or payments with unusual attributes that warrant attention. The algorithm performs its analysis by identifying the combination of features that fit most payments and producing an anomaly score for each payment, depending on how its features differ from all others. It then derives a risk score for each payment from the risk weight and the anomaly score. This risk score indicates the probability of an irregular payment. </p><p>"Payment Outliers" (below right) illustrates a simple model using only three features, with two transactions identified as outliers. The unsupervised learning model generates a set of potential payment exceptions. These exceptions are followed up to determine if they are true or false. The results can then be used as labels to incorporate supervised learning into the ML model, enabling identification of improper payments with a significantly higher degree of precision. 
</p><p>Supervised learning models can also be used to predict the likelihood of specific outcomes. By training an algorithm using labels on historical payment errors, the model can help identify potential errors before they occur. For example, based on past events a model may learn that the frequency of erroneous payments is highly correlated with specific features, such as high frequency of payment, specific time of day, or staff attrition rates. A supervised learning model trained with these labels can be applied to future payments to provide an early warning for potential payment errors.</p><p>This anomaly detection model can be applied to datasets with clear groups, though the data should not contain a significant number of transactions that differ greatly from the rest. The model can be extended to detect irregularities in almost any area, including expenses, procurement, and access granted to employees. </p><h2>Deeper Insights</h2><p> <img src="/2019/PublishingImages/Lee-payment-outliers.jpg" class="ms-rtePosition-2" alt="" style="margin:5px;width:540px;height:511px;" />Continuing with the payment example, an ML model developed to analyze payment transactions can be used to uncover hidden patterns or unknown insights. Examples include: </p><ul><li>Identify overpayment for services by comparing the mean and typical variance in payment amounts for each product type — such as air tickets or IT services — and highlighting all payments that deviate significantly from the mean.<br><br> </li><li>Identify previously unknown emerging needs — such as different departments paying for a new service at significantly different prices — or client types by highlighting payment outliers. This insight could allow executives to optimize the cost for acquired products and services. <br><br></li><li>Identify multiple consecutive payments to a single counterparty below a specific threshold. 
This analysis would help identify suspicious payments that have been split into smaller ones to potentially escape detection. <br><br></li><li>Identify potential favoritism shown to specific vendors by pinpointing significant groups of payments made to these vendors or related entities. </li></ul><h2>Key Challenges</h2><p>Internal auditors are likely to encounter numerous challenges when applying ML technology. Input quality, biases and poor performance, and lack of experience with the technology are among the most common.</p><p> <strong>Availability of Clean, Labeled Data</strong> For any ML algorithm to provide meaningful results, a significant amount of high-quality data must be available for analysis. For instance, developing an effective payment anomaly detection model requires at least a year of transactional, HR, and counterparty information. Data cleansing, which involves correcting and removing erroneous or inaccurate input data, is often required before the algorithm can be trained effectively. Experience shows that data exploration and data preparation often consume the greatest amount of time in ML projects. Training data that is not representative of the actual environment will bias the model's output. Also, without good labels — such as labels on actual cyber intrusions — and feature engineering, a supervised learning model will be biased toward certain outcomes and may generate noisy, or meaningless, results.</p><p> <strong>Poor Model Performance and Biases</strong> Most internal audit functions that embark on ML projects will initially receive disappointing or inaccurate results from at least some of their models. Potential sources of failure may include trained models that do not generalize well, poor feature engineering, use of algorithms that are ill-suited to the underlying data, or scarcity of good quality data. 
</p><p>Overfitting is another potential cause of poor model performance — and one that data scientists encounter often. An ML model that overfits generates outcomes that are biased toward the training dataset. To reduce such biases, internal audit functions use testing data independent of the training dataset to validate the model's accuracy. </p><p>Auditors should also be cognizant of each algorithm's inherent limitations. For example, unsupervised learning algorithms may produce noisy results if the data elements are unrelated and have few or no common characteristics (i.e., no natural groups). Some algorithms work well with inputs that are relatively independent of one another but would be poor predictors otherwise.</p><p> <strong>Lack of Experience</strong> Organizations new to ML may not have examples of successful ML projects to learn from. Inexperienced practitioners can acquire confidence in their fledgling capabilities by first applying simple ML models to achieve better outcomes from existing solutions. After these initial successes, algorithms to improve the outcomes of these models can be implemented in stages. For instance, an ensemble learning approach can be used to improve on the first model. If successful, more advanced ML methods should then be considered. This progressive approach can also alleviate the initial skepticism often present in the adoption of new technology.</p><h2>The Future of Audit</h2><p>Machine learning technology holds great promise for internal audit practitioners. Its adoption enables audit functions to provide continuous assurance by enhancing their automated detection capabilities and achieving 100% coverage of risk areas — a potential game changer for the audit profession. The internal audit function of the future is likely to be a data-driven enterprise that augments its capabilities through automation and machine intelligence. <br></p>Ying-Choong Lee1
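The article's unsupervised scoring step — a per-payment anomaly score combined with 0-to-1 risk weights into a risk score — can be sketched as follows. This is a simplified, hedged illustration: the three feature names and their weights are assumptions, and a real payment model, as the article notes, might weight more than 150 features and use a proper clustering or isolation-based algorithm.

```python
# Sketch of weighted anomaly scoring for payments. The features
# (amount, hour_of_day, days_since_last) and their risk weights are
# illustrative assumptions, not the article's actual model.
from statistics import mean, stdev

RISK_WEIGHTS = {"amount": 1.0, "hour_of_day": 0.4, "days_since_last": 0.6}

def risk_scores(payments):
    """For each payment, measure how far each feature sits from the
    population mean (in standard deviations), weight that anomaly by the
    feature's risk weight, and sum into a single risk score."""
    stats = {}
    for f in RISK_WEIGHTS:
        vals = [p[f] for p in payments]
        stats[f] = (mean(vals), stdev(vals))

    scores = []
    for p in payments:
        score = 0.0
        for f, w in RISK_WEIGHTS.items():
            mu, sigma = stats[f]
            z = abs(p[f] - mu) / sigma if sigma else 0.0
            score += w * z  # weighted anomaly contribution of this feature
        scores.append(score)
    return scores

# Nine routine payments and one with an unusual amount, hour, and gap.
payments = [{"amount": 100 + i, "hour_of_day": 10, "days_since_last": 1}
            for i in range(9)]
payments.append({"amount": 5000, "hour_of_day": 3, "days_since_last": 30})
scores = risk_scores(payments)
```

The highest-scoring payments become the candidate exceptions; once auditors confirm which are true or false positives, those results can serve as the labels for the supervised stage the article describes.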
New U.S. Security Agency's Statement of Intenthttps://iaonline.theiia.org/2019/Pages/New-US-Security-Agencys-Statement-of-Intent.aspxNew U.S. Security Agency's Statement of Intent<p>​The U.S. federal government's new Cybersecurity and Infrastructure Security Agency (CISA) aims to be the nation's risk advisor, according to a <a href="https://www.dhs.gov/sites/default/files/publications/cisa_strategic_intent_s508c.pdf" target="_blank">strategic intent document</a> (PDF) released this month. The CISA was established within the Department of Homeland Security in 2018 to address threats to U.S. technology and physical infrastructure.</p><p>The CISA's mission is to "lead the national effort to understand and manage cyber and physical risk to our critical infrastructure," the document notes. "The 21st century brings with it an array of challenges that are often difficult to grasp and even more difficult to address," CISA Director Christopher Krebs writes in the document. He cites risk factors such as the nation's reliance on networked technologies, nature-based threats, and technology failures. </p><p>To that end, the CISA's guiding principles are:</p><ul><li>Leadership and collaboration with infrastructure and security partners.</li><li>Risk prioritization to secure "national critical" functions underlying national security, economic security, public health and safety, and the continuity of government operations.</li><li>Results oriented to reduce risk, respond to partners' requirements, and work toward common outcomes. </li><li>Respect for national values such as civil liberties, free expression, commerce, and innovation.</li><li>Unified mission and agency to address risks in a coordinated, cross-agency manner.</li></ul><p><br></p><p>The document's subtitle, "defend today, secure tomorrow," lays out the agency's twin goals. By defend today, the CISA seeks to defend against urgent threats and hazards. 
The objectives are to prevent or mitigate most significant threats to federal government networks and critical infrastructure, mitigate the impact of "all-hazards" events, ensure incident response communication, and mitigate significant supply chain and emerging threats.</p><p>The secure tomorrow goal is about strengthening critical infrastructure and addressing long-term risks. The aim is to identify and manage risks to critical infrastructure, as well as to provide technical assistance.</p><p>The CISA seeks to achieve these goals through risk analysis, risk management planning, information sharing, capacity building, and incident response. Resources for delivering these services include:</p><ul><li>Analysts, risk models, and technical alerts.</li><li>Collaborative planning teams and task forces.</li><li>Policy and governance actions.</li><li>Technical assistance teams and security advisors.</li><li>Deployed tools and sensors.</li><li>Grants and operational contracts.</li><li>Exercises and training.</li></ul><p><br></p><p>The strategic intent document lays out Krebs' priorities for the agency:</p><ul><li>China, supply chain, and 5G technologies.</li><li>Election security.</li><li>Soft target security such as for crowded places.</li><li>Federal agency cybersecurity.</li><li>Industrial control systems such as transportation systems, telecommunication networks, industrial manufacturing plants, electric power generators, oil and natural gas pipelines, and the Internet of Things.</li></ul><p><br></p><p>Among the CISA's operations are the National Risk Management Center and the National Cybersecurity and Communications Integration Center, which provides incident response capabilities to all levels of government as well as the private sector. </p>Tim McCollum0
Wrangling the Internet of Thingshttps://iaonline.theiia.org/2019/Pages/Wrangling-the-Internet-of-Things.aspxWrangling the Internet of Things<p>The Internet of Things (IoT) allows businesses to connect everything from the office printer to factory production lines via Wi-Fi, making it an ideal tool for organizations to exploit, and for employees to use effectively. And there appears to be no limit to what IoT technology is capable of delivering. </p><p>Driven partly by how simple it is to install and use the associated software and applications on people’s smartphones and tablets, the number of connected IoT devices will reach 50 billion worldwide in 2020, according to estimates from technology heavyweights like Cisco Systems and IT analysts such as Juniper Research. Research by Forrester indicates that businesses will lead the surge in IoT adoption this year, with 85% of large companies implementing IoT or planning deployments. </p><p>But such connectivity comes at a price. As IoT usage increases, so too do the associated risks. Simple devices rely on simple security, and simple protocols can be simply ignored. </p><p>A common problem is employees simply adding devices to the network, without informing the IT department — or without the IT team noticing. For example, Raef Meeuwisse, a UK-based cybersecurity consultant and information systems auditor, says that one security technology company revealed that when installing network security detection in new customer networks, it found that up to 40% of devices logged on to the network were IoT. “That was a surprise to those organizations’ executives and their IT departments,” he says.</p><p>Such anecdotes mean internal audit has a real job at hand to ensure that IoT deployments go smoothly and that the associated benefits are delivered. 
And the task is fraught with danger: The technology is still evolving, new risks are emerging, and controls to mitigate these risks often seem to be a step behind what is actually happening in the workplace.</p><h2>Warning Signs<br></h2><p>Information experts and standards-setters such as ISACA point out that because IoT has no universally accepted definition, there aren’t any universally accepted standards for quality, safety, or durability, nor any universally accepted audit or assurance programs. Indeed, IoT comes with warning notices writ large. According to ISACA’s State of Cybersecurity 2019 report, only one-third of respondents are highly confident in their cybersecurity team’s ability to detect and respond to current cyberthreats, including IoT usage — a worrying statistic given the proliferation of IoT devices. Industry experts and hackers have demonstrated how easy it is to target IoT-enabled office security surveillance systems and turn them into spy cameras to access passwords and confidential and sensitive information on employees’ computer screens (see “Targeting the IoT Within” below for examples of other IoT vulnerabilities). </p><p>Distributed denial-of-service (DDoS) attacks on IoT devices — which analysts and IT experts deem the most likely type of threat — are the best example of IoT device security and governance flaws. In 2016, the Mirai cyberattack on servers at Dyn, a company that controls much of the internet’s domain-name infrastructure, temporarily stalled several high-profile websites and online services, including CNN, Netflix, Reddit, and Twitter. What made that case unique was that the outages were caused by a DDoS attack mounted largely from multiple, small IoT devices such as TVs and home entertainment consoles, rather than from computers infected with malware. These devices shared a common vulnerability: They each had a built-in username and password that could be used to install the malware and re-task it for other purposes. 
The attack was the most powerful of its type and involved hundreds of thousands of hijacked devices. </p><p>“As is often the case with new innovations, the use of IoT technology has moved more quickly than the mechanisms available to safeguard devices and their users,” says Amit Sinha, executive vice president of engineering and cloud operations at cloud security firm Zscaler in San Jose, Calif. “Enterprises need to take steps to safeguard these devices from malware attacks and other outside threats.”</p><h2>Begin With Security</h2><p>Events like the Mirai attack make security a priority for internal auditors to review. Among the top IoT security concerns that experts identify are weak default and password credentials, failure to install readily available security patches, loss of devices, and failure to delete data before using a new or replacement device. The steps to rectify such problems are relatively simple, but they are “usually ignored or forgotten about,” says Colin Robbins, managing security consultant at Nottingham, U.K.-based cybersecurity specialist Nexor. </p><p>As a starter, he says, internal auditors should check that the business has a process to ensure that all IoT device passwords are unique and cannot be reset to any universal factory default value to minimize the risk of hacking. The organization should update software and vulnerability patches regularly, and devices that cannot be updated — because of age, model, or operating system — should be isolated once personal and work data has been removed from them.</p><p>“Organizations need to have conversations at the highest level of management about what IoT means to the business,” says Deral Heiland, IoT research lead at Boston-based cybersecurity firm Rapid7. Once they have done this, Heiland suggests they focus on detailed processes around security and ask key questions such as: What IoT has the organization currently deployed? Who owns it? 
How does the organization manage patches for these technologies, and how does it monitor for intrusions? What processes does the organization need for deploying new technologies?</p><h2>Technical Hygiene Standards</h2><p>Effective IoT security requires organizations to develop their own protocols and security specifications up front, Meeuwisse says. This ensures that “devices can either be integrated into particular security zones or quarantined and excluded from the possibility of getting close to anything of potential value,” he explains. </p><table class="ms-rteTable-default" width="100%" cellspacing="0"><tbody><tr><td class="ms-rteTable-default" style="width:100%;"><p><strong>Targeting the IoT Within</strong><br></p><p>In January 2017, the U.S. Food and Drug Administration issued a statement warning that certain kinds of implantable cardiac devices, such as pacemakers and defibrillators, could be accessed by malicious hackers. Designed to send patient information to physicians working remotely, the devices connect wirelessly to a hub in the patient’s home, which in turn connects to the internet over standard landline or wireless connections. Unfortunately, technicians found that certain transmitters in the hub device were open to intrusions and exploits. In a worst-case scenario, hackers could manipulate the virtual controls and trigger incorrect shocks and pulses, or even just deplete the device’s battery. Manufacturers quickly developed and deployed a software patch. 
</p><p>The case demonstrates the need for internal audit to check that Wi-Fi networks are secure, that default factory settings on any connected devices are not used, and that the organization, through the IT department, has patch management processes in place to check whether any devices have security updates that need to be installed.<br></p></td></tr></tbody></table><p>Meeuwisse adds that whether a business is manufacturing or simply installing IoT devices, having security architecture standards to ensure information security throughout the organization is aligned with business goals is a crucial first step. “Buying or designing technology before having a clear understanding of the security specification required is a dangerous path,” he says. “For any new type of IoT device, there should always be a risk assessment process in place to understand whether the device meets security requirements, needs more intensive scrutiny, or poses a significant potential risk.”</p><p>More widely, organizations need to examine “the basics” to ensure that they maintain their IT system’s “technical hygiene,” says Corbin Del Carlo, director of internal audit IT and infrastructure at Discover Financial Services in Riverwoods, Ill. For example, Wi-Fi access should be closed so only authorized and certified devices can use it, and there should be an inventory of devices that are connected to the network so the IT department knows who is using them. For additional security, IT should scan the network routinely — even daily — to check whether new devices have been added to the network and whether they have been approved. </p><p>Del Carlo also says internal auditors need to check that the organization’s IT architecture can support a potentially massive scale-up of devices wanting to access its systems and network quickly. “We’re talking about millions more devices all coming online within a year or two,” he says. 
“Can your IT system cope with that kind of increase in demand? What assurance do you have that the system won’t fail?”</p><p>Del Carlo recommends organizations draw up a shortlist of device manufacturers that are deemed secure enough and compatible with their IT architecture. “If you allow devices from any manufacturer to access the network, then you need the in-house capability to monitor the security of potentially hundreds of different makes and find security patches for them all, which can be very time-consuming,” he points out.<br></p><p>A list of approved manufacturers also can make it easier to audit whether the devices have the latest versions of security downloads. “Even if a particular manufacturer’s product proves to have vulnerabilities, it is much easier to fix the problem for all those devices than try to constantly monitor whether there are security updates for many different products made by dozens of manufacturers,” he says.</p><h2>Intrusive Monitoring</h2><p>It’s not only the organization’s security that internal auditors should consider. Auditors also should make management aware of potential privacy issues that some applications may present — especially those that feature GPS tracking, cameras, and voice recorders. “Tracking where employees are can be useful for delivery drivers, but is it necessary to track employees who are office-based?” Del Carlo asks. </p><p>An example is an IoT app that monitors how much time people spend at their desks and prompts them to take a break if they are there too long. Organizations could use that technology to monitor how frequently people are not at their desks, Del Carlo notes. “While this may catch out those who take extended lunch breaks, it may also highlight those who have to take frequent trips to the bathroom for medical conditions that they may wish to keep private,” he explains. 
“As a result, auditors should query such device usage.”</p><h2>Business Risks</h2><p>Yet while there is a vital need to make IoT security a priority, Robbins says organizations should not overlook whether management has appropriately scoped the business case for an IoT deployment, and how success or failure can be judged. “As with any other project, particularly around IT, managers can throw money at something they do not understand just because they think they need it, or because everyone else is using it,” he says. </p><p>Robbins cautions that poorly implemented IoT solutions create new vulnerabilities for businesses. “With IoT, it’s not data that is at risk, but business processes at the heart of a company,” he points out. “If these processes fail, it could lead to a direct impact on cost or revenue.”</p><p>According to Robbins, the success of IoT means a heavy — and “almost blind” — reliance on the rest of the “things” that support the technology working effectively within the supply chain. Take, for example, an IoT device that monitors bakery products made in an oven. That device may tell the operator that the oven temperature is 200 degrees and the baked goods have another 20 minutes of cooking time, he explains. </p><p>“But the problem is that you have no physical way of checking, or even being alerted, that the technology might be wrong or has been hacked, and that the settings and readings are incorrect,” Robbins says. “Everyone is relying on all the different parts of the supply chain — the app vendor, the cloud provider, and so on — maintaining security in a world where there are no agreed-upon standards or best practice. Talk about ‘blind faith.’” </p><p>IoT also increases the need for additional third-party and vendor risk monitoring, Del Carlo warns. This is because app developers may be collecting data from users not only to help inform design improvements but also to generate sales leads. 
</p><p>“Internal auditors need to think about the data that these vendors might be getting and how they may be using it,” Del Carlo explains. For example, developers may be exploiting user data to approach the organization’s competitors with products tailored to the competitor’s needs. “Internal auditors need to check what data developers may be collecting and why,” he advises.</p><h2>Early Best Practices</h2><p>Despite the absence of universally agreed-upon guidance for aligning IoT usage with business needs, some industry bodies have tried to promote what they consider to be either basic steps or best practice. For instance, in a series of blog posts, ISACA recommends that organizations perform pre-audit planning when considering investing in IoT solutions. It advises organizations to think about how the devices will be used from a business perspective, what business processes will be supported, and what business value is expected to be generated. ISACA also suggests that internal auditors question whether the organization has evaluated all risk scenarios and compared them to anticipated business value.</p><p>Eric Lovell, practice director for internal audit technology solutions at PwC in Charlotte, N.C., says internal audit should have a strong role in ensuring that IoT risks are understood and controlled, and that the technology is aligned to help achieve the organization’s business strategy. “Internal audit should ask a lot of questions about how the organization uses IoT, and whether it has a clear strategic vision about how it can use the technology and leverage the benefits from it,” he says.</p><p>As IoT is part of the business strategy, Lovell says internal auditors need to assess the business case for it. 
“Internal auditors need to ask management about the business benefits it sees from using IoT, such as improving worker safety, better managing assets, or generating customer insights, and how these benefits are going to be measured and assessed to ensure that they have been realized,” he advises.</p><p>Questions to ask include: What metrics does the organization have in place to gauge success or failure? Are these metrics in line with industry best practice? Are there stage gates in place that would allow the organization to check progress at various points and make changes to the scope or needs of the project? “Equally importantly, does the organization have the right people with the necessary skills, experience, and expertise to check that the technology is delivering its stated aims and is being used securely?” Lovell notes.</p><p>Lovell also says internal auditors need a seat at the table from the beginning when the organization embarks on an IoT strategy. “Like with any other project, internal audit will have less influence and input if the function joins the discussion after the project has already been planned, scoped, and started,” he explains. “Internal auditors need to make sure that they are part of those early discussions to gauge management’s strategic thinking and their level of awareness of the possible risks and necessary controls and procedures.”</p><h2>IoT’s Dynamic Risks</h2><p>Risks shift over time as technology innovations and the business and regulatory environment evolve. “It is pointless to think that the risks that you have identified with IoT technologies at the start of the implementation process will remain the same a couple of years down the line,” Lovell says. “Internal auditors need to constantly review how IoT is being used — and under what circumstances and by whom — and assess whether the technology is still fit for purpose to meet the needs of the business.” <br></p>Neil Hodge
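Several of the device-hygiene checks discussed in this article lend themselves to simple automation. As a minimal sketch of the routine scan-versus-inventory comparison Del Carlo recommends — the inventory fields, MAC addresses, and scan data below are hypothetical illustrations, not taken from any product mentioned above:

```python
# Sketch: flag devices seen on the network that are absent from the
# approved-device inventory. Field names ("mac", "owner", "ip") are
# hypothetical; a real check would read a scanner export and a CMDB extract.

def find_unapproved_devices(scanned, approved):
    """Return scanned devices whose MAC address is not in the approved inventory."""
    approved_macs = {d["mac"].lower() for d in approved}
    return [d for d in scanned if d["mac"].lower() not in approved_macs]

approved_inventory = [
    {"mac": "AA:BB:CC:00:00:01", "owner": "facilities", "model": "HVAC sensor"},
    {"mac": "AA:BB:CC:00:00:02", "owner": "security", "model": "door camera"},
]
todays_scan = [
    {"mac": "aa:bb:cc:00:00:01", "ip": "10.0.1.20"},
    {"mac": "aa:bb:cc:00:00:99", "ip": "10.0.1.77"},  # not in the inventory
]

for device in find_unapproved_devices(todays_scan, approved_inventory):
    print(f"Unapproved device {device['mac']} at {device['ip']} - investigate")
```

Run daily against fresh scan output, a report like this gives the IT department — and internal audit — a running answer to Heiland's questions about what IoT is deployed and who owns it.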
A Change in Mindsethttps://iaonline.theiia.org/2019/Pages/A-Change-in-Mindset.aspxA Change in Mindset<h3>How far have audit functions come in terms of data analytics usage?</h3><p><strong>Petersen</strong> Progressing audit analytics is a journey that doesn’t have an end, but I’m excited to hear organizations describe how they continue to progress year over year. These organizations know the direction they need to go, continue to raise the bar for themselves, and set new objectives to achieve. They face the same resource limitations many audit teams do, so they encourage all their auditors to progress, not just those assigned as the data analytics expert. </p><p><strong>Zitting</strong> Not far enough. Recently, my company’s State of the GRC Profession survey revealed 43% of professionals want to grow their data analysis skills, but those figures have been the same for years — if not decades. Leading audit teams that are willing to embrace change and take risks are indeed creating a new future by delivering and sharing successes in data analysis, advanced analytics, robotic process automation, and even machine learning/artificial intelligence; unfortunately, these leaders are the exception. They inspire us, yet other corporate functions like marketing, IT/digital transformation, security, and even risk management are leaving internal audit behind. <br></p><h3>What are examples, beyond typical usages, of analytics that auditors should be undertaking?</h3><p><strong><img src="/2019/PublishingImages/Dan-Zitting.jpg" class="ms-rtePosition-1" alt="" style="margin:5px;" />Zitting</strong> Let’s not write off the “typical usages” of data analytics, because the vast majority of audit teams aren’t even doing those. The key control areas that virtually every organization’s audit and internal control teams test are completely automatable, yet few seem to do it. 
Areas like user access, IT administrator activity (or other activity log testing), journal entry, payment, and payroll should never again be tested with anything but data analytics.</p><p>Beyond that, the universe of possibility for the data-savvy audit team is limitless. I’m seeing leading audit teams even turn analytics in on themselves — like doing textual analytics on the text of the past several years’ audit findings to indicate where risk is increasing or not being addressed. It’s incredibly impactful. I’ve also seen practitioners develop analytics that use machine learning to create “hot clusters” of employees who are at high risk of churn, or to see “hot clusters” of payments that could be bribes, money laundering, or sanction violations. </p><p><strong>Petersen</strong> How about running data analysis on the audit analytics program? Start by ascertaining how many audits contain some level of data analysis — sampling doesn’t count. Now compare that to how many should contain some analysis. I don’t know of any organizations that would find they should be doing analytics on 100% of their audits, but if they are honest, they’ll find a significant gap between those audits that could have some analytics performed and those that do.</p><p>Now that we have determined breadth of coverage, let’s determine depth of coverage. This is done by determining, for each audit that could have analytics performed on it, which analytics would ideally be performed. Internal audit should focus on those analytics it would be proud to report to the audit committee that it performed considering the risks and audit objective. Don’t be discouraged by the thought that internal audit can never achieve the coverage it has identified. 
Instead, plan to increase coverage each year.<br></p><h3>How can small audit functions that can’t afford a data scientist jump into data analytics?<br></h3><p><strong><img src="/2019/PublishingImages/Ken-Petersen.jpg" class="ms-rtePosition-1" alt="" style="margin:5px;" />Petersen</strong> Start with basic analytics functions. Audit leadership needs to lead the organization to continually progress the analytics being performed. Leverage those individuals in your organization who have an aptitude for analytics, and share successes, new ideas, and new ways of doing things within the team. Use familiar tools such as Excel along with audit analytics tools that are easy to use and learn. Leverage existing audit techniques across different types of audits. For example, testing for duplicate payments, separation of duties violations, and several other routines apply across many types of audits. Once you’ve determined how to identify these in one audit, this can be applied to other audits. Teams without a data scientist can still have a strong audit analytics program.</p><p><strong>Zitting</strong> Every audit function that can hire a single auditor can afford a person with data skills. The problem is that we accept the status quo of the short-term demands of internal audit’s stakeholders; thus, we elect to hire a “traditional” auditor over a person with technical data skills and the ability to think critically. Obviously, that is a necessity in real life, but it also illustrates that the “can’t afford” or “can’t find the skills” arguments are basically bad excuses that abdicate our responsibility as corporate leaders to evolve with the economic demands of the modern environment. Consider a complete shift in mindset. What if we were building a small data science team that had some audit skills instead of a small audit team with some data skills? 
Wouldn’t that change our perspective on staffing for a truly modern form of auditing?<br></p><h3>What skills should audit functions be looking for when hiring a data analytics expert?</h3><p><strong>Zitting</strong> Most importantly, audit functions should be looking for critical thinking skills. Technical skills in data analytics can be taught. What is difficult to teach is critical thinking, particularly as it relates to knowledge of audit process/risk assessment/internal control, knowledge of the business and its strategy/operations, and the ability to navigate corporate access challenges — access to data and executive time — by asking really smart questions. Next, look for an understanding and desire to work in an Agile mindset. Specific tools and approaches will always change, but if the candidate understands Agile methodology — minimum viable product, sprints and iteration, continuous improvement — he or she will be able to deliver business results in both the short and long term regardless of issues of tool preference. </p><p><strong>Petersen</strong> Communication and collaboration skills can exponentially increase the team’s analytics effectiveness. Without these skills, there is one expert off doing analytics by him or herself. However, with these skills and easy-to-use analytics tools, the expert can guide the entire team through its analytics needs, greatly increasing the overall effectiveness of the team. When not providing this guidance, the expert can work on more complex analytical projects. This approach also increases employee satisfaction of both the expert and the other team members.<br></p><h3>What does a best-in-class audit function that is fully embedded in data analytics look like?</h3><p><strong>Petersen</strong> These teams apply a quantitative analysis and measurement to their audit analytics. They do this by measuring the depth and breadth of their analytics coverage. 
They have strong leaders who promote the value of analytics and make it a part of the team’s culture. They also understand that there is no finish line; the analytics program will continually evolve and grow. Leaders of these teams incorporate all team members into the analytics process, understanding that some have a stronger aptitude for it than others, but still expecting all to participate, and they set appropriate analytics goals for each. Not only are organizations like this best-in-class with respect to their analytics functions but, as a surprise to some, they also have happier team members.</p><p><strong>Zitting</strong> The best audit organizations already are demonstrating that their core skill is data analysis. It’s the only way to get large-scale insight on risk, control, and assurance across globally dispersed organizations using constrained resources. Best-in-class audit functions don’t just embed data analytics; they provide 90% of all assurance they report through analytics and reserve “traditional” auditing for manual deep dives into areas of significant risk or deviation from policy, regulation, or other standards of control. For example, one of our clients moved its entire internal audit team into the core business operation and began rebuilding internal audit from scratch in the last two years. This was because audit was providing so much value via its complete focus on data and analytics that the business demanded to consume the function, and the audit committee agreed to rebuild. That’s one example of internal audit driving real value through a data-centric mindset and practice.<br></p>Staff
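The reusable routines Petersen mentions, such as duplicate-payment testing, can be prototyped with very little code. A minimal sketch, assuming a simple payment extract with hypothetical field names ("vendor", "invoice", "amount") — real extracts will differ:

```python
# Sketch of a duplicate-payment analytic: normalize vendor and invoice
# strings, group payments by (vendor, invoice, amount), and report any
# group that occurs more than once. Data below is illustrative only.
from collections import defaultdict

def find_duplicate_payments(payments):
    """Return groups of payments sharing a normalized (vendor, invoice, amount) key."""
    groups = defaultdict(list)
    for p in payments:
        key = (p["vendor"].strip().lower(),
               p["invoice"].strip().lower(),
               round(p["amount"], 2))
        groups[key].append(p)
    return {k: v for k, v in groups.items() if len(v) > 1}

payments = [
    {"vendor": "Acme Ltd", "invoice": "INV-100", "amount": 5000.00},
    {"vendor": "acme ltd ", "invoice": "inv-100", "amount": 5000.00},  # likely duplicate
    {"vendor": "Beta Inc", "invoice": "INV-200", "amount": 750.00},
]

for key, group in find_duplicate_payments(payments).items():
    print(f"Possible duplicate: {key} appears {len(group)} times")
```

Once a routine like this works in one audit, it can be pointed at another audit's payment data with no changes — which is exactly the cross-audit reuse Petersen describes.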
Editor's Note: Fortress in the Cloudhttps://iaonline.theiia.org/2019/Pages/Editors-Note-Fortress-in-the-Cloud.aspxEditor's Note: Fortress in the Cloud<p>​Cloud computing has quickly soared to become a dominant business technology. Public cloud adoption, in fact, now stands at 91% among organizations, according to software company Flexera's State of the Cloud Survey. And it's only expected to grow from there. Analysts at Gartner say more than half of global enterprises already using the cloud will have gone all-in by 2021. </p><p>Collectively, that places a lot of responsibility for organizational data outside the enterprise. And while cloud migration can lead to significant efficiencies and cost savings, the potential risks of third-party data management cannot be ignored. Reuters, for example, recently reported that several large cloud providers were affected by a series of cyber intrusions suspected to originate in China. Victims, Reuters reports, include Computer Sciences Corp., Fujitsu, IBM, and Tata Consultancy Services. The news agency's chilling quote from Mike Rogers, former director of the U.S. National Security Agency, emphasizes the gravity of these breaches: "For those that thought the cloud was a panacea, I would say you haven't been paying attention." </p><p>As noted in this issue's cover story, <a href="/2019/Pages/Security-in-the-Cloud.aspx">"Security in the Cloud,"</a> growing use of cloud services creates new challenges for internal auditors. Writer Arthur Piper, for example, points to issues arising from the cloud's unique infrastructure and the "lack of visibility of fourth- and fifth-level suppliers." He also cites the cloud's opaque nature and rapid pace of development as potential areas of difficulty. 
Addressing these issues, he says, requires internal audit to work with a wide range of business stakeholders — especially those in IT — and to secure staff with the right type of expertise.</p><p>The need to focus on these areas is supported by a recent report from the Internal Audit Foundation, Internal Auditors' Response to Disruptive Innovation. Among practitioners surveyed for the research, a consistent theme emerged with regard to cloud computing — to be successful, internal audit should build relationships with IT before moving to the cloud. Multiple respondents also recommended bringing in personnel with specialized IT skills to facilitate the evaluation of cloud controls. Moreover, they noted the importance of evaluating not only standard internal controls in areas like data security and privacy, but soft controls, such as institutional knowledge, as well.</p><p>Of course, cloud computing is only the tip of the iceberg when it comes to challenges around disruptive technology. Among other IT innovations affecting practitioners, artificial intelligence and the Internet of Things are equally impactful. We examine each of these areas in <a href="/2019/Pages/Stronger-Assurance-Through-Machine-Learning.aspx">"Stronger Assurance Through Machine Learning"</a> and <a href="/2019/Pages/Wrangling-the-Internet-of-Things.aspx">"Wrangling the Internet of Things,"</a> respectively. And be sure to visit the <a href="/_layouts/15/FIXUPREDIRECT.ASPX?WebId=85b83afb-e83f-45b9-8ef5-505e3b5d1501&TermSetId=2a58f91d-9a68-446d-bcc3-92c79740a123&TermId=75ea3310-ffa9-45b9-9280-c69105326f09">Technology section</a> of our website, InternalAuditor.org, for insights and perspectives on other IT-related developments affecting the profession.</p>David Salierno
Security in the Cloudhttps://iaonline.theiia.org/2019/Pages/Security-in-the-Cloud.aspxSecurity in the Cloud<p>Although Jean-Michel Garcia-Alvarez was used to working as a high-level internal auditor in the financial services sector, 2015 presented him with several novel challenges. First, he was appointed head of internal audit — and later also data protection officer — at a new fintech challenger bank in London called OakNorth. It had received regulatory approval from both the Prudential Regulation Authority and the Financial Conduct Authority in August 2015 — one of only three U.K. banks to do so in the past 150 years. Second, OakNorth wanted to be the first U.K. bank with a cloud-only IT infrastructure, which was not an area he specialized in during his previous audit roles at Nationwide Building Society, RBS, or Barclays.</p><p>Garcia-Alvarez realized that traditional audit skills would be of limited use because of the cloud’s newness and evolving nature, with little precedent in the scope and range of how to approach it as an internal auditor. So, he decided to obtain an IT audit certificate from the U.K.’s Chartered Institute of Internal Auditors (CIIA). It boosted his IT audit skills and forced him to get to grips with how to approach cloud auditing and security. It also made him a credible security player in the business.</p><p>At the same time, he says internal auditors must adhere to the fundamental remit of audit, which, for OakNorth, is the CIIA’s Financial Services Code. One of the first sentences of that document says internal audit’s primary role is to help senior management protect the assets of the business — in this case from hacking, data breach, and leakage.</p><p>“That is absolutely the role of internal audit in cloud security,” Garcia-Alvarez says. 
When businesses are migrating to and operating in the cloud, internal audit needs to provide assurance that the cloud infrastructure is safe, secure, and able to meet the firm’s objectives — not just now, but in the future. “The way to do that is to be embedded as the third line of defense and to provide real-time feedback on risk and controls, and to assure the board that you are mitigating risks with data — not creating new ones.” </p><p>While cybersecurity has long been on auditors’ lists of regular assignments, securing today’s cloud poses fresh challenges. The very structure, speed, and opacity of the cloud demand a focus away from traditional auditing. Having systems in place to deal with data breaches, data loss, and ransomware attacks is mostly standard today, but issues arising from the unique infrastructure of the cloud, the lack of visibility of fourth- and fifth-level suppliers, and the need to work in tandem with both the cloud provider’s own security teams and a wider range of stakeholders across the business are growing challenges for internal auditors dealing with cloud security. </p><h2>Changing Purpose</h2><p>OakNorth’s journey is a good example of how the speed of change impacts internal audit’s security concerns. Like many businesses, OakNorth used Amazon Web Services (AWS) as its cloud provider in 2016. Because AWS is a large global player, Garcia-Alvarez was happy that it could be responsible for the security of the cloud, while OakNorth was responsible for security in the cloud. That theoretically makes it easier for internal audit because the function can regularly check and rely on the up-to-date certifications maintained by the cloud provider. Audit can then focus almost entirely on the internal security control environment. In reality, though, for cloud security to be robust, auditors also need to keep up with changing laws, rules, and regulator expectations. </p><p>“Those can change very quickly,” he says. 
In 2016 when OakNorth migrated to the cloud, the U.K. financial regulator was happy with the decision and with the company’s cloud provider — because it was big, safe, and secure. But when other banks followed suit by 2017, the regulator decided it was a potential concentration risk. If AWS went down, it would take a huge slice of the U.K. financial services sector with it. As a result, OakNorth moved to a multi-cloud solution for all of its client-facing technology.</p><p>From the outset, OakNorth used cloud data centers, provided by AWS, in several locations in Ireland, with an additional fail-safe elsewhere in Europe. “That one is like a bouncy castle,” Garcia-Alvarez says. “The shell is there, but the engine is off. Turn on the engine and it will be fully blown up and working in a matter of hours.” Just to be sure, the IT team rebuilds the core banking platform from scratch at a new location in Europe once a year, with internal audit providing independent assurance over the exercises. “It is time-consuming and expensive, but at least we know that the bank is safe.”</p><h2>Getting in Early</h2><p>Cloud downtime is not a fantasy risk. In February 2017, for instance, AWS services on the U.S. East Coast experienced failure. While reports on technology news site <em>The Register</em> suggested the servers were down only about half an hour, some customers reportedly could not get their data back because of hardware failure. Another outage in March 2018 affected companies such as GitHub, MongoDB, NewVoiceMedia, Slack, and Zillow, according to CNBC.</p><p>James Bone, a lecturer at Columbia University and president of Global Compliance Associates in Lincoln, R.I., says that is just one of many reasons internal auditors should be involved early in any cloud deployment. “I don’t believe that internal auditors should be deciding which products to use, but I do think they should be very much involved in the selection process,” he says. 
“They need to understand the service model, what is being deployed, and how they are planning to use the services. The platform that they use will determine, to a large part, the risk exposure to the firm.”</p><p>That is because the choice of platform governs what data will be transitioned, if any will stay on the premises, access administration, business continuity plans, data breach response, ransomware strategy and response, the frameworks the service provider uses for cloud security, the frequency of monitoring, contractual agreements, and many other factors. Auditors need to be on top of the situation to raise red flags before security risks crystallize. Bone says, for instance, that he has heard stories of service providers failing during a transition to the cloud, without a backup in place from which to restore the client’s data. In this example, organizations need to know what the recovery plan is and, crucially, who is responsible for it.    </p><h2>Sharing Responsibility</h2><p>“These are shared security and operational relationships between the cloud provider and the business,” Bone says. “So it is about clearly separating the different lines of accountability and responsibility at an early stage.” That includes sharing operational performance metrics and having clear escalation processes for data breaches, outages, and other security issues where the responsibilities are set out clearly between the cloud provider and the business. The internal audit team must have a realistic understanding of its own and the business’s capabilities if those measures are to be effective. “If the firm and the audit team are not particularly agile, can they use the vendor to take up some of that role?” he asks. </p><p>The opaque nature of what goes on in the cloud service provider’s business is a particular worry for internal auditors. “The biggest problem in these virtual environments is that the distance between control and assurance gets wider,” he says. 
Bone has been researching this idea for about four years. In digital environments, he says, risk and audit professionals have been used to testing applications because in most cases the physical hardware and data are available to see, touch, and analyze. </p><p>“As we move to a boundaryless environment, we are creating a distance between our ability to recognize a problem and having to rely on others to tell us there is a problem,” he says. “That distance impacts response time, and our ability to develop and put in place even more robust controls, because we are further away from the problem. This is an underappreciated risk and is getting larger because firms that are providing these services are getting better at managing their own risk, while as businesses go further into the cloud and have multiple cloud providers, they are becoming more removed from core processes.”</p><h2>Potential Headaches</h2><p>For Fred Brown, head of the critical asset management protection program at HP in Houston and former head of IT audit at the firm, dealing with cloud security while working with such shared services can create “rather large challenges.” </p><p>“The more you open your environment, the more you have to stay on top of security,” he says. Over the last couple of years, HP has been working toward being a top quartile security organization, he explains. And Brown’s cyber team has grown 70% during that time. The business has been aggressively moving to cloud services — including infrastructure as a service, platform as a service, and software as a service. Implementing a 100% review of all suppliers that would include all cloud instances throughout the business means doing a detailed security check of more than 2,000 suppliers across the enterprise. 
</p><p>To speed up the process, HP has contracted with a third-party assessment exchange, CyberGRX, which describes itself as supplying “risk-assessment-as-a-service.” Any subscriber can have a supplier risk-assessed — once the results are in, users can view them via the exchange. The process is integrated into HP’s inherent risk-scoring program, so that all vendors except those with the highest inherent risk score are assessed by CyberGRX. The vendors with the highest inherent risk are risk-assessed by internal resources. This process represents a new initiative at HP, and so far it has produced useful reports and helped the company tackle a backlog of risk assessments.</p><p>“This is removing an entire blind spot when it comes to risk,” Brown says. “Even if you have 100 suppliers who you haven’t assessed, with many connected to your company’s critical assets, whether it is employee data, or something else — if you haven’t assessed them, you have no idea what their risk profile really looks like.”</p><p>Brown says one problem is that whether a cloud-based supplier is AWS or a small online education provider, if it is managing critical data, the threat to the business is the same. With many cloud providers now outsourcing parts of their own operations, HP is putting in extra effort on fourth- and fifth-party risk management. That is why having someone track the cloud supplier landscape is critical to managing security risk, he says, enabling the organization to identify what is going on and maintain control over the process. This challenge is amplified in a company such as HP that was already complex when it began outsourcing to cloud service providers.</p><h2>Working Across the Business</h2><p>New suppliers need to have up-to-date and formal self-attestation certificates that follow recognized standards, such as System and Organization Controls (SOC) 2 reports and certification against the International Organization for Standardization’s ISO 27001. 
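The triage rule Brown describes, in which vendors above an inherent-risk threshold are assessed by internal resources while the rest are routed to the assessment exchange, can be sketched as a simple routing function. This is a hypothetical illustration: the threshold, the scoring scale, and the vendor names are assumptions for the sketch, not details of HP's actual program.

```python
# Toy model of inherent-risk vendor triage: the highest-risk vendors are
# assessed in-house; everything else is routed to an external assessment
# exchange. Threshold and score scale are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Vendor:
    name: str
    inherent_risk: int  # illustrative scale: 1 (low) to 100 (high)

def triage(vendors, internal_threshold=80):
    """Split vendors into internal vs. exchange assessment queues."""
    internal = [v for v in vendors if v.inherent_risk >= internal_threshold]
    exchange = [v for v in vendors if v.inherent_risk < internal_threshold]
    return internal, exchange

vendors = [
    Vendor("payroll-processor", 92),  # touches employee data
    Vendor("online-training", 35),
    Vendor("cloud-storage", 88),
    Vendor("newsletter-tool", 12),
]
internal, exchange = triage(vendors)
print([v.name for v in internal])  # highest-risk vendors, assessed in-house
print([v.name for v in exchange])  # the rest go to the assessment exchange
```

The design point is that the expensive resource (internal assessors) is reserved for the tail of the risk distribution, while the exchange scales the long tail of lower-risk suppliers.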
To make sure a business division or manager does not randomly contract with a new cloud provider, Brown’s team has what he calls a “cast-iron interlock” with procurement. Procurement knows what HP’s cloud security requirements are, and they must be included in any new contractual arrangements. In fact, Brown describes the contracts as “living,” because they point to the security requirements, which HP can update without changing the actual contract itself.</p><p>Working with AWS, HP has created a way of centralizing group security policies through the IT infrastructure. The main cloud instance has all of the group policies established — any new instance sits beneath this “parent” and effectively inherits its security policies automatically. “Every time you make a change to the group policy, it cascades to all the instances that are underneath that,” Brown explains. Non-AWS cloud instances go through the new procurement system as described earlier.</p><p>As cloud computing becomes synonymous with organizations’ IT infrastructures, internal auditors need to work more collaboratively and strategically, according to Scott Shinners, partner of Risk Advisory Services at RSM in Chicago. That will mean audit working increasingly not just with IT and IT security, but with procurement, legal, risk management, and the board.</p><p>“The audit committee has to see cloud security in the audit plan, and it also has to be present in the nature of the additional conversations you’re having with management,” he says. “It should come up not just after implementation, but before in strategy setting and so on.” Moreover, if internal audit discovers cloud instances in parts of the business that are not meant to have them, it can feed back to IT and risk management.</p><p>Internal audit also needs to work closely with the audit committee as cloud migration, almost inevitably, leads to abandoning a large percentage of the audit plan. 
“That is where the really good engagement with the audit committee comes through,” Shinners says. “How willing is the audit committee to support a trade-off to reduce assurance on moderate risk areas in order to have internal audit spend more of its resources on some of the cutting-edge stuff that is emerging?”</p><p>Performing third-party, independent assessments of cloud security and thinking about the underlying controls on data security, access management, breach response plans, and so on, is just the minimum internal audit can do, he says, because that only provides a snapshot in time in a fast-moving area. “The No. 1 way that internal audit can be successful is working with the second line of defense to build a culture around data protection that is pervasive enough to be successful in an environment that is so fast moving,” he says. “Making sure risk management gets feedback to know the culture is working is right up internal auditors’ alley.”</p><h2>Skills and Expertise</h2><p>CAEs may also need to reach outside of their organizations to secure audit staff with the right level of skills and qualifications, says Ruth Doreen Mutebe, head of Internal Audit at Umeme, Uganda’s largest electricity distributor. She recommends building partnerships with technology and information security institutes, such as ISACA, and universities to help identify good candidates.</p><p>“Cloud auditing involves rare skill that takes time to build,” she says, especially because it requires people with a good grasp of technical issues who can also communicate those concepts at a basic level to management. In addition to attracting and training staff, a CAE has to be able to retain them after that initial investment has been made.</p><p>Mutebe’s approach is to recruit a competent IT security auditor — even if a premium price has to be paid — who can effectively audit and guide management on aspects of cloud security. 
In addition, she encourages her technical staff members to pass on their knowledge to the entire audit team.</p><p>“That could include embedding cloud security procedures into what would have been non-IT audits to build capacity and where resources allow, attaching nontechnical internal auditors to support basic tests on cloud security audits,” she says. Where gaps remain, outsourcing and co-sourcing arrangements with clearly established service level agreements can be used. “Even there, CAEs should encourage the outsourced service provider to train the internal audit staff,” she says.</p><h2>Keeping Up With Change</h2><p>Cloud security is moving at a rapid pace, much like other technological changes in businesses today. For internal auditors, that means a focus on critical thinking, learning how to stay current in their industries, and developing a willingness to team up across the business and beyond to form effective alliances. While such an open approach to providing assurance may be new to many auditors working in more traditional environments, it is likely to be a crucial step to take if organizations are to deal with the growing complexity of their cloud initiatives. </p>Arthur Piper1
The Ever-expanding Cloudhttps://iaonline.theiia.org/2019/Pages/The-Ever-Expanding-Cloud.aspxThe Ever-expanding Cloud<p>​Do internal auditors know what's in their organization's cloud? There's probably more to it than they or their IT security colleagues realize, according to volume one of Symantec Corp.'s <a href="https://resource.elq.symantec.com/LP=7326" target="_blank">2019 Cloud Security Threat Report</a>.</p><p>Dependence on the cloud is growing, the report notes. More than half of the 1,250 security decision-makers who responded to the global survey say their organizations have moved their computing workload to the cloud. And 93% say their organizations store data in multiple environments, distributed relatively evenly among private cloud, public cloud, on-premises, and hybrid cloud setups. </p><p>That complexity is making it hard for organizations to keep track of how much data they are storing in the cloud — and where. On average, respondents say their organizations' employees are using 452 cloud applications. However, Symantec estimates that organizations actually have an average of 1,807 shadow IT apps.</p><p>And if organizations can't see their cloud apps and data, they can't secure them. More than half of respondents say their organization's cloud security practices aren't keeping pace with the proliferation of cloud apps. </p><p>"The [security] gap created by cloud computing poses a greater risk than we realize, given the troves of sensitive and business-critical data stored in the cloud," says Nico Popp, senior vice president, Cloud and Information Protection, at Mountain View, Calif.-based Symantec. </p><h2>Security Challenged</h2><p>Popp says the cloud, itself, isn't increasing the problem with data breaches. A bigger problem is immature security practices, which nearly three-fourths of respondents blame for at least one cloud security incident in their organizations. 
More than 80% say their organizations lack processes to respond to cloud security incidents successfully. Just one in 10 say their organizations can analyze cloud traffic effectively. </p><p>Cloud security is a capacity problem, as well. More than 90% say their IT security teams can't keep up with all the cloud workloads in their organizations. Most don't have the cloud security manpower to deal with all alerts — organizations respond to just one-fourth of alerts, respondents say. In addition, 93% say their organizations need to enhance cloud security skills.</p><p>The report notes a third culprit: risky employee behavior such as using personal accounts and having weak passwords. This behavior sets the stage for attacks using "camouflaged" files or aimed at taking over user accounts. Another behavior problem is oversharing of data. Respondents estimate that one-third of files in the cloud shouldn't be there. </p><h2>Threat Watch</h2><p>The Symantec report lists several threats to cloud systems, including a recent trend of cross-cloud and malware injection attacks. Still, unauthorized access accounts for nearly two-thirds of cloud security incidents. "Digging deeper, companies are underestimating the scale and complexity of cloud attacks," the report notes. </p><p>For example, only 7% of respondents say account takeover is among their biggest cloud risks, yet Symantec says its data reveals that 42% of risky behavior can be attributed to a compromised cloud account.</p><p>Respondents say they know criminals are taking advantage. Nearly 70% say they have found evidence that their organization's data has been for sale on the Dark Web. </p><h2>Clearing Skies</h2><p>Despite looming threats, organizations can act to ensure a better forecast for their cloud operations. 
These actions include:</p><ul><li>Developing a cloud governance strategy to enforce security policies across on-premises and cloud environments.</li><li>Adopting a "zero-trust" model that protects all data and implements controls at all points of access.</li><li>Promoting shared responsibility encompassing not only the cloud provider and IT security department, but also executives and all employees.</li><li>Leveraging automation and artificial intelligence to analyze potential threats and respond to incidents.</li><li>Moving to a DevSecOps approach in which security practices are embedded into all application development. </li></ul><p><br></p><p>With cloud reliance expanding and business processes becoming digitized, organizations "need to re-evaluate their actual versus perceived risks," the report advises. To address these risks, the report recommends complementing technology solutions by adopting security best practices "at the human level" to confront cloud threats.</p><p><em>To learn more, read </em>Internal Auditor<em>'s August issue cover story, <a href="/2019/Pages/Security-in-the-Cloud.aspx">"Security in the Cloud."</a></em></p>Tim McCollum0
Peace in Our Timehttps://iaonline.theiia.org/2019/Pages/Peace-in-Our-Time.aspxPeace in Our Time<p>Too many organizations use internal audit results to drive priorities for the IT function, which can have a devastating effect on morale. This approach sets an example for the entire organization about how to get systems-related objectives met. Initially, this can be benign as leaders try to do the right thing and help uncover systems issues that need attention. Eventually, pointing the auditors to real or suspected issues allows them to elevate any project to the highest priority, whether it is strategic or not.</p><p>For example, a software company starved back-office systems in favor of product development. As a result, IT fell seriously behind in patching internal production systems. Because the organization was audit-driven, at the next opportunity, management pointed auditors at patching, and the inevitable findings in patch management became the flag around which any desired project was wrapped to secure new funding. Step one: Hold IT accountable for not patching that system. Step two: Secure funding to “fix IT’s mess.”</p><p>Allowing audits to drive strategy wastes time and money, and robs management of the audit’s real value — helping management validate that it is appropriately addressing risks to business processes. When the audit becomes the key objective, performing audits becomes an essential business process on its own. This mistake creates the potential for a wildly inappropriate scope that gives the IT staff the sense that audits are never-ending and self-serving. </p><h2>Fear and Loathing</h2><p>These issues can lead to audit fatigue and poorly executed audit activities. Before long, management is spending its time and attention fixing problems with audits instead of fixing problems found by audits.</p><p>In another example, a large financial services company purchased a much smaller company in an adjacent but highly regulated space. 
As is often the case, the smaller company had a much lower profile than the larger company, but that changed once it was part of a larger organization. The new management, lacking experience as a highly regulated entity, began to ramp up audits to get ahead of the regulators. As operational requirements competed with audit requests, “just get it done” replaced “do it right.” At some point in this dysfunctional downward spiral, “do whatever the auditor says to get this over with” became the strategy to end the pain. </p><p>This example provides context for the skepticism, distrust, and outright fear senior executives and IT staff members have about audits. Some worry about getting in trouble for doing something wrong. Many view the time spent on audit requests as wasted time or busy work. This fear and distrust of audits naturally extends to the auditors, leading to an “us versus them” mentality. Both sides dig in and spend more time protecting their flank than solving their problems. </p><p>Some IT departments assign auditors “handlers” to choreograph activity, coach process owners to provide guarded answers, and quickly escalate issues, causing a bottleneck within leadership. Inexperienced auditors bring poor time management skills, poorly thought-out evidence requests, and negative attitudes to audits that put everyone on guard. Auditors then spend extra time gathering overwhelming evidence of control failure, and IT staff fabricates control evidence.</p><p>In addition to driving poor decision-making when used unwisely, audits often veer off track. In such cases, people too close to the situation sometimes focus on the audit as the key objective rather than managing the business process under audit. Besides these strategic mistakes, scope creep, poor communication, distrust among teams, and inexperience can plague any project and amplify any problems with an audit because of the extra scrutiny on the outcome. 
</p><p>In some organizations, IT may be severely underfunded and so far behind in resolving previous audit findings that the department gets accustomed to adding the next set to its ever-expanding project list. This forces leadership to spend so much time prioritizing and re-prioritizing work that audit failure becomes the de facto driver for funding. This, more than control failures, may be the finding that the audit should reveal.</p><h2>The Path to Peace</h2><p>It doesn’t have to be like this. When used appropriately to validate assumptions and uncover blind spots, the audit program is a crucial asset for management and plays an essential role in governance. Here are 10 tips to help internal auditors, management, and IT employees get on the right track.</p><p><strong>Audit team</strong> The audit team can become better partners to IT by taking these steps:<br></p><ul><li><em>Agree with senior leadership on the strategy and priorities of the audit program.</em> Establish priorities and understand where to focus audits based on the risks presented by the critical business processes.</li><li><em>Ensure each audit focuses on making the business process better, not finding problems</em>. Internal audit should keep this goal in mind as it sets audit objectives, determines scope, and frames findings. Always solicit recommendations for improvement from management. </li><li><em>Help the organization navigate audits and examinations by external organizations (within the limits of independence).</em> This is particularly important as it pertains to audit scope. For example, it’s not helpful to have nonregulated businesses examined by regulators. It wastes time and exposes the organization to inappropriate jeopardy. Auditors should make sure all parties agree to the scope before the audit starts. </li><li><em>Agree up front on the criteria for identifying the required evidence. 
</em>These criteria include sample selection criteria, the duration of the assessment, and the amount of evidence required to validate each test objective.</li><li><em>Agree on the process and tools to be used for requesting and receiving the evidence. </em>Agree on how quickly evidence is to be gathered once requested.<br><br></li></ul><p><strong>Management</strong> IT management can demonstrate transparency and respect for the audit process by:</p><ul><li><em>Avoiding assigning junior people to handle examiners or auditors.</em> When management tries to offload audit responsibility to the least useful resource, it almost always has a negative impact.</li><li><em>Not coaching employees on how to be coy with auditors. </em>Internal auditors are trained to spot inconsistency and lack of transparency. Trying to hide details from auditors is unprofessional and causes them to dig deeper in that area.<br><br></li></ul><p><strong>Employees</strong> IT staff members who are asked to support audit activities can establish trust by taking these steps:</p><ul><li><em>Don’t assume your competence is being questioned.</em> “I don’t know, but let me find out for you” is a better answer than guessing.</li><li><em>Don’t try to sound like a lawyer. </em>The best way to be understood is for employees to use the language and style that is comfortable to them. The surest way to get management’s attention — and not in a good way — is to call a minor testing deviation a “material weakness.”</li><li><em>The auditor is not a whistleblower hotline. </em>Managers should remind employees to bring internal issues to their manager or a neutral member of the management team.</li></ul><h2>Look in the Mirror<br></h2><p>Internal auditors should ensure their organization doesn’t take a dysfunctional audit approach. 
They should review their audit strategy to make sure it addresses business process risk, provides the necessary governance assistance to management and the board, and addresses the organization’s regulatory requirements. They shouldn’t let audits drive the business. <br></p>Bill Bonney1
The Threat Huntershttps://iaonline.theiia.org/2019/Pages/The-Threat-Hunters.aspxThe Threat Hunters<p>​They're on the hunt, in companies around the world. Combining technology tools with detective skills, they are hunting for hidden adversaries on their networks. And their numbers are growing.</p><p>More than four in ten organizations responding to the <a href="https://www.domaintools.com/content/sans_threat_hunting_2018_survey_report.pdf" target="_blank">SANS 2018 Threat Hunting Survey</a> (PDF) say they conduct continuous threat hunts, up from 35% in information security training firm SANS Institute's 2017 study. More than one-third commence such hunts to look for underlying problems in response to a security event.</p><p>Their aim is to root out intruders, who can dwell on a network for an average of more than 90 days before they are detected. "Most of the organizations that are hunting tend to be larger enterprises or those that have been heavily targeted in the past," according to co-authors Robert M. Lee, a SANS instructor, and Rob Lee, curriculum lead at the institute. SANS surveyed 600 organizations for the report.</p><p>Threat hunting goes well beyond the intrusion detection most organizations rely on to discover security breaches. The SANS report defines it as an iterative approach for searching for and identifying adversaries on an organization's network. It's about combining threat intelligence and hypothesis generation to hone in on the most likely locations that intruders will target. </p><p>Threat hunting can be effective, the report notes. For example, 21% found four to 10 threats during threat hunts. Nearly 17% found as many as 50 such threats.</p><h2>Intelligence Is Key</h2><p>One reason for threat hunting's effectiveness is that hunters are harnessing better threat intelligence, the report finds. Most respondents (58%) say they rely on intelligence generated internally based on previous incidents. 
Moreover, 70% tap into intelligence from third-party sources such as anti-virus signatures.</p><p>"Nothing is more valuable than correctly self-generated intelligence to feed hunting operations," the authors say. However, organizations without such capabilities may need to turn to third parties. In fact, they recommend blending the two forms of intelligence as a way to reduce adversary dwell times.</p><h2>People and Technology</h2><p>Still, respondents depend most on alerts from network monitoring tools for their threat intelligence, which the authors point out isn't really threat hunting — a common misconception. This reliance on sensors may indicate that organizations still see threat hunting as a technology solution. The survey results bear this out, with more than 40% prioritizing technology investments for threat hunts versus 30% for qualified personnel. </p><p>The emphasis on technology is misplaced, the authors say. Yes, threat hunters depend on automation to do things faster, more accurately, and at greater scale. "However, by its definition, hunting is best suited for finding the threats that surpass what automation alone can uncover," they stress. Instead, technology and people must be intertwined.</p><p>The authors recommend that organizations prioritize recruiting and training skilled staff for threat hunts. In particular, they say such professionals are more likely to detect threats and create tools they will need to be effective. </p><p>Respondents say the baseline skills for threat hunters are network, endpoint, threat intelligence, and analytics. More advanced capabilities include digital forensics and incident response.</p><h2>Hunting Tools</h2><p>Hunters need weapons, and this is where technology tools come into use. Nine out of 10 respondents say their threat hunters use the organization's existing IT infrastructure tools, while 62% have developed customized tools. 
</p><p>However, the authors question whether these tools are providing the view of the network needed for successful hunts, noting that they often are detection-based. Such tools may not find all the intruders who have breached the network, they say.</p><p>Whatever their tools, the report notes that threat hunting can be resource-intensive and requires an emphasis on analysis and developing hypotheses about adversaries. Although growing percentages of respondents are basing hunts on continuous monitoring or incident response, it may be more effective to conduct scheduled hunts. "Even a few hunts per year, when done correctly, can be highly effective for the organization," the authors say.<br></p>Tim McCollum0
