Privacy Law Puts California Consumers in Control<p>Maybe you've seen the "don't sell my data" buttons popping up on websites lately. If you live in California, you may have noticed similar signs in retail stores. They are harbingers of businesses scrambling to comply with California's new data privacy law.</p><p>The California Consumer Privacy Act (CCPA) went into effect on Jan. 1, and already it's become a mad rush. The state will start enforcing the law on July 1, but final rules are not yet in place. And initial compliance costs could top $55 billion, according to an economic assessment compiled for California's attorney general by Berkeley Economic Advising and Research LLC (see "CCPA and Data Privacy Resources" below).</p><p>The CCPA is a response to a litany of data privacy breaches and concerns over how Facebook, Google, and online marketers are compiling, using, and selling consumer data. In a recent <a href="" target="_blank">Pew Research Center study</a>, 81% of respondents said they have little or no control over the personal data companies collect on them.</p><p>The CCPA is about giving consumers that control. Under the law, California residents have the right to:</p><ul><li>Know how organizations use their data.</li><li>Request that their data be deleted.</li><li>Opt out of having their data collected, shared, and sold.</li></ul><p> <br> </p><p>"Americans should not have to give up their digital privacy to live and thrive in this digital age," California Attorney General Xavier Becerra said in October at <a href="" target="_blank">a press conference</a> announcing draft regulations for the CCPA.</p><h2>Doing Business With California Residents</h2><p>The CCPA follows the European Union's (EU's) General Data Protection Regulation (GDPR), in effect since May 2018. 
Just as GDPR covers all EU residents, the CCPA applies to any organization that does business with California residents, even if the organization is located out of state. Organizations are subject to the law if they meet one of three conditions:</p><ul><li>Generate more than $25 million in annual revenue.</li><li>Buy, sell, or share the personal information of 50,000 or more California consumers, households, or devices.</li><li>Derive at least half of their revenue from selling consumers' personal information.</li></ul><p> <br> </p><p>Although GDPR and the CCPA are similar, one area of difference is penalties. Under GDPR, regulators can fine organizations up to 4% of annual global revenue for data privacy violations. With the CCPA, fines are $2,500 per unintentional violation and $7,500 per intentional violation. </p><p>Because each person affected counts as a violation, those amounts can multiply quickly when hundreds of thousands of California residents' data may be involved. Further, the CCPA allows individuals to sue for damages if their data is exposed in a security breach.</p><h2>Data Collectors Are Most at Risk</h2><table class="ms-rteTable-default" width="100%" cellspacing="0"><tbody><tr><td class="ms-rteTable-default" style="width:100%;"><p> <strong>CCPA and Data Privacy Resources</strong> </p><p><em>CCPA</em><br></p><p>California Attorney General's Office, <a href="" target="_blank"> <span class="ms-rteThemeForeColor-1-0">Standardized Regulatory Assessment: California Consumer Privacy Act of 2018 Regulations</span></a> (PDF). 
</p><p>California Attorney General's Office, <a href="" target="_blank"> <span class="ms-rteThemeForeColor-1-0">California Consumer Privacy Act Regulations: Proposed Text of Regulations</span></a> (PDF).</p><p>BakerHostetler LLP and Practical Law, <a href="" target="_blank"> <span class="ms-rteThemeForeColor-1-0">CCPA and GDPR Comparison Chart</span></a> (PDF).</p><p>International Association of Privacy Professionals, <a href="" target="_blank"> <span class="ms-rteThemeForeColor-1-0">U.S. State Comprehensive Privacy Law Comparison</span></a>.</p><p>TrustArc, <a href="" target="_blank"> <span class="ms-rteThemeForeColor-1-0">Essential Guide to the CCPA</span></a> (PDF).</p><p><em>Data Privacy</em><br></p><p><em>IIA Bulletin</em>, <a href="" target="_blank"><span class="ms-rteThemeForeColor-1-0">International Data Privacy Day</span></a> (PDF).<br></p><p>U.S. National Institute of Standards and Technology, <a href="" target="_blank"> <span class="ms-rteThemeForeColor-1-0">NIST Privacy Framework: A Tool for Improving Privacy Through Enterprise Risk Management</span></a> (PDF). </p></td></tr></tbody></table><p>Organizations most likely to be impacted by the CCPA are those that collect and sell massive amounts of consumer data. At the top of that list are the big digital marketing and advertising companies. </p><p>Because consumers have to opt out of such collection under the CCPA, the law may not impact these companies' practices as much as GDPR did, according to Lauren Fisher, principal analyst at eMarketer in New York. That's because GDPR required consumers to opt in to data collection. "Marketers failing to uphold practices that make consumers feel comfortable with sharing data are likely to feel the effects," she explained in a <a href="" target="_blank">July 2019 eMarketer article</a>.</p><p>But it's not just the big marketers. Any company with lots of data on consumers — big companies, internet companies, and online retailers especially — is at risk. 
And the more consumer records they have, the bigger the risk, says Chris Babel, CEO of San Francisco-based TrustArc, which provides data privacy compliance technology. </p><p>Babel says many large global companies have to comply with GDPR, so they've had a head start on compliance, despite the differences in the two laws. But many big companies with lots of consumer data weren't impacted by GDPR because they don't do business outside the U.S. Take utility companies with their huge customer bases, for example. "They don't have more risks, but they have less time" to prepare for CCPA compliance, Babel says.</p><h2>Viewing Data From a Privacy Perspective</h2><p>The CCPA "requires businesses to fundamentally understand their data on a different level than they've ever had to before," Babel says. Typically, businesses have looked at data from a security standpoint, he explains. Their focus is on the point where the data is collected, whether it's encrypted, and where it's stored. </p><p>Babel says organizations need to look at data from a privacy perspective that considers what the data includes, how it is used, and where it flows — both within and beyond the business. That's far more complicated.</p><p>For starters, different businesses store data in different ways. One company might have lots of data but store it in a single database. Another company could have fewer records but spread them across hundreds of databases, Babel explains.</p><p>The next concern is what happens when a consumer requests to see his or her data, or asks the business to delete or stop selling it. According to the draft rules, organizations have 45 days to comply with such requests. During that time, the business must validate that the person is who he or she claims to be, locate the person's data, and comply with the request.</p><p>But that's just the data that resides within the organization. 
Babel says the CCPA presents substantial vendor management consequences because organizations are responsible for all the data they sell or share with other businesses. That means an organization responding to a consumer request also must contact any other organization with which it shared or sold that information so they can comply, as well.</p><p>"When you start peeling that back, layer by layer, it gets more complicated than most companies think," Babel says.</p><h2>The Drumbeat of Regulation</h2><p>But peel back the layers they must, because the drumbeat for consumer privacy protection doesn't stop with California. A similar law went into effect in Nevada in October 2019. Ten other U.S. states are currently considering consumer data privacy laws, according to the International Association of Privacy Professionals.</p><p>California's law isn't finished rolling out yet. In addition to finalizing new rules — the public comment period ended in December — there are business-to-business and employee data aspects that take effect in January 2021.</p><p>And just because California's rules aren't final, it doesn't mean organizations are off the hook. Attorney General Becerra <a href="" target="_blank">told Reuters</a> this month he will make an example of businesses that don't make efforts to comply, "to show that if you don't do it the right way, this is what is going to happen to you." </p>Tim McCollum
The Hidden Risks of the Cloud<p>Most large organizations are using Microsoft’s Azure cloud computing services in one form or another. Indeed, Microsoft claims more than 95% of Fortune 500 companies use Azure. Among other things, Azure supports data analytics, data warehousing, DevOps, storage, virtual desktops, and fully managed infrastructures. Additionally, organizations can integrate the services within Azure into a corporate network in the same way traditional data centers are connected. </p><p>Yet, despite Azure’s pervasiveness, many organizations don’t fully understand the effects the platform may have on daily operations and personnel, or the potential security implications. Azure’s services can introduce security and data privacy risks such as inappropriate administrative access, less clarity on role-based access permissions, or inappropriate remote access, as well as availability risks. For instance, in May 2019, Azure suffered a global outage caused by a domain name system configuration issue, according to <a href="" rel="nofollow"></a>, which covers cloud technology.</p><p>Internal audit can assist the organization in identifying the risks introduced with cloud computing. Partnering with the organization’s business units, understanding the technologies, and providing a systematic approach can help to remedy those risks. </p><h2>First Steps</h2><p>When auditing Azure, internal auditors should begin by obtaining an inventory of all Azure services in use by the organization. If an inventory does not exist, internal audit can help build one. Auditors can use native reports within Azure or custom scripts to export inventory data from the system.</p><p>Next, auditors should understand how these services are implemented, as well as IT’s control environment or processes related to cloud services. Are there documented procedures for administering the environment? 
Is formal change management used in all aspects of the cloud such as networking, storage, maintenance, and provisioning? </p><p>For example, with database platform-as-a-service offerings, auditors should understand the database platforms and how they are configured and secured. The organization may set up its own database servers in an Azure virtual environment or use Microsoft’s managed Azure SQL Database service. Each method poses unique audit considerations that need to be investigated. </p><p>A third step is performing a risk analysis to determine the risks associated with each of the services and their pervasiveness. Auditors should be aware of how moving these services out of traditional data centers impacts connectivity, communication requirements, separation of duties, latency, response time, administrative security, and compliance. Whenever possible, auditors should partner with IT to monitor key performance indicators based on risk to assist with ongoing control monitoring and operations. </p><h2>A Plan for the Cloud </h2><p>Once internal auditors have completed these three steps, they are ready to build their audit plan. In doing so, auditors need to address several aspects of the Azure platform.<br></p><p><strong>Azure Security Center</strong> Internal audit, IT, or management can quickly identify the organization’s Secure Score — which measures its security posture — through the Azure Security Center. The center provides security recommendations based on the organization’s current configurations and monitors system updates, vulnerabilities, network security, and other areas. </p><p>In addition, Security Center prioritizes recommendations, so auditors know where to start with their assessment. The dashboard groups the organization’s security hygiene into categories such as compute and apps, networking, data and storage, identity and access, and security solutions. Auditors should note that the dashboard and associated recommendations are alerts rather than enforced security configurations. 
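To make that triage concrete, here is a minimal Python sketch that groups an exported list of Security Center recommendations by category, highest severity first, so auditors can see where to start. The record layout, field names, and sample findings are assumptions for illustration only; a real export via the Azure portal, CLI, or REST API uses a different schema.

```python
# Sketch: triage exported Security Center recommendations by severity.
# The input shape below is an illustrative assumption, not the real export format.
from collections import defaultdict

def triage(recommendations):
    """Group recommendation names by category, highest severity first."""
    order = {"High": 0, "Medium": 1, "Low": 2}
    grouped = defaultdict(list)
    for rec in recommendations:
        grouped[rec["category"]].append(rec)
    return {
        cat: [r["name"] for r in sorted(recs, key=lambda r: order[r["severity"]])]
        for cat, recs in grouped.items()
    }

sample = [
    {"name": "Enable MFA for accounts with owner permissions",
     "category": "Identity and access", "severity": "High"},
    {"name": "Apply system updates", "category": "Compute and apps", "severity": "Medium"},
    {"name": "Enable Network Security Group flow logs",
     "category": "Networking", "severity": "Low"},
    {"name": "Restrict access through internet-facing endpoints",
     "category": "Networking", "severity": "High"},
]

plan = triage(sample)
```

An auditor could feed the resulting plan directly into audit workpapers, working each category from the top of its list down.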
<br></p><p><strong>Networking and Virtual Machines</strong> Cloud environments can be complex, with virtual networking, firewalls, and machines configured from a browser or with Microsoft’s Azure PowerShell. Azure administration can be performed via a web browser, and workloads can be administered remotely using many other secure and insecure methods. </p><p>Internal audit can help the organization take a strategic approach to risk by validating that remote access to the environment is restricted appropriately and Azure access is secured with multifactor authentication. Simple passwords can be stolen, compromised, or “brute-force” attacked. Once one machine is compromised, it can be used to compromise other Azure resources or attack other networked devices. Multifactor authentication goes beyond passwords by requiring more than one method of authentication for access. In addition to multifactor authentication, all administrative workload access from the internet should be configured for just-in-time access, which opens administrative ports only on request and for a limited time window. <br></p><p><strong>Azure Active Directory</strong> With more than one billion user identities hosted, Azure Active Directory is one of the most pervasive sources of risk for businesses using the platform. Services such as SQL databases, data warehouses, and virtual machines all leverage Azure Active Directory, as do Office applications. </p><p>Depending on how the organization has implemented Azure Active Directory, it can pose significant administrative access risks. Traditionally, when reviewing administrators for on-premises Active Directory, auditors will evaluate enterprise administrators and domain administrators. However, with Azure Active Directory, there are potentially global administrative accounts. These global accounts could create an account with elevated permissions on the organization’s domain. 
Moreover, they are unlikely to appear in any traditional audit script outputs. </p><p>On top of this, in Azure, administrators can create custom groups that have less visibility in the environment. Auditors need to fully understand the risk and compliance implications of these custom groups.<br></p><p><strong>Database Services</strong> Depending on how the organization stores its databases within Azure, it may have access to database security features such as logging, log retention, data encryption, and restricted elevated access. Auditors should understand which features are in place and how they are monitored.</p><h2>Security Assurance</h2><p>In addition to the security concerns in the previous section, internal auditors should review areas such as data loss prevention, data classification, encryption, and Azure certifications and compliance. Compliance may include the International Organization for Standardization’s ISO 27001, System and Organization Controls (SOC) reports, the U.S. Health Insurance Portability and Accountability Act, and the Payment Card Industry Data Security Standard. </p><p>Because these services are complex, internal audit could perform smaller audits around specific areas one at a time. For example, auditors could separate networking, Azure Active Directory, and Security Center into their own audits and prioritize them based on risk. Auditors can leverage free Azure benchmarks issued by the Center for Internet Security and Azure’s SOC reports when building out audit plans. </p><p>Auditing the Azure environment can be challenging because of the platform’s constantly changing and complex design. Internal audit may need to hire outside expertise to evaluate the design and operation of controls in these environments. But by overcoming these challenges and performing audits, internal audit can provide assurance that cloud operations are secure. <br></p>Kari Zahar
Bots of Assurance<p>As important as it is, internal auditing involves a lot of repetitive work to provide assurance and achieve the department’s objectives. There is supporting evidence to request, data to gather, workpaper templates to create, and controls to test. But imagine if these basic tasks could be automated.<br></p><p>That is the promise of robotic process automation (RPA). Many internal audit functions are looking to RPA to multiply the capacity of their teams. These departments are following the lead of the growing number of organizations that are using robots, or bots, to automate business processes — particularly repetitive and often time-consuming process steps. </p><p>RPA can help streamline processes by making them more efficient and more robust against errors. That may be one reason 40% of internal auditors polled during The IIA’s 2019 International Conference in Anaheim, Calif., reported that their organizations currently use RPA in business operations.</p><p>Audit functions can catch up with their organizations’ use of RPA by deploying bots as a digital workforce to enhance their assurance capabilities. Moreover, RPA can free internal audit’s experts from the drudgery of repetitive activities to focus on critical thinking tasks and managing exceptions.</p><h2>What’s in a Bot?</h2><p>RPA involves software that autonomously executes a predefined chain of steps in digital systems, under human management. Common capabilities of bots include filling in forms, making calculations, reading and writing to databases, gathering data from web browsers, and connecting to application programming interfaces. They also can apply different logical rules such as “if, then, else” or “do while.” And those bots don’t sleep, tire, forget, complain, or quit.</p><p>With RPA, bots improve over time as people specify the underlying rules, but they cannot learn on their own. 
Conversely, cognitive automation learns and improves its own algorithms over time based on the given data and experience. </p><p>RPA solutions can deliver benefits such as:</p><ul><li>Increased efficiency, especially in situations that once involved repetitive and recurring manual work processes.</li><li>Increased effectiveness and robustness of processes that previously were prone to high error rates.</li></ul><p><br>Organizations are most likely to realize these benefits when they use structured data, which provides the predefined instructions bots need to handle work scenarios. </p><h2>Five Types of Uses</h2><p>Internal audit departments may be slower than their organizations, as a whole, to deploy RPA, but there are many ways they can put the technology to use. Although these applications may differ, depending on each department’s circumstances and capabilities, they can be classified into five categories.<br></p><p><strong>Support</strong> This category of applications enables internal auditors to perform or document an audit procedure such as creating workpaper templates. One example of a support application is a bot that downloads attachments. Internal auditors spend a lot of time pulling supporting evidence from electronic sources or waiting for audit clients to do so manually. In a typical enterprise resource planning (ERP) system, auditors may need to take as many as 10 steps to access an electronic attachment. 
These steps include opening the ERP browser, typing the transaction code, entering the document number and company code, adding the fiscal year, going to the attachments, choosing the correct file path, and entering a file name that complies with a predefined structure.</p><table cellspacing="0" width="100%" class="ms-rteTable-default"><tbody><tr><td class="ms-rteTable-default" style="width:100%;"><strong>Bot Programming</strong><p><br>When setting up a bot, auditors not only must list the different processing steps, but also state how to get from one step to the next. For example, to access an electronic attachment, from the step where the ERP browser is opened, auditors instruct a bot to type in the transaction code, followed by pressing “enter.” The bot follows the same process as a human user to enter the document number, company code, and fiscal year. Each of the first two entries is followed by pressing “tab.” The third entry is followed by pressing “execute.” </p><p>From there, the bot clicks the attachment button, followed by clicking “Attachment List,” and double-clicking on the attachment file. Auditors specify a predefined valid file path for the bot to follow. Then, they instruct the bot to enter the file name and click “save.” Putting these steps into a loop sequence directs the bot to go through the activities over and over for each document specified in the source listing.<br></p></td></tr></tbody></table><p>A downloading attachment bot supports internal auditors by pulling electronic attachments automatically and more quickly — in less than 10 seconds per transaction. This can accelerate audit procedures related to vendor invoices, for example. 
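The looped step sequence described in “Bot Programming” can be reduced to a plain list of rules. The sketch below expresses it in Python purely for illustration; real RPA tools define these steps in their own designers or scripting languages, and the action names, the transaction code, and the file-path convention here are invented assumptions.

```python
# Sketch: the looped download-attachment sequence expressed as plain rules.
# Action names, the "FB03"-style transaction code, and the save path are
# illustrative assumptions, not any RPA vendor's or ERP's actual API.

def attachment_actions(doc, transaction_code="FB03", save_dir="/audit/evidence"):
    """Return the ordered bot actions for one source document."""
    return [
        ("open_browser", None),
        ("type", transaction_code), ("press", "enter"),
        ("type", doc["document_number"]), ("press", "tab"),
        ("type", doc["company_code"]), ("press", "tab"),
        ("type", doc["fiscal_year"]), ("press", "execute"),
        ("click", "attachment_button"),
        ("click", "Attachment List"),
        ("double_click", "attachment_file"),
        ("save_as", f"{save_dir}/{doc['company_code']}_{doc['document_number']}.pdf"),
    ]

source_listing = [
    {"document_number": "1900000042", "company_code": "1000", "fiscal_year": "2019"},
    {"document_number": "1900000043", "company_code": "1000", "fiscal_year": "2019"},
]

# The loop sequence: repeat the same steps for every document in the listing.
run_queue = [step for doc in source_listing for step in attachment_actions(doc)]
```

The file-name rule inside `save_as` is what gives every downloaded attachment the predefined structure the sidebar mentions.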
In this context, the bot can support auditors in reviewing potential duplicate payments not yet returned, invoice approvals that are not workflow based, and invoice verification as part of a purchase-to-pay process audit. “Bot Programming,” above, describes how auditors can use rules to set up a bot. <br></p><p><strong>Validation</strong> Bots in this category validate the accuracy or completeness of transactions under review. An example is a distance bot that validates mileage allowances for a full population of business trips, rather than by sampling. To calculate the distance between the starting point and destination manually using a geographical map service would take up to five steps. These steps include opening the web browser, typing in the starting point and destination address, and copying the distance displayed before continuing with the next distance. </p><p>The distance bot supports internal auditors by pulling as-is distances from the system automatically. This bot is good for performing travel expense audits, particularly in organizations with high expenses from mileage allowances.<br></p><p><strong>Control Testing</strong> This category of bots performs all or selected testing steps or attributes for internal controls, especially for IT application controls and IT general controls. Organizations often have a clear picture of the “to be” status of these controls. By translating this clear picture into rule-based procedures, auditors can program bots to test both the design and operating effectiveness of such controls. Bots can quickly identify inappropriate settings organizationwide. For example, within a purchase-to-pay process audit, bots can test IT application controls such as the duplicate-invoice check and the three-way match, and prepare standardized audit evidence. 
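The duplicate-invoice check and three-way match just named lend themselves to rule-based tests. A minimal Python sketch follows; the record layout, key fields, and tolerance are illustrative assumptions rather than an ERP's actual tables or the organization's configured control settings.

```python
# Sketch: rule-based tests for two IT application controls.
# Field names and tolerance are assumptions for illustration.
from collections import Counter

def duplicate_invoice_keys(invoices):
    """Flag (vendor, reference, amount) combinations occurring more than once."""
    keys = Counter((i["vendor"], i["reference"], i["amount"]) for i in invoices)
    return [k for k, n in keys.items() if n > 1]

def three_way_match(po_amount, receipt_amount, invoice_amount, tolerance=0.01):
    """True when purchase order, goods receipt, and invoice agree within tolerance."""
    return (abs(po_amount - receipt_amount) <= tolerance
            and abs(receipt_amount - invoice_amount) <= tolerance)

invoices = [
    {"vendor": "V100", "reference": "INV-778", "amount": 1250.00},
    {"vendor": "V100", "reference": "INV-778", "amount": 1250.00},  # duplicate
    {"vendor": "V230", "reference": "INV-103", "amount": 980.50},
]

dupes = duplicate_invoice_keys(invoices)
matched = three_way_match(1250.00, 1250.00, 1250.00)
```

Run over a full extract rather than a sample, rules like these are how a bot produces the standardized evidence described above.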
<br></p><p><strong>Data Generation</strong> For internal audits requiring access to extended data sets, bots in the data generation category provide access to new data sources such as electronic attachments and temporary data sets. Data extraction bots support upgraded analytics and can reduce false positives by considering new data sources. This capability can reduce follow-up activities for false positives while increasing efficiency. For example, these bots can extract data from PDF text in less than one second and from image files in less than three seconds.<br></p><p><strong>Reporting</strong> Auditors can use bots in this category to create reports or operate follow-up procedures. If internal audit does not use specialty audit software — or plan to introduce it — bots can automate repetitive activities such as report creation based on an audit program and sending follow-up reminders and inquiries.</p><h2>Plan for the Pitfalls</h2><p>The previous examples demonstrate how bots can enable the internal audit function to accomplish results more quickly and without human errors. While the improvements may outweigh the implementation costs, internal audit should be aware of risks across three dimensions: operations, reporting, and compliance. Internal auditors should manage these risks from the beginning and throughout the implementation of RPA. They should start by addressing some common pitfalls.<br></p><p><strong>Disregarding Other Automation Possibilities</strong> Do not automate audit procedures with RPA when other affordable software or more advantageous automation possibilities are available. For example, specialty audit software may be used for reporting and follow-up activities.<br></p><p><strong>Outsourcing Full Bot Programming</strong> RPA bots can be improved over time as auditors specify rule-based procedures to reduce the number of false positives and false negatives. 
Outsourcing this programming can make internal audit dependent on a third party to establish the logic followed by each bot. Instead, internal audit should obtain advice from external parties, if needed, while keeping most bot programming in-house.<br></p><p><strong>Overlooking the RPA Tool’s Terms of Use</strong> Software license terms may prevent internal audit from taking an existing RPA tool used in selected subsidiaries and using it for organizationwide audits. Typically, the license is for the licensee’s (subsidiary’s) direct business purposes — not for all affiliates across the organization. Examine the terms of use carefully.</p><h2>Starting With Bots</h2><p>Knowledge of RPA’s benefits and risks can prepare internal audit to explore the technology’s potential. These tips can help internal audit get started.<br></p><p><strong>Identify Use Cases</strong> Auditors should begin by identifying their department’s recurring activities. Where is time lost because of repetitive activities? Where does the department want to provide higher assurance by increasing sample sizes or extending substantive audit procedures? This identification exercise should be separate from the discussion about how to automate internal audit activities. It also may encompass both full and partial automation.</p><p>Internal audit can use workshops to identify automation opportunities. During these sessions, auditors can use a matrix to prioritize cases based on the potential benefits of automation and the feasibility of doing so. Mapping automation opportunities by end-to-end processes usually doesn’t pay off. Instead, internal audit should map subprocesses or process variants because these are at an actionable level. However, not all subprocesses or variants are an opportunity for automation. </p><p>In addition, internal audit should not create silos between different automation possibilities. When assessing use cases, internal audit should consider RPA as one alternative among many. 
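The benefit/feasibility matrix from those workshops can be reduced to a simple scoring sketch. The 1-to-5 scale, the cutoff values, and the sample subprocesses below are invented for illustration; each department would set its own.

```python
# Sketch: rank automation candidates on a benefit/feasibility matrix.
# The scoring scale and cutoffs are assumed conventions, not a standard.

candidates = [
    {"subprocess": "Pull invoice attachments", "benefit": 5, "feasibility": 4},
    {"subprocess": "Mileage distance checks", "benefit": 3, "feasibility": 5},
    {"subprocess": "Draft fieldwork interview notes", "benefit": 4, "feasibility": 1},
]

def prioritize(cases):
    """Order cases by combined score; drop those weak on either axis."""
    viable = [c for c in cases if c["benefit"] >= 2 and c["feasibility"] >= 2]
    return sorted(viable, key=lambda c: c["benefit"] * c["feasibility"], reverse=True)

ranked = prioritize(candidates)
```

The point of the cutoff is the workshop lesson above: a high-benefit subprocess that is barely feasible should fall out of scope rather than drag down the pilot.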
<br></p><p><strong>Assess the Internal RPA Landscape</strong> Because internal audit is not usually an early adopter of RPA within organizations, the department should identify tools and resources already in use. To realize RPA’s full potential, auditors also should assess the various tools on the market. </p><p>Instead of going it alone, internal audit can partner with the organization’s existing RPA users to develop a pilot to demonstrate how RPA can be used in audits. Choosing a use case that allows internal audit to quantify its benefits can support internal discussions and decisions about using RPA.<br></p><p><strong>Motivate the Internal Audit Team</strong> The pilot’s results and the possibilities of learning from RPA are two main drivers for motivating the internal audit team to apply the technology. Online tutorials, community forums, and free trial versions make demonstrating learning opportunities easy. These resources can provide online training and enable internal auditors to become familiar with RPA tools. Trial versions, in particular, can show auditors how easy the tools are to use, which can motivate them to adopt the technology.</p><h2>RPA in Alignment</h2><p>In addition to these three tips for getting started, internal audit should create an implementation plan and align RPA with its overall digital labor strategy. This plan should balance an understanding of the technology’s risks with the benefits of target-oriented approaches to implementing it. </p><p>To realize RPA’s benefits in the long run, internal audit should deploy it from a governance perspective. The board’s support especially can enable the chief audit executive to develop a clear plan for automating different internal audit processes. Because other business functions may be using RPA, internal audit needs to align its RPA implementation with these existing activities to generate synergies and avoid duplication of efforts. 
That understanding can position internal audit to put RPA to use and also drive effective reviews of the organization’s RPA program. <br></p>Justin Pawlowski
Governments Under Cyber Siege<p>There's trouble down on the Bayou. Last Friday, city officials in New Orleans acted quickly to try to stave off a cyberattack on city government computers. A public address announcement at City Hall ordered city employees to shut down their computers that morning after phishing emails seeking passwords were discovered, <a href="" target="_blank">the Associated Press reports</a>. So far, the city has not received a ransom demand, but state and federal law enforcement officials are investigating.</p><p>Just last month, the governor of Louisiana declared a state of emergency after a ransomware attack on servers at the state's Office of Motor Vehicles. The state responded by shutting down server traffic to neutralize the attack, <a href="" target="_blank"><em>Business Insider</em> reports</a>. "These protective actions likely saved the state from data loss and weeks of server outages," officials said in a press release.</p><p>The attacks in New Orleans and in the state capital of Baton Rouge are reminders that municipal and state governments are prime targets of cyber criminals. A few days before the New Orleans attack, a ransomware attack compromised city government computers in Pensacola, Fla., impacting government services such as online payments. In the past two years, Atlanta and Baltimore suffered similar attacks that severely harmed city government systems and impeded public services.</p><h2>The Ransomware Threat</h2><p>Ransomware attacks encrypt data on compromised systems, and the attackers then demand payment to release it. Phishing emails and malware typically are the weapons for spreading ransomware, and they are among the most common threat types detected by organizations, according to the <a href="" target="_blank">2019 Cybersecurity Report Card</a> from threat-investigation technology company DomainTools. </p><p>Ransomware has targeted companies, governments, hospitals, and other organizations. 
In some cases, organizations have agreed to pay the ransom, although law enforcement officials and security experts advise against doing so. Forrester Research forecasts that ransomware incidents will increase in 2020, as attackers seek to cash in by targeting consumer devices and "demanding ransom from the [device] manufacturer."</p><h2>Weaponizing Data</h2><p>Attackers are getting more sophisticated, too. Forrester predicts attackers will "weaponize" data and artificial intelligence in the coming year. With companies compiling ever-more data to gain insights, attackers have greater incentive to go after that data, Forrester notes in its <a href="" target="_blank">Predictions 2020 report</a>. Moreover, technologies such as the Internet of Things come with fewer controls, expanding the opportunities for attack. </p><p>"Simply put, there are more attackers with more sophisticated tools aimed at a larger attack surface," Forrester says. "And those attackers want enterprises to pay."</p><p>That financial risk should get the attention of senior executives and boards, as well. That's the focus of a new Committee of Sponsoring Organizations of the Treadway Commission report, <a href="" target="_blank">Managing Cyber Risk in a Digital Age</a> (PDF). The Deloitte-authored report details how organizations can apply the <em>Enterprise Risk Management–Integrating With Strategy and Performance</em> framework to cyber risk.</p><h2>Quick Thinking</h2><p>In New Orleans, city officials decided to shut down systems soon after the city discovered the attack. Officials said the city backs up financial records on a cloud-based system, and the city's emergency services were using telephones and radios to operate while systems were down. "We will go back to marker boards. We will go back to paper," Collin Arnold, the city's homeland security director, told the Associated Press. </p>Tim McCollum
Getting to Know Common AI Terms<p>Artificial intelligence (AI) systems development and operation involve terms and techniques that may be new to some internal auditors, or that carry meanings or applications different from their normal audit usage. Each of the terms below has a long history in the development and execution of AI processes. As such, they can promote a common understanding of AI terms that can be applied to <a href="/2019/Pages/Framing-AI-Audits.aspx">auditing these systems</a>. </p><h2>Locked Datasets </h2><p>Datasets are difficult to create because independent judges should review their features and uses, and then validate them for correctness. These judgments drive the system in the training phase of system development, and if the data is not validated, the system may learn based on errors. </p><p>In machine learning systems, datasets are normally "locked," meaning data is not changed to fit the algorithm. Instead, the algorithm is changed based on the system predictions derived from the data. As a safety precaution, data scientists usually are barred from examining the datasets to determine the reasons for such changes. This prevents them from biasing the algorithm given their understanding of the data relationships.  </p><p>Consider a system that reviews the ZIP codes of business accounts. The system may fail to recognize ZIP codes beginning with "0," such as 01001 for Agawam, Mass., or that contain alphanumeric characters such as V6C1H2 for Vancouver, B.C. Locking the dataset prevents the data scientists from inspecting the errors directly. Instead, they would have to investigate why the system is interpreting some accounts differently than others and whether the algorithm contains a defect. Barring data scientists in this way is another form of locking the dataset.  
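</p><p>To make the ZIP code pitfall concrete, here is a minimal sketch (the account data and function names are invented for illustration, not drawn from any real system) showing how coercing postal codes to integers produces exactly the errors described, while validating them as strings does not:</p>

```python
# Hypothetical data: a U.S. ZIP code with a leading zero and a Canadian
# alphanumeric postal code, the two examples described in the text.
raw_accounts = [
    {"name": "Agawam account", "zip": "01001"},
    {"name": "Vancouver account", "zip": "V6C1H2"},
]

def naive_parse(code):
    """Mimics a system that assumes postal codes are integers."""
    try:
        return int(code)  # "01001" becomes 1001; "V6C1H2" fails outright
    except ValueError:
        return None

def robust_parse(code):
    """Keeps the code as a validated string instead of converting it."""
    code = code.strip().upper()
    if len(code) == 5 and code.isdigit():   # U.S. ZIP code
        return code
    if len(code) == 6 and code.isalnum():   # simplified Canadian format check
        return code
    return None

print([naive_parse(a["zip"]) for a in raw_accounts])   # [1001, None]
print([robust_parse(a["zip"]) for a in raw_accounts])  # ['01001', 'V6C1H2']
```

<p>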
</p><h2>Third-party Judges</h2><p>Because historical datasets are not always verified before AI system use, the internal auditor needs to ensure an appropriate validation process is in place to confirm data integrity. Use of automated systems to judge data integrity may mask AI issues that adversely affect the quality of the output.  </p><p>Therefore, a customary practice in the industry has been to use independent, third-party judges for validation purposes. The judges, however, must have sufficient expertise in the data domain of the system to render valid test results. If they use algorithms as part of their validation process, then those, too, must be validated independently. Usually any inconsistency in the test results during judging is reviewed and reconciled as part of the process. A well-designed validation process will help avoid user acceptance of system outcomes that are inherently flawed.</p><h2>Overfitting and Trimming</h2><p>The data scientist selects datasets to train the AI system that are intended to reflect the actual data domain. Sometimes those datasets reflect ambiguous conditions that should be trimmed or deleted to enhance the probability of error-free results. </p><p>For example, the first name "Pat" can apply to either gender. To avoid system confusion, the data scientist would likely trim it from the training dataset. However, the first name "Tracy," although historically applicable to both male and female, is more commonly a female name. Trimming "Tracy" from the training datasets might bias system outcomes toward males without eliminating much ambiguity when the production data is processed.  </p><p>The problem with trimming is that it can cause data overfit in an algorithm and biased system results during the production phase. Data overfit occurs when the training dataset is trimmed to derive a particular algorithm, rather than the algorithm adjusting itself to a training dataset that represents the actual data domain. 
The resulting algorithm is not based on a representative data domain. Internal auditors should examine process controls over the training dataset to safeguard against data overfit caused by excessive data trimming designed to achieve a desired algorithmic outcome. </p><h2>Outliers</h2><p>It is important for the data scientist to examine data outliers. For example, a machine learning system may be 90% accurate in correcting misspelled words, but it also may flag numbers as errors and correct them. Those corrections can wreak havoc with critical documents, such as financial reports, if the data scientist failed to review system predictions for such outliers.</p><h2>Metrics</h2><p>Performance metrics should be used to assess AI system accuracy (How close are the predictions to the true values?) and precision (How consistent are the outcomes between system iterations?). Such metrics are a best practice, because they indicate performance issues in AI system operations, including:</p><ul><li>False positives: incorrectly flagging an acceptable item as an issue.</li><li>False negatives: incorrectly identifying an unacceptable item as correct.</li><li>Missed items: not addressing all items in the population. </li></ul><p> <br> </p><p>Although accuracy and precision metrics show how well a system finds issues, they do not tell the entire story; the nature and extent of false positives, false negatives, and missed items also must be measured to gauge the full performance of a system. A formal review process to cover these issues improves system performance and helps decrease audit risk. </p><h2>User Interpretation</h2><p>Internal auditors must be careful to safeguard the integrity of the AI audit from user misinterpretations of system outcomes. 
That is because the system may generate supportable conclusions that are simply misunderstood or ignored. </p><p>For instance, if a system were to predict that jungle fires are related to climate change, this does not confirm that climate change has caused the jungle fires. Earlier this year, news organizations reported that climate change caused fires in the Amazon jungle. However, <a href="" target="_blank">NASA had asserted</a> that the fires were in line with previous years, with no change over time and no relation to global warming. While there might be a correlation between the two, causation should not be inferred from the system prediction. </p><p>Internal auditors need to take the human factor into account when assessing system quality. System users may simply refuse to believe or act upon system predictions because of bias, personal preference, or preconceived notions. </p>Dennis Applegate
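<p>The performance measures described under "Metrics" above can be sketched in code. This is a hypothetical illustration with invented data, using the convention that the system flags items as issues:</p>

```python
def performance_metrics(flagged, actual_issues, reviewed):
    """Derive the three performance measures for a system that flags items.

    flagged: items the system flagged as issues
    actual_issues: items independent judges confirmed as issues
    reviewed: items the system addressed at all
    """
    false_positives = flagged - actual_issues               # flagged, but acceptable
    false_negatives = (actual_issues & reviewed) - flagged  # reviewed, but cleared in error
    missed_items = actual_issues - reviewed                 # never addressed
    return false_positives, false_negatives, missed_items

# Invented example: five expense reports, two flagged by the system.
flagged = {"r1", "r2"}
actual = {"r2", "r3", "r5"}
reviewed = {"r1", "r2", "r3", "r4"}  # r5 was never processed

fp, fn, missed = performance_metrics(flagged, actual, reviewed)
print(fp, fn, missed)  # {'r1'} {'r3'} {'r5'}
```
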
Framing AI Audits<p>Artificial intelligence (AI) is transforming business operations in myriad ways, from helping companies set product prices to extending credit based on customer behavior. Although the technology is still in its nascent stage, organizations are using AI to rank money-laundering schemes by degree of risk based on the nature of the transaction, according to a July EY analytics article. Others are leveraging AI to predict employee expense abuse based on the expense type and vendors involved. Small wonder that McKinsey & Company estimates that the technology could add $13 trillion per year in economic output worldwide by 2030. </p><p>If AI is not on internal audit's risk assessment radar now, it will be soon. As AI transitions from experimental to operational, organizations will increasingly use it to predict outcomes supporting management decision-making. Internal audit departments will need to provide management assurance that the predicted outcomes are reasonable by assessing AI risks and testing system controls.</p><h2>Evolving Technology</h2><p>AI uses two types of technologies for predictive analytics — static systems and machine learning. Static systems are relatively straightforward to audit, because with each system iteration, the predicted outcome will be consistent based on the datasets processed and the algorithm involved. If an algorithm is designed to add a column of numbers, it remains the same regardless of the number of rows in the column. Internal auditors normally test static systems by comparing the expected result to the actual result. </p><p>By contrast, there is no such thing as an expected result in machine learning systems. Results are based on probability rather than absolute correctness. For example, the results of a Google search that float to the top of the list are those that are most often selected in prior searches, reflecting the most-clicked links but not necessarily the preferred choice. 
Because the prediction is based on millions of previous searches, the probability is high — though not necessarily certain — that one of those top links is an acceptable choice. </p><p>Unlike static systems, the Google algorithm, itself, may evolve, resulting in potentially different outcomes for the same question when asked at different intervals. In machine learning, the system "learns" what the best prediction should be, and that prediction will be used in the next system iteration to establish a new set of outcome probabilities. The very unpredictability of the system output increases audit risk absent effective controls over the validity of the prediction. For that reason, internal auditors should consider a range of issues, risks, controls, and tests when providing assurance for an AI business system that uses machine learning for its predictions.</p><h2>AI System Development</h2><p><img src="/2019/PublishingImages/Applegate_Three-Phases.jpg" class="ms-rtePosition-2" alt="" style="margin:5px;width:500px;height:251px;" />The proficiency and due professional care standards of the International Professional Practices Framework require internal auditors to understand AI concepts and terms, as well as the phases of development, when planning an AI audit (see "Three Phases of Development," right). Because data fuels these systems, auditors must understand AI approaches to data analysis, including their effect on the system algorithm and its precision in generating outcome probabilities.</p><p> <em>Features</em> define the kinds of data for a system that would generate the best outcome. If the system objective is to flag employee expense reports for review, the features selected would be those that help predict the highest payment risk. These could include the nature of the business expense, vendors and dollar amounts involved, day and time reported, employee position, prior transactions, management authorization, and budget impact. 
A data scientist with expertise in this business problem would set the confidence level and predictive values and then let the system learn which features best determine the expense reports to flag.</p><p> <em>Labels</em> represent data points that a system would use to name a past outcome. For instance, based on historical data, one of the labels for entertainment expenses might be "New York dinner theater on Saturday night." The system then would know such expenses were incurred for this purpose on that night in the past and would use this data point to predict likely expense reports that might require close review before payment. </p><p> <em>Feature engineering</em> limits the features selected to a critical few. Rather than provide a correct solution to a given problem, such as which business expense reports contain errors or fraud, machine learning calculates the probability that a given outcome is correct. In this case, the system would calculate which expense reports are likely to contain the highest probability of errors or fraud based on the features selected. The system then would rank the outcomes in descending order of probability. </p><p> <em>Machine learning</em> involves merging selected features and outcome labels from diverse datasets to train a system to generate a model that will predict a relationship between a set of features and a given label. The resulting algorithm and model are then refined in the testing phase using additional datasets. This phase may consider hundreds of features at once to discover which features yield the highest outcome probability based on the assigned labels. </p><p>Feature engineering then reduces the number of system features to enhance the precision of the outcome probabilities. Based on the testing phase, for example, the nature of the expense, the dollar amounts involved, and the level of the employee's position may best indicate high-risk business expense reports requiring close review. 
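</p><p>The ranking step described above can be sketched as follows. The feature names and weights are invented for illustration; in a real machine learning system the weights would be learned from labeled historical expense reports rather than assigned by hand:</p>

```python
import math

# Invented weights for three illustrative features of an expense report.
WEIGHTS = {"amount_over_limit": 1.8, "weekend_entertainment": 1.2, "no_receipt": 2.0}
BIAS = -3.0

def risk_probability(report):
    """Logistic model: probability that a report needs close review."""
    z = BIAS + sum(WEIGHTS[f] for f, present in report["features"].items() if present)
    return 1 / (1 + math.exp(-z))

reports = [
    {"id": "EXP-1", "features": {"amount_over_limit": True,
                                 "weekend_entertainment": False, "no_receipt": True}},
    {"id": "EXP-2", "features": {"amount_over_limit": False,
                                 "weekend_entertainment": True, "no_receipt": False}},
]

# Rank the reports in descending order of probability, as the text describes.
ranked = sorted(reports, key=risk_probability, reverse=True)
print([(r["id"], round(risk_probability(r), 2)) for r in ranked])
# [('EXP-1', 0.69), ('EXP-2', 0.14)]
```

<p>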
During the production phase, as the system calculates the risk of errors and fraud in actual expense reports, it may modify the algorithm based on actual output probabilities to improve the accuracy of future predictions. Doing so would create continuous system learning not seen in static systems. </p><p>In AI system development, it is important for organizations to establish an effective control environment, including accountability for compliance with corporate policies. This environment also should comprise safeguards over user access to proprietary or sensitive data, and performance metrics to measure the quality of the system output and user acceptance of system results. </p><h2>A Risk/Control Audit Framework</h2><table class="ms-rteTable-default" width="100%" cellspacing="0"><tbody><tr><td class="ms-rteTable-default" style="width:100%;"><p> <strong>Training Phase</strong></p><p>Considerations for adjusting the assessed level of AI audit risk include:</p><ul><li>If system reviews are in place to evaluate training data modifications, deletions, or trimming, this condition should help prevent overfitting the training dataset to generate a desired result, reducing audit risk.<br></li><li>New AI systems may use datasets of existing systems for reasons of time and cost. Such datasets, however, may contain bias and not include the kinds of data needed to generate the best system outcomes, increasing audit risk.<br> </li><li>AI datasets that consist of numerous data records should contain some errors. In fact, an error-free dataset would indicate a bad dataset, because the occurrence of errors should match the natural rate. For example, if 5% of employee expense reports are filled in incorrectly and are missing key data, then the training dataset should contain a similar frequency. If not, then audit risk increases. 
​</li></ul></td></tr></tbody></table><p>Nine procedures frame the audit of an AI system during the training, testing, and production phases of development. The framework provides a point of departure for AI audit planning and execution. Assessed risk drives the controls expected and subsequent internal auditor testing. </p><p>Internal auditors may need to adjust the procedures based on their preliminary survey of the AI system under audit, including a documented understanding of the system development process and an analysis of the relevant system risks and controls. Moreover, as auditors complete and document more of these audits, it may be necessary to adjust the framework.</p><p>Normally, internal auditors adjust their assessment of risk and their resulting audit project plans based on observations made in the preliminary audit survey. The boxes, at right, depict conditions that may alter assessed risk as well as modify expected AI system controls and subsequent audit testing during specific phases of development. </p><p> <strong>Data Bias (Training Phase)</strong> Use of datasets that are not representative of the true population may create bias in the system predictions. Bias risk also can result from failing to provide appropriate examples for the system application.</p><p>A control for data bias is to establish a system review and approval process to ensure there are verifiable datasets and system probabilities that represent the actual data conditions expected over the life of the system. Audit tests of control include ensuring that: </p><p></p><ul><li>Qualified data scientists have judged the datasets.</li><li>The confidence level and predictive values are reasonable given the data domain.</li><li>Overfitting has not biased system predictions. 
</li></ul><p> <br> <strong>Data Recycling (Training)</strong> This risk arises when developers recycle the wrong datasets for a new application, or impair the performance or maintenance of existing systems by using those datasets to create or update a new application.</p><p>One control for this risk is independently examining repurposed data for compliance with contractual or other requirements. In addition, organizations can determine whether adjustments in the repurposed data have been made without impacting other applications. </p><p>Examples of control tests are: </p><p></p><ul><li>Evaluating the nature, timing, and extent of the independent examinations.</li><li>Testing the records of other applications for performance or maintenance issues that stem from the mutually shared datasets.</li></ul><p> <br> <strong>Data Origin (Training)</strong> Unauthorized or inappropriately sourced datasets can increase the risk of irrelevant, inaccurate, or incomplete system predictions during the production phase.</p> <p></p><p>To control this risk, the organization should inspect datasets for origin and relevance, as well as compliance with contractual agreements, company protocols, or usage restrictions. The results of these inspections should be documented. 
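</p><p>As a hypothetical illustration of such an inspection (the source names and contract references are invented), a simple origin check might compare each dataset's recorded source against an approved list:</p>

```python
# Invented inventory: each dataset records where it came from; the approved
# list maps authorized sources to their documented usage agreements.
APPROVED_SOURCES = {
    "erp_export": "contract-2023-117",
    "crm_extract": "contract-2022-045",
}

def inspect_origin(datasets):
    """Return the names of datasets whose origin is missing or unapproved."""
    return [ds["name"] for ds in datasets
            if ds.get("source") not in APPROVED_SOURCES]

inventory = [
    {"name": "training_q1", "source": "erp_export"},
    {"name": "training_q2", "source": "vendor_scrape"},  # not on the approved list
    {"name": "training_q3"},                             # origin never documented
]
print(inspect_origin(inventory))  # ['training_q2', 'training_q3']
```

<p>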
</p><p>To test controls, auditors should:</p><p></p><ul><li>Review data source agreements to ensure use of datasets is consistent with contract terms and company policy.</li><li>Examine the quality of the inspection reports, focusing on the propriety of data trimmed from the datasets.<br><br></li></ul><table class="ms-rteTable-default" width="100%" cellspacing="0"><tbody><tr><td class="ms-rteTable-default" style="width:100%;"><p> <strong>​</strong><strong>Testing Phase</strong></p><p>Considerations for adjusting the assessed level of AI audit risk include:</p><ul><li>If independent, third-party judges tested the system data, but no process is in place to reconcile differences in test results between judges, then audit risk increases. </li><li>Because system predictions are based on probability, perfect test results are not possible. If third-party judges evaluating the test results find no issues, then data overfit may have occurred, increasing audit risk. </li><li>If the system has not been validated to prevent user misinterpretations caused by incorrect data relationships, such as flagging business expense reports based on employee gender, then audit risk increases. Alternatively, if user interpretations based on system predictions have not been validated to ensure system data supports the interpretation, then audit risk also increases. </li><li>If data scientists fail to use representative datasets with examples involving critical scenarios to train the system, then audit risk increases. </li><li>If the datasets are not locked during testing, then the data scientist may adjust the algorithm to inadvertently process the data in a biased manner, increasing audit risk.</li><li>If the datasets are locked during testing, but the data scientist fails to review the actual system prediction for integrity, then audit risk increases. 
</li></ul></td></tr></tbody></table><p> <strong>Data Conclusion (Testing Phase)</strong> Inappropriately tested data relationships could result in improper system conclusions that are based on incorrect assumptions about the data. These conclusions could create bias in management decisions.</p><p>The control for this risk is to ensure each feature of the system contains data for which the purpose has been approved for use. Developers should assess the results of such data for misinterpretation and correct it, as appropriate. </p><p>Testing this control involves reviewing user interpretations and subsequent management decisions based on system predictions. By performing this test, organizations can ensure that the data supports the conclusions reached and decisions made by management.</p><p> <strong>Data Overfit (Testing)</strong> With this issue, the risk is that datasets may not reflect the actual data domain. Specifically, data outliers may have been trimmed during system testing, leading to a condition that overfits the algorithm to a biased dataset. That could cause the system to respond poorly during the production phase.</p><p>Organizations can control for this risk by validating datasets in system testing to ensure that the samples used represent all possible scenarios and that the datasets were modified appropriately to obtain the currently desired system outcome.</p><p>To test this control, internal auditors should review all outlier, rejected, or trimmed data to ensure that:</p><p></p><ul><li>Relevant data has not been trimmed from datasets.</li><li>Datasets remain locked throughout testing.</li><li>The algorithm has processed the data in an unbiased way.</li></ul><p> <br> <strong>Data Validation (Testing) </strong>Failure to validate datasets for integrity through automated systems or independent, third-party judges can lead to unsupported management decisions or regulatory violations. 
An example would be allowing the personal data of European Union (EU) citizens to be accessed outside of the EU in violation of Europe's General Data Protection Regulation.</p><p>Organizations can control for this risk by implementing a validation process that compares datasets to the underlying source data. If the organization uses automated systems, it should ensure the process reveals all underlying issues affecting the quality of the system output. If the organization uses independent, third-party judges, it should ensure the process allows judges the access they need to the raw data inputs and outputs.</p><p>To test these controls, internal auditors should:</p><p></p><ul><li>Assess the process and conditions under which the validation took place, assuring that all high-risk datasets used in the system were validated.</li><li>Confirm randomly selected datasets with underlying source data.</li><li>When datasets are based on current system data, validate such data is correct to avert a flawed assessment of actual system data.</li></ul><p> <br> <strong>Data Processing (Production Phase)</strong> Failing to validate internal systems processing can cause inconsistent, incomplete, or incorrect reporting output and user decisions. However, periodically reviewing and validating input and output data at critical points in the data pipeline can mitigate this risk and ensure processing is in accordance with the system design. </p><p>Auditors can test this control by:</p><ul><li>Reconstructing selected data output from the same data input to validate system outcomes.</li><li>Performing the system operation again.</li><li>Using the results to reassess system risk.</li></ul><p> <br> <strong>Data Performance (Production)</strong> If there is a lack of performance metrics to assess the quality of system output, the organization will fail to detect issues that diminish user acceptance of system results. 
For example, an AI system could fail to address government tax or environmental regulations over business activity. </p><p>Controlling data performance risk requires organizations to establish metrics to evaluate system performance in both the training and production phases. Such metrics should include the nature and extent of false positives, false negatives, and missed items. In addition, developers should implement a feedback loop for users to report system errors directly, among other performance measures. </p><table class="ms-rteTable-default" width="100%" cellspacing="0"><tbody><tr><td class="ms-rteTable-default" style="width:100%;"><p>​<strong>Production Phase</strong></p><p>Considerations for adjusting the assessed level of AI audit risk include:</p><ul><li>Systems that leverage the datasets of existing systems already audited should lower overall audit risk and not require as much audit testing as new systems using datasets not previously audited.<br></li><li>Systems that process inputs and outputs at all stages of the data pipeline should facilitate validation of system-supported user decisions and lower overall audit risk. However, if data inputs and outputs are processed in a black-box environment, confirming internal system operations may not be possible. 
That would increase the audit risk of drawing the wrong conclusion about the reasonableness of the system output.<br> </li><li>If performance metrics are used to measure the quality of the data output, user acceptance of system results, and system compliance with government regulations, then audit risk decreases.<br></li><li>If performance metrics monitor both system training and production data, then audit risk decreases.<br></li><li>If performance metrics measure system accuracy but not precision, overlooking a possible system performance issue, then audit risk increases.<br></li><li>Well-designed systems prevent unauthorized access to system data based on company protocols and regulatory requirements and routinely monitor access for security breaches, decreasing audit risk. </li></ul></td></tr></tbody></table><p>To test these controls, internal auditors should:</p><p></p><ul><li>Examine reported variances from established performance measures. </li><li>Test a representative sample of performance variances to confirm whether management's follow-up or corrective action was appropriate. </li><li>Determine whether such action has enhanced user acceptance of system results.</li></ul><p> <br> <strong>Data Sensitivity (Production)</strong> With this issue, the risk is unauthorized access to personally identifiable information or other sensitive data that violates regulatory requirements. Controls include ensuring documented procedures are in place that restrict system access to authorized users. Additionally, ongoing monitoring for compliance is needed. Control testing includes:</p><p></p><ul><li>Comparing system access logs to a documented list of authorized users.</li><li>Notifying management about audit exceptions.</li></ul><h2>Algorithmic Accountability</h2><p>As AI technology matures, algorithmic bias in AI systems and lack of consumer privacy have raised ethical concerns for business leaders, politicians, and regulators. 
Nearly one-third of CEO respondents ranked AI ethics risk as one of their top three AI concerns, according to Deloitte's 2018 State of AI and Intelligent Automation in Business Survey. </p><p>What's more, the U.S. Federal Trade Commission (FTC) addressed hidden bias in training datasets and algorithms and its effect on consumers in a 2016 report, Big Data: A Tool for Inclusion or Exclusion? Such bias could have unintended consequences on consumer access to credit, insurance, and employment, the report notes. A recent U.S. Senate bill, the Algorithmic Accountability Act of 2019, would direct the FTC to require large companies to audit their AI algorithms for bias and their datasets for privacy issues, as well as correct them. If enacted, this legislation would impact the way in which such systems are developed and validated. </p><p>Given these developments, the master audit plan of many organizations could go beyond rendering assurance on AI system integrity to evaluating compliance with new regulations. Internal auditors also may need to serve as an ethical conscience for the business leaders responsible for detecting and eliminating AI system bias, much as they do for the governance of financial reporting controls. </p><p>These responsibilities may make it harder for internal audit to navigate the path to effective AI system auditing. Yet, those departments that embark on the journey may be rewarded by improved AI system integrity and enhanced professional responsibility. </p><p><em>To learn more about AI, read </em><a href="/2019/Pages/Getting-to-Know-Common-AI-Terms.aspx"><em>"Getting to Know Common AI Terms."</em></a><br></p>Dennis Applegate
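<p>The dataset-validation test described under "Data Validation" above, confirming randomly selected records against the underlying source data, can be sketched as follows (a minimal illustration with invented account data):</p>

```python
import random

def validate_sample(dataset, source, sample_size, seed=0):
    """Compare a random sample of dataset records to the source of record.

    Returns the keys whose values disagree, i.e., candidate audit exceptions.
    The fixed seed keeps the selection reproducible for workpapers.
    """
    rng = random.Random(seed)
    keys = rng.sample(sorted(dataset), min(sample_size, len(dataset)))
    return sorted(k for k in keys if source.get(k) != dataset[k])

# Invented account balances: the dataset disagrees with the source on acct-2.
dataset = {"acct-1": 100, "acct-2": 250, "acct-3": 75}
source = {"acct-1": 100, "acct-2": 205, "acct-3": 75}
print(validate_sample(dataset, source, sample_size=3))  # ['acct-2']
```
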
Editor's Note: On Pace With Technology<p>The accelerating pace of technology advancements is creating significant disruption within organizations — and it appears internal auditors may not be keeping pace. A new report from The IIA, <a href="" target="_blank">OnRisk 2020: A Guide to Understanding, Aligning, and Optimizing Risk</a>, reveals that one risk area in which internal audit may be falling behind is data and new technology. </p><p>According to OnRisk, only 17% of internal auditors consider themselves knowledgeable about data and new technology, compared with 42% of board members and 26% of C-suite respondents. For auditors to be taken seriously in the boardroom, they must address these knowledge gaps. </p><p>The OnRisk report recommends that chief audit executives "dedicate resources to better understanding how the organization is leveraging data and technology in new ways." Internal audit should be able to provide assurance on the impact of data and new technology on the "collection, management, and protection of data," the report says.</p><p>To do that, internal auditors need to ensure they're educating themselves in these areas. In that vein, auditors may want to read "<a href="/2019/Pages/Framing-AI-Audits.aspx">Framing AI Audits</a>," which takes an in-depth look at internal audit's role in assessing artificial intelligence risks and testing system controls. Also in this issue, "<a href="/2019/Pages/Bots-of-Assurance.aspx">Bots of Assurance</a>" considers how audit functions can catch up with their organizations' use of robotic process automation by deploying bots to enhance their assurance capabilities. 
Finally, readers may want to check out the first of a three-part series of reports coming from Deloitte and the Internal Audit Foundation on new technologies, <a href="">Moving Internal Audit Deeper Into the Digital Age: Part 1.</a> </p><p>According to OnRisk, as risks around data and new technology grow in relevance over the next five years, risk management players need to build knowledge in this area. Internal audit professionals who take their fate into their own hands and improve their tech knowledge will likely find themselves in high demand, as OnRisk also notes organizations are struggling to attract and retain talent with data and IT skills. </p><p>With this issue, we say goodbye to our designer, Joe Yacinski. I have worked with Joe since I joined The IIA in late 2000, and I will greatly miss our collaborations. His thoughtful and creative approaches to the many challenging articles we've brought him over the years — how does one illustrate internal control? — have resulted in the magazine receiving numerous accolades. Joe's contributions have helped make the magazine the professional publication it is today. Joe, thank you, and we wish you well.</p>Anne Millage
The Analytics Journey<p>If someone asked you if your internal audit department has an analytics program, would you hesitate and answer something like: "We hired an analytics person," "We bought a tool," or "We have a few tests running"? These are a team member, a tool, even some results, but none of them is a program. They are no more of a program than having someone responsible for quality would indicate that the department has a quality program.</p><p>People, tools, and results are elements of a program; they show evidence that <em>something</em> is there. But at its core, a program is a new work function, and a work function is defined by whether you know what you are doing, and why and how you do it. When internal auditors understand the function, they can explain it to others, get their support, and if needed, divide work effectively. These are the hallmarks of an analytics program.</p><h2>Five Elements</h2><p>To explain analytics programs, it is useful to break them into five elements:</p><ol><li> <strong>Program approach/fit</strong> — Why internal audit wants/has the program, and how the program fits with the mission of the organization and internal audit, in particular. </li><li> <strong>Test development process</strong> — The workflow, tools, and templates auditors use to approach these projects, get support, and track and obtain consistent results.</li><li> <strong>Development roadmap</strong> — What internal audit will tackle first, what it will tackle next, and when it should re-evaluate the pipeline order. Also, why did the audit function choose this order?</li><li> <strong>Analytic tools and techniques</strong> — These tools and techniques start with access to data sources, and extend to analysis and communication approaches needed to process and explore the data to get the answers auditors seek. 
Beyond a tool, internal audit needs a toolbox.</li><li> <strong>Key contacts</strong> — Members of the audit team and audit clients, as well as stakeholders who have an interest in the results and understand the process. These contacts give auditors access to data and advice about how to interpret it. These individuals also can help with research and following up on findings. Key contacts will likely change with each new project under the program.</li></ol><table class="ms-rteTable-default" width="100%" cellspacing="0"><tbody><tr><td class="ms-rteTable-default" style="width:100%;"> <p> <strong>Further Reading </strong></p><p>Benchmarking helps ideas flourish into new, and more powerful, concepts. When it comes to analytics programs, there are some people who know what they are talking about. Internal auditors should look at what they have to say:</p><ul><li>Manuel Coello, ACL, <a href="" target="_blank"><span class="ms-rteForeColor-10">Top 100 Tests</span></a> (PDF) </li><li>Deloitte, <a href="" target="_blank"><span class="ms-rteForeColor-10">Continuous Monitoring and Continuous Auditing: From Idea to Implementation</span></a> (PDF)</li><li>Brent Dykes, <a href="" target="_blank"><span class="ms-rteForeColor-10">The Two Guiding Principles for Data Quality in Digital Analytics</span></a></li><li>Daniel Haight, <a href="" target="_blank"><span class="ms-rteForeColor-10">The Five Faces of the Analytics Dream Team</span></a> </li><li>The IIA, <a href="" target="_blank"><span class="ms-rteForeColor-10">Data Analytics Resource Exchange</span></a></li><li>Wolters Kluwer (TeamMate), <a href="" target="_blank"><span class="ms-rteForeColor-10">Audit Technology Insights: Technology Champions, Key Strategic Enablers</span></a> (registration required) </li></ul></td></tr></tbody></table><p>Together 
these five elements articulate what internal audit is doing with analytics, why it does this, how it does this, and who is helping auditors make it happen. The elements enable auditors to talk about the program and rally support for it. They also define success and help internal audit track progress toward it. For example, internal audit could report that:</p><p>"The analytics program in our organization is embedded in the fraud and forensics unit of internal audit, where we use it to develop detective controls to support fraud-fighting efforts. We use Alteryx and Microsoft Power BI to combine simple red-flag tests to risk-rank transactions for human review. The red flags and their interpretation are developed in collaboration with the data-owning business unit and/or benchmarked against other audit shops. So far, we have tackled payroll and the procurement-to-payment cycle, and now we are scoping tests in travel and entertainment."</p><p>Note that although this is a data analytics program, access to data and business understanding are not called out as independent program elements. Instead, they are embedded in internal audit's network of key contacts. They are key elements of a project, but are not elements of the program that houses those projects.</p><h2>Is the Program Working?</h2><p>Internal audit has a successful analytics program when the department can transition it among team members without having to start again from scratch. This handover is possible because team members know what is done, why it is done, and in general terms, how to replicate it to address new problems. When internal audit reaches this stage of development, it will get extra credit if team members can recite the vision and explain what should be the next project on the horizon.</p><p>An analytics program establishes the support needed to achieve its aims: from tools and techniques, to relationships that will bring ideas on what to test and access to the data needed to do it. 
These relationships can support understanding of what the data is saying, and help in resolving the issues the program reveals. In the end, an analytics program is in place when it becomes "the way we do this here," and is no longer dependent on one key person to keep it going. </p>Francisco Aristiguieta
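The red-flag risk-ranking described in the example report above can be sketched in a few lines of code. This is a minimal illustration only, standing in for the Alteryx and Microsoft Power BI workflow the article mentions; all flags, weights, and transaction fields here are hypothetical.

```python
# A minimal sketch of risk-ranking transactions with simple red-flag tests.
# All flags, weights, and transaction fields are hypothetical examples.

WEEKEND = {5, 6}  # weekday numbering: 0 = Monday ... 6 = Sunday

def flag_round_amount(txn):
    """Large, suspiciously round amounts (e.g., 5,000.00)."""
    return txn["amount"] >= 1000 and txn["amount"] % 100 == 0

def flag_weekend_posting(txn):
    """Transactions posted outside normal business days."""
    return txn["weekday"] in WEEKEND

def flag_just_below_limit(txn):
    """Amounts just under a 10,000 approval limit (possible splitting)."""
    return 9000 <= txn["amount"] < 10000

# Each test carries a weight; stronger indicators score higher.
RED_FLAGS = [
    (flag_round_amount, 1),
    (flag_weekend_posting, 1),
    (flag_just_below_limit, 3),
]

def risk_score(txn):
    """Combine the individual red flags into a single score."""
    return sum(weight for test, weight in RED_FLAGS if test(txn))

def rank_for_review(transactions):
    """Order transactions so the riskiest reach human reviewers first."""
    return sorted(transactions, key=risk_score, reverse=True)

transactions = [
    {"id": "T1", "amount": 9500.00, "weekday": 2},
    {"id": "T2", "amount": 120.37, "weekday": 1},
    {"id": "T3", "amount": 5000.00, "weekday": 6},
]

for txn in rank_for_review(transactions):
    print(txn["id"], risk_score(txn))
```

In a real program, the scored list would feed a dashboard or case-management queue, and the flags and their weights would be agreed on with the data-owning business unit, as the example report describes.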
People-centric Innovation<p>People seem to get lost in discussions about digital transformation, yet they are at the center of today's business innovations. Technology, customization, and sustainability are the three main drivers changing consumer behavior, according to a Euromonitor International report. Meanwhile, Gartner analysts say tomorrow's technology will be people-centric.</p><p>The ubiquity of smartphones has made technology indispensable in everyday life, Euromonitor's <a href="" target="_blank">2019 Megatrends: State of Play</a> notes. Customization builds on technologies such as artificial intelligence (AI) and 5G to reinvent how people shop. Sustainability reflects growing consumer activism about environmental impacts, the London-based research firm says. </p><p>"The change in consumer demands will contribute toward a rise in investments amongst emerging economies, driving businesses to develop innovative strategies to meet those demands," says Alison Angus, head of lifestyles at Euromonitor.</p><h2>More Than Meets the Eye</h2><p>Euromonitor predicts eight changes in consumer behavior will cause the greatest disruption across industries. By analyzing these megatrends, organizations can build long-term strategies to remain relevant, the report notes.</p><p> <strong>Connected Consumers</strong> Smartphones and other connected devices have given consumers multiple ways to interact with digital content and companies, but they also have created greater dependency on those devices. In this time-pressed environment, organizations will need to ensure that consumer interactions provide value, the report advises.</p><p> <strong>Healthy Living</strong> Consumers have a holistic interest in physical, mental, and spiritual wellness. 
The report points to growing interest this year in health-related technology services such as genetic testing and personalized nutrition analysis.</p><p> <strong>Ethical Living</strong> Almost three in 10 consumers say purchasing eco- or ethically conscious products makes them feel good, according to Euromonitor's 2019 Lifestyles Survey. Euromonitor predicts concern about the environment, sustainability, and labor practices will be one of the most significant market disruptors in the coming years.</p><p> <strong>Shopping Reinvented</strong> Connectivity gives consumers more information about products and services, so organizations need to be able to engage with them "anytime and anywhere," the report says. These consumers demand more personalization, budget-friendly experiences, and greater convenience.</p><p> <strong>Experience More</strong> Euromonitor points out that consumers are seeking experiences more than possessions. This is pushing businesses to emphasize experiences, experiment with marketing strategies, and become more consumer-centric.</p><p> <strong>Middle Class Retreat</strong> The middle class in developed nations is becoming more budget-conscious and selective about purchases, the report finds. Yet because the middle class remains vital to the consumer market, this megatrend may impact other megatrends.</p><p> <strong>Premiumization</strong> Consumers will spend more on the products and services that matter most to them, Euromonitor says. They are seeking more personalized and convenient service, and products that appeal to wellness and happiness.</p><p> <strong>Shifting Market Frontiers</strong> Euromonitor says economic power is shifting to fast-growing economies in Asia and Africa. Businesses investing in those regions will need strategies that are "sensitive to the environment and local communities."</p><h2>Building Smart Spaces</h2><p>Technology trends are revolving around people, too. 
At this month's Gartner IT Symposium in Orlando, <a href="" target="_blank">Gartner analysts identified 10 strategic technology trends</a> that may reach "a tipping point" within the next five years. All are based on the concept of creating "smart spaces" in which people and technology interact. </p><p>"Multiple elements — including people, processes, services, and things — come together in a smart space to create a more immersive, interactive, and automated experience," says David Cearley, a vice president at Gartner.</p><p> <strong>Hyperautomation</strong> Although this trend began with robotic process automation, it now involves combinations of machine learning and automation tools. In addition to tools, organizations need to understand all the steps of automation and how automation mechanisms can be coordinated, Gartner says.</p><p> <strong>Multiexperience</strong> Gartner predicts technologies such as conversational platforms, virtual reality, and augmented reality will shift the user experience by 2028. This will enable businesses to provide a multisensory experience for delivering "nuanced information."</p><p> <strong>Democratization of Expertise</strong> This trend is about giving people access to technical and business expertise without extensive training. For example, Gartner expects that data analytics, application development, and design tools will become more useful for people who don't have specialized knowledge. </p><p> <strong>Human Augmentation</strong> Once the stuff of science fiction, technology that augments physical and cognitive abilities will see growing use over the next 10 years, Gartner predicts. An example would be a wearable device that provides access to information and computer applications.</p><p> <strong>Transparency and Traceability</strong> Privacy regulations and emphasis on protecting personal data have made transparency and traceability key elements of data ethics. 
In building these capabilities, Gartner says organizations should focus on AI, ownership and control of personal data, and ethically aligned design.</p><p> <strong>The Empowered Edge</strong> Edge computing puts information processing, content collection, and delivery closer to the sources and consumers of that information. Although currently focused on the Internet of Things, Gartner expects edge computing to expand to applications such as robots and operational systems. </p><p> <strong>Distributed Cloud </strong>This technology distributes public cloud services to other locations, in contrast to the current centralized cloud model. </p><p> <strong>Autonomous Things</strong> These physical devices use AI to automate functions once performed by people. The concept is to develop technologies that interact "more naturally with their surroundings and with people," Gartner says.</p><p> <strong>Practical Blockchain</strong> New blockchain applications such as asset tracking and identity management can provide more transparency, lower costs, and settle transactions more quickly, Gartner says. </p><p> <strong>AI Security</strong> Increased use of AI for decision-making raises new security challenges, such as an increasing number of attack points. Gartner advises focusing on protecting AI-powered systems, using AI to enhance defenses, and looking out for AI-based attacks.</p>Tim McCollum
Top Challenges of Automating Audit<p>Organizations are rapidly adopting technologies such as cloud computing, robotic process automation (RPA), machine learning, blockchain, and cognitive computing to create tomorrow’s business in today’s market. Internal audit needs to transform its processes to keep pace with these changes, and IT audit processes are an excellent place to start this transformation.</p><p>Organizations that still perform most internal audit tasks manually complicate IT governance. In this manual model, auditors must address many compliance laws, policies, procedures, guidelines, and standards, along with their related control objectives. Moreover, internal audit manages audit process elements such as training, standards, risk, planning, documentation, interviews, and findings separately. </p><p>An automated internal audit process can enable the audit function to link, consolidate, and integrate the planning, performance, and response steps of the audit process into a holistic approach. The process should present audit recommendations in a way that is dynamically sustainable within the organization’s integrated action plans. </p><p>Since 2012, many standards and frameworks have changed their models, procedures, and guidelines to elaborate on the role of the IT governance process. Accordingly, internal audit should redesign its processes to coincide with new, streamlined IT processes and related roles. Meanwhile, IT audit specialists should understand the interoperability among the conceptual models of IT management, governance, standards, events, audits, and planning.</p><p>Transforming audit processes comes with challenges, though. Each of these challenges can be encapsulated in a pattern of a problem and a solution, which internal audit can prioritize based on its stakeholders’ needs.</p><h2>1. 
Syncing the IT Audit Process With IT Project Planning </h2><p><strong>Problem:</strong> IT audit teams need a way to link, tailor, and update audit findings and recommendations for ongoing IT projects and action plans. This will be necessary for auditors to follow up on findings and identify who is responsible for carrying out audit recommendations. </p><p><strong>Solution:</strong> An automated IT audit system would break IT audit work into two levels — findings’ recommendations and their final conditions — encompassing all preventive, detective, and corrective controls. The recommended actions reported in audit findings should be linked, integrated, and synced with their related IT project’s nondisclosure agreements, service-level agreements, and contracts. Then, the automated IT audit system should confirm that management addressed the recommendation.</p><h2>2. Letting IT Governance Direct the IT Audit Process</h2><p><strong>Problem:</strong> The role of the IT audit team in corporate governance is important because the function can help bridge the gap between the business and IT in organizations. IT governance is a key part of corporate governance, which directs and monitors the finance, quality, operations, and IT functions. Three of these functions — finance, quality, and operations — are being transformed by innovative, technology-based processes. Thus, the problems are how the board and executives will design and implement a corporate governance system and how the IT governance process will be automated. </p><p><strong>Solution:</strong> Automating the IT governance process should be comprehensive and agile. In other words, the IT governance, risk, and control mapping and cascading of goals and indicators among all levels of the organization must be user-friendly. 
To have an agile audit function, though, these maps and cascades should be tailored to the governance roles, such as the board, executives, internal auditors, chief information officer, and IT manager. </p><p>The internal audit function also should map key performance indicators, based on the control objectives of various regulations, standards, and frameworks, to its goals. These governance requirements include frameworks from The Committee of Sponsoring Organizations of the Treadway Commission and the U.S. National Institute of Standards and Technology (NIST), industry requirements such as the Payment Card Industry Data Security Standard, and regulations such as the European Union’s General Data Protection Regulation and the U.S. Sarbanes-Oxley Act of 2002.</p><h2>3. Transforming IT Audit Processes to a DevOps Review</h2><p><strong>Problem:</strong> Increasingly, nonfunctional requirements in areas such as cybersecurity, machine learning, and blockchain are becoming functional requirements. This change will have fundamental effects on the IT audit process. For example, IT auditors will need to assess cybersecurity or blockchain requirements during the organization’s development and operations (DevOps) process and change their audit program schedule to fit the DevOps schedule. This change can be a real challenge, especially for small and medium-sized audit teams that lack skills and experience with DevOps. </p><p><strong>Solution:</strong> Internal audit could solve this problem by moving to an “IT audit as an embedded DevOps review service” model. As a result, the review processes for IT governance, risk, and controls must be embedded into the DevOps life cycle. As part of this process, an automated system may provide access to metadata. For example, an auditor could set up a software robot to collect evidence about risks related to vendor lock-in, changes in vendors, and data conversion. 
Similarly, gathering cloud provider metadata through RPA can enable internal audit to respond to other cloud-based risks.</p><p>Generally, the business model must be clear, well-defined, mature, and well-documented before any kind of business process, especially IT audit, migrates to the cloud. The IT audit process itself also will streamline and mature in the cloud as a system. Thus, the cloud and robotic process automation can bring an iterative business model in which the IT audit process is transformed into a cognitive computing system. This system could result in more affordable audit costs and enable IT auditors to perform more engagements each year based on automated best practices.</p><h2>4. Mitigating IT Standards’ Side Effects </h2><p><strong>Problem:</strong> Applying some IT standards is analogous to a drug interfering with other drugs and having adverse effects on the body. Without a unified treatment plan, a prescription may not provide the greatest benefit with the fewest negative effects. Likewise, internal audit should ensure the side effects of IT standards do not cause problems such as increased compliance costs. Auditors must address two issues:</p><ul><li>Deciding which sections of IT-related standards such as COBIT 5, ISO 27001, and NIST Special Publication 800-30 best conform with the organization’s risk management framework.</li><li>Addressing conflicts and duplications among the various standards that might result in duplicate control objectives.</li></ul><p><strong>Solution:</strong> An automated IT audit system should use machine learning and recommendation systems to remove similar or contradictory control objectives across IT standards. This way, the audit system can control the duplications among all of the standards’ segments and use artificial intelligence to recommend an efficient and customized set of controls. 
</p><h2>Transforming the Auditor</h2><p>For automation to overcome these challenges, internal auditors must transform themselves, as well. This is an area in which IT audit specialists can help organizations find, prioritize, and invest in the right innovations to automate IT, internal audit, and cybersecurity processes. Moreover, by identifying ways to automate IT governance, risk, and controls, internal audit can help the IT function align its operations with the organization’s governance and transformation processes.  <br></p>Seyyed Mohsen Hashemi
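The duplicate-control problem in challenge 4 can be illustrated with a simple text-similarity pass over control objectives. This sketch uses plain token-overlap (Jaccard) similarity rather than the machine learning and recommendation systems the article envisions; the sample control objectives and the 0.6 threshold are hypothetical.

```python
# Sketch: flag near-duplicate control objectives drawn from different
# IT standards using token-overlap (Jaccard) similarity.
# The sample objectives and the 0.6 threshold are illustrative only.

def tokens(text):
    """Split an objective into a set of lowercase words."""
    return set(text.lower().replace(",", "").split())

def jaccard(a, b):
    """Share of words two objectives have in common (0.0 to 1.0)."""
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb)

controls = [
    ("COBIT",    "Review user access rights at regular intervals"),
    ("ISO27001", "User access rights are reviewed at regular intervals"),
    ("NIST",     "Encrypt sensitive data in transit and at rest"),
]

def find_duplicates(controls, threshold=0.6):
    """Return pairs of standards whose objectives overlap heavily."""
    pairs = []
    for i in range(len(controls)):
        for j in range(i + 1, len(controls)):
            if jaccard(controls[i][1], controls[j][1]) >= threshold:
                pairs.append((controls[i][0], controls[j][0]))
    return pairs

print(find_duplicates(controls))
```

A production system would replace the word-overlap measure with trained text embeddings and route flagged pairs to a human reviewer, but the basic control-deduplication workflow is the same.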
