Balancing Transformation and Security<p>The rush to digital transformation is creating a tension between cybersecurity and innovation. Six out of 10 corporate directors say they are willing to compromise cybersecurity to meet business objectives, according to the <a href="" target="_blank">2019–2020 National Association of Corporate Directors (NACD) Public Company Governance Survey</a>. </p><p>"Boards must work with their management teams to reconcile the need to transform themselves digitally with the need to ensure underlying data assets are properly secured," says NACD CEO Peter Gleason.</p><p>In short, security must be part of the design of digital transformation, a new EY report advises. Yet, only 36% of new technology initiatives include security from the start, according to the <a href="" target="_blank">EY Global Information Security Survey 2020</a> (PDF). </p><p>That shortcoming persists despite growing recognition that security incidents are increasing, notes the survey of cybersecurity leaders from 1,300 organizations. About six out of 10 respondents say their organization has had a material or significant cybersecurity incident in the past 12 months.</p><h2>A Transformation Roadblock</h2><p>The problems are multifold, the EY survey finds. For starters, only 7% say their organization sees cybersecurity as enabling innovation. In most organizations, cybersecurity is considered the opposite — compliance-driven and risk-averse. Just 9% of new cybersecurity spending is for new business initiatives, with greater focus on defensive priorities. </p><p>That approach isn't sustainable, says Kris Lovejoy, EY global advisory cybersecurity leader. Instead, organizations need a "security by design" culture that can "bridge the divide between the security function and the C-suite," she says. 
In such a culture, the chief information security officer (CISO) must become the agent of transformation, "instead of the stereotypical roadblock."</p><p>To get there, cybersecurity functions will need to win over mistrustful business units. EY reports that 59% of respondents say their function's relationship with business units is neutral, mistrustful, or nonexistent. That percentage rises for key innovators such as the research and development function and marketing.</p><p>To shift the culture to security by design, EY recommends that organizations:</p><ul><li>Establish cybersecurity as a "key value enabler" of digital transformation initiatives, beginning at the planning stage.</li><li>Build trust relationships between cybersecurity and every business function.</li><li>Implement governance structures that support a "risk-centric view" in board and executive reporting.</li><li>Focus on board engagement by using understandable terms to communicate about cyber risks.</li><li>Evaluate the cybersecurity function's strengths and weaknesses.</li></ul><p> </p><h2>The Board and Cyber Risk</h2><p>Acting on those recommendations may be challenging, though, particularly where the board is involved. About half of respondents say their board doesn't understand cyber risk. EY's recent Global Board Risk Survey reports that half of boards are only somewhat confident in their organization's cybersecurity and just 54% discuss it regularly.</p><p>Board directors responding to the 2019–2020 NACD Public Company Governance survey have a higher assessment of their cybersecurity understanding. Nearly 80% say their board's understanding of cyber risk has improved significantly over the past two years, according to the survey of 500 directors, released in December. </p><p>Two-thirds say their board is confident that the organization can respond effectively to a materially significant incident. 
And almost two-thirds say they are confident in the board's ability to provide effective oversight of cyber risk.</p><h2>Oversight Principles</h2><p>The NACD has teamed with the Internet Security Alliance (ISA) to issue new board guidance, <a href="" target="_blank">Cyber-risk Oversight 2020</a> (PDF). This third edition of the NACD's handbook on cyber risk describes five guiding principles for addressing those risks:</p><ul><li><em>Cybersecurity as a strategic risk — rather than an IT risk.</em> Technology and data are "center stage as critical drivers of strategy," the handbook notes.</li><li><em>Legal and disclosure implications.</em> Directors need to know the legal implications of cyber risks, including what they must publicly disclose and the potential for lawsuits.</li><li><em>Board oversight structure and access to expertise.</em> Boards need adequate expertise about cybersecurity and should discuss cyber-risk management regularly.</li><li><em>An enterprise framework for managing cyber risk.</em> Directors should expect management to put in place an enterprisewide cyber-risk management framework.</li><li><em>Cybersecurity measurement and reporting.</em> The board and management should identify and quantify financial exposure to cyber risk, and determine which risks to accept, mitigate, or transfer.</li></ul><p><br></p><p>In addition to the principles, the NACD handbook includes 13 tools for board directors, which map back to individual principles. These tools include questions directors should ask about cybersecurity, a self-assessment of the board's cyber-risk oversight effectiveness, and an overview of insider threats and third-party risks. Other tools cover incident response, cybersecurity metrics, due diligence for mergers and acquisitions, dashboards, and U.S. 
government resources.</p><h2>Set the Tone</h2><p>"Digitalization and digital transformation have enhanced exposure to cyber risk across the enterprise, making cybersecurity a strategic risk," says Larry Clinton, president of ISA and lead author of the NACD handbook. He says boards must help set "a tone for security." Increasingly, boards, management, cybersecurity functions, and business units must all ensure that initiatives address both the risks and opportunities. </p>Tim McCollum
Auditing the Bots<p>Imagine an internal auditor who is confronted with a disastrous robotic process automation (RPA) implementation. Her company spent millions of dollars to implement 50 robots, or “bots,” but the project had yielded only a single functioning bot. Making matters worse, hackers compromised that bot and drained the company’s bank account with a succession of undetected $0.99 electronic transactions. Could the auditor have prevented these things from happening?</p><p>RPA can potentially reduce costs, improve accuracy and productivity, and eliminate tedious processes. It works by building software robots that can mimic the actions of a person on a computer, automating otherwise manual processes. </p><p>Bots are highly fragile and are not intelligent. Unlike artificial intelligence, they can only do exactly what they are told to do. And access to the technology is growing, with Microsoft recently adding RPA functionality to Microsoft Office, putting it on millions of corporate desktops.</p><p>As with any new technology, internal auditors must be aware of RPA’s risks. The potential for a bot to make a mistake multiple times in seconds creates unique risks to assess.</p><h2>Validate Security Risks</h2><p>Assessing RPA’s risks must begin with considering access security to the bot. RPA providers offer both on-premises and cloud-based solutions, with all the risks typical of these approaches. </p><p>Most RPA solutions do not house any “at rest” data, reducing the risk that sensitive data will be captured if the bot is hacked. Instead, bots operate on an organization’s applications using credentials just as a human user would. That means a bot can be hacked and coded to perform fraudulent, unethical, or hostile actions. </p><p>Examining the security around the RPA tool is critical, including access restrictions. 
Auditors should understand the security around each of the applications that the bot accesses and the controls around data that the bot “writes.” </p><p>As internal auditors begin to operate within bot-enabled environments, they should consider whether the bots are achieving their business purposes. Internal audit should be a partner, along with information security, in all RPA implementations. Their independent advice should improve clarity around the business objectives for each bot development. Business analysts should establish and track clear, objective performance metrics. Auditors should provide assurance about whether the bots are fulfilling their missions and meeting compliance objectives.</p><p>An additional challenge is disagreement about segregation of duties issues around bots. Because bots lack a sense of doing “wrong,” some auditors say programming them with incompatible duties does not violate segregation of duties. Others say such programming introduces additional fraud risk because a person will have access to the bot’s program while in the production environment. Each organization should address this issue within its risk management framework and culture.</p><h2>Audit the Development Life Cycle</h2><p>Internal audit should provide assurance over the organization’s RPA development efforts. Development of each bot should follow the organization’s system development life cycle (SDLC).</p><p><strong>System Changes</strong> Auditors should consider both the “upstream” systems that the bot pulls data from as well as the “downstream” systems that the bot writes data to. That is because bots break easily in dynamic environments, requiring constant reprogramming and sometimes complete redevelopment. Any change in a relevant system can create an irreconcilable error in the bot’s performance. 
Auditors should ensure that the SDLC considers these issues.</p><p><strong>Bot Access</strong> A best practice is to have one person create and test the bot in a “sandbox” — a controlled space outside the production environment. From there, another person moves the bot into production, while a third person manages its ongoing activities. </p><p><strong>Governance</strong> Internal audit should be concerned with both ownership and governance of all active bots, looking for potential conflicts within the governance structure. Some organizations house the RPA program within IT, others at the business-unit level, and still others within a shared services area. Additionally, many organizations manage bot governance through centers of excellence that develop and manage the overall RPA strategy.</p><p><strong>Bot Activity</strong> Most RPA solutions offer audit logs to facilitate review of the transactions each user conducts during a logon session. Auditors should examine RPA user profiles to identify segregation of duties conflicts, excessive access levels, access provisioned to terminated employees, and activity conducted by terminated bots. Additional reviews of the audit logs can reveal inappropriate activities, including attempts to repurpose the bots while in production.</p><p>A common practice is to provide each bot with a set of system credentials to access the enterprise resource planning system. In reviewing audit logs for the organization’s non-RPA systems, auditors should look for irregular bot activities, as well as interactions with human credentials that might create a segregation-of-duties issue. Poor governance over RPA can allow a single person to use a bot to commit fraud.</p><h2>Managing Organizational Change</h2><p>In the story about the internal auditor faced with a poor RPA rollout, the culprit was the company’s culture. Employees had been reading articles about bots taking their jobs and resisted the implementation. 
What the company did not do well was communicate the RPA program’s objectives and achieve cultural buy-in.</p><p>A consistent theme of successful RPA implementations is beginning by automating a single, high-impact, high-visibility process. A great candidate is a highly manual, tedious process that one or more employees dread doing. Once this process is automated, it frees employees from a mundane task, enabling them to add greater value to the organization. </p><p>A further consideration for internal audit is assessing the capabilities and competencies of the internal and external personnel tasked with developing and managing the company’s RPA program. Has each of these people been trained in RPA? Are roles adequately segregated, documented, and understood? Auditors should review the credentialed training programs offered by RPA vendors and seek training themselves.</p><h2>Improving the Odds</h2><p>Internal auditors should be frequent advisors throughout RPA initiatives. To be effective, the audit function must establish an appropriate baseline of controls around bots and include RPA in its audit plan. Moreover, auditors can provide independent advice on prioritizing the best automation opportunities. In this way, internal audit can improve cultural acceptance and improve the odds that RPA will benefit the business. </p>Chris Denver
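The audit-log checks described under "Bot Activity" lend themselves to simple scripting. The sketch below is purely illustrative, not any RPA vendor's API: the log field names, bot IDs, and termination lists are all hypothetical assumptions, and a real review would pull these from the RPA platform's exported logs and HR records.

```python
# Hypothetical sketch of two "Bot Activity" red-flag checks:
# activity by terminated (decommissioned) bots, and log entries tied to
# terminated employees' credentials. All field names and IDs are invented.

from datetime import date

audit_log = [
    {"bot_id": "BOT-01", "user": "jdoe",   "action": "invoice.post",  "date": date(2020, 1, 6)},
    {"bot_id": "BOT-07", "user": "asmith", "action": "vendor.update", "date": date(2020, 1, 8)},
]

terminated_bots = {"BOT-07"}        # per the RPA inventory / center of excellence
terminated_employees = {"asmith"}   # per HR separation records

def flag_exceptions(log):
    """Return (reason, entry) pairs that warrant auditor follow-up."""
    exceptions = []
    for entry in log:
        if entry["bot_id"] in terminated_bots:
            exceptions.append(("terminated bot active", entry))
        if entry["user"] in terminated_employees:
            exceptions.append(("terminated employee access", entry))
    return exceptions

for reason, entry in flag_exceptions(audit_log):
    print(reason, "->", entry["bot_id"], entry["user"], entry["action"])
```

In practice, auditors would run checks like these against the full log export, and extend them with the article's other tests, such as excessive access levels or bot and human credentials used interchangeably.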
The Analytics Journey: Finding the Right Direction<p>Does your department know what it wants to achieve with analytics? This installment of <a href="/2019/Pages/The-Analytics-Journey.aspx">"The Analytics Journey"</a> series looks to stimulate internal auditors' curiosity about their department's analytics program approach. Auditors need to establish why the department wants a program, how the program supports its mission, and how it fits into the organization. </p><p>After all, if auditors don't know what they came to do, how can they do it? </p><h2>What Do You Want?</h2><p>The answer to this question will set the stage for everything else internal audit does with the analytics program. It will influence resource allocation, key milestones and time lines, expected results, and even what auditors are doing and when. Although the answer may be surprising, it will address exactly how the program fits into the organization. </p><p>These examples demonstrate a range of potential analytics program objectives. 
Note how some of these objectives are closely related while being very different from most of the others:</p><ul><li>The program will augment the capacity of the internal audit staff.</li><li>Analytics will drive consistency across projects in the organization.</li><li>The program will help internal audit identify and recommend process improvements to its clients.</li><li>The program will be perceived as a resource for developing the organization's analytic efforts.</li><li>The program will help internal audit review key financial transactions for signs of fraud, misuse, or abuse.</li><li>The program will focus on detective controls to support anti-fraud efforts.</li><li>The program will inform internal audit's risk assessment process and help the department prioritize future projects.</li><li>Analytics will monitor performance across the organization.</li><li>The program will help internal audit identify changes in performance by a set of specific processes.</li><li>The program will predict changes in performance by specific processes before they affect production.</li></ul><p> <br> </p><p>Reviewing any two of these objective statements reveals how small variations in the program's intent would have large impacts on its design and goals. That intent frames who should be involved in the program, what results should be expected, when they should be expected, and how they will be derived. This also reveals why analytic programs from different organizations can be successful while having different designs and obtaining different results. The program's objectives and results are unique to what the department wants from it, and they will evolve over time.</p><h2>Can You Have Everything?</h2><p>What happens when internal audit wants its analytics program to meet all of these objectives? In that case, auditors should keep in mind that internal audit's objectives can evolve and the department can run parallel efforts with different emphases. 
However, as with any job function, the "true North" has to be clear and well-aligned between those doing the work and those asking for the work to happen.</p><p>What if internal audit wants it all anyway? Then internal audit must choose to start somewhere, keeping in mind the Cheshire Cat's advice to Alice when she asked for directions: If you don't know where you are going, then all roads are just as good — or just as bad. </p><p>That said, if internal audit takes a couple of steps on each road and then changes its mind, it will spend money and time to essentially remain where it started. Hopefully, the department will learn from each road taken to help it assess its eventual direction, as long as "exploring" is the program's intent.</p><p>To avoid taking the wrong road, internal audit should consider what success would look like before it commits resources to the program. That clarity will give the department a better idea of how to design and pursue the program, including who to make responsible, how much time and budget to allocate to the program, who else should be involved, and where the effort will be housed.</p><h2>How Do You Know When the Program Is Working? </h2><p>A good indication that the program is working is when the department is able to assess whether a proposed new analytic would be a good fit for the program. By understanding how the program fits within the organization and what the department is trying to achieve, internal audit can evaluate whether an idea is worth pursuing, should be referred to someone else, or would be best saved for later.</p><p>The program approach is the most important step on the analytics journey. Once internal audit understands what it wants from its analytics program, it has a real chance of tracing its progress and achieving its objectives. </p>Francisco Aristiguieta
Bringing Clarity to the Foggy World of AI<p>In unveiling the U.S. government’s updated National Artificial Intelligence (AI) Research and Development Strategic Plan last June, U.S. Chief Technology Officer Michael Kratsios framed the reality many organizations face with AI. “The landscape for AI research and development (R&D) is becoming increasingly complex,” Kratsios said, noting the rapid advances in AI and growth in AI investments by companies, governments, and universities. “The federal government must therefore continually reevaluate its priorities for AI R&D investments to ensure that investments continue to advance the cutting edge of the field and are not duplicative of industry investments.”</p><p>Organizations are indeed investing in AI. About one-third of companies in Deloitte’s most recent State of AI in the Enterprise survey said they were spending $5 million or more on AI technologies in fiscal year 2018. Moreover, 90% expected their level of investment to grow in 2019. These investments are occurring across all facets of business, from production and supply chain to security, finance, marketing, customer service, and internal audit. </p><p>With so much money on the line, organizations must invest the right resources in the right places to capitalize on AI. But with the technology evolving rapidly, it’s not clear how they can accurately assess AI-related risks and ensure that projects are consistent with the organization’s mission, culture, and technology strategy. In this sometimes-foggy environment, internal audit can be a valuable ally by focusing on whether the organization has a sound AI strategy and the robust governance needed to execute that strategy.<br></p><h3>Defining AI</h3><p>The definition of <em>artificial intelligence</em> is somewhat ambiguous. 
There is not universal agreement about what AI is and what types of technologies should be considered AI, so it’s not always clear which technologies should be in scope for internal audits.</p><p>Technologies that fall into the realm of AI include deep learning, machine learning, image recognition, natural language processing, cognitive computing, intelligence amplification, cognitive augmentation, machine augmented intelligence, and augmented intelligence. Additionally, some people include robotic process automation (RPA) under AI because of its ability to execute complex algorithms. However, RPA is not AI because bot functions must adhere strictly to predetermined rules.</p><p>When considering which technologies fall under the umbrella of AI for internal audit purposes, it is important to understand how the organization defines it. For that reason, ISACA’s Auditing Artificial Intelligence guide recommends auditors communicate proactively with stakeholders to answer the question, “What does the organization mean when it says ‘AI?’” This alignment can help auditors manage stakeholder expectations about the audit process for AI. Moreover, it may tell auditors whether the organization’s definition of AI is broad enough — or narrow enough — for it to perceive risk in the marketplace.  </p><h3>Start With Strategy</h3><p>However the organization defines AI, most guidance agrees that internal audit should focus its audits on the organization’s AI strategy and governance. Without a clearly articulated and regularly reviewed strategy, investments in AI capability will yield disappointing results. Worse, they could result in financial and reputational damage to the organization. Internal audit should confirm the existence of a documented AI strategy and assess its strength based on these considerations:</p><ul><li><em>Does the strategy clearly express the intended result of AI activities? 
</em>The strategy should describe a future state for the business and how AI is expected to help reach it, as opposed to AI being viewed as an end unto itself.</li><li><em>Was it developed collaboratively between business and technology leaders?</em> To provide value, AI endeavors must align business needs and technological capability. Auditors should verify whether a diverse group of stakeholders are providing input.</li><li><em>Is it consistent and compatible with the organization’s mission, values, and culture?</em> With expanding use of AI comes new ethical concerns such as data privacy. Auditors should look for evidence that the organization has considered whether planned AI uses are consistent with what the organization should be doing. </li><li><em>Does it consider the supporting competencies needed to leverage AI?</em> Successfully implementing AI requires support and expertise around IT, data governance, cybersecurity, and more. These areas should be factored into the organization’s AI strategy. </li><li><em>Is it adaptable?</em> While the cadence will vary by organization, key stakeholders should review the AI strategy periodically to confirm its viability and to ensure it accounts for emerging threats and opportunities.</li></ul><p><br>Organizations need their internal audit departments to ask these types of questions, not just once, but repeatedly. Research shows that organizations want their internal audit departments to be more forward-looking and provide more value in assessing strategic risks. Regarding supporting competencies, board members and C-level leaders are most concerned that their existing operations and infrastructure cannot adjust to meet performance expectations among “born digital” competitors, according to Protiviti’s Executive Perspectives on Top Risks 2019 report. As such, internal auditors can provide assurance that the organization’s AI strategy is appropriate and can be carried out realistically. 
</p><h3>Pay Attention to Data Governance</h3><p>As with any other major system, organizations need to establish governance structures for AI initiatives to ensure there is appropriate control and accountability. Such structures can help the organization determine whether AI projects are performing as expected and accomplishing their objectives. The problem is that it’s not yet clear what AI governance looks like. </p><p>According to a 2018 Internal Audit Foundation report, Artificial Intelligence: The Data Below, “There is not a template to follow to manage AI governance; the playbook has yet to be written.” Even so, the report advises internal auditors to assess the care business leaders have taken “to develop a robust governance structure in support of these applications.” That exploration should start with the data. </p><p>Big data forms the foundation of AI capability, so internal audit should pay special attention to the organization’s data governance structure. Auditors should understand how the organization ensures that its data infrastructure has the capacity to accommodate the size and complexity of AI activity set forth in the AI strategy. At the same time, auditors should review how the organization manages risks to data quality and consistency, including controls around data collection, access rights, retention, taxonomy (naming), and editing and processing rules. They also should consider security, cyber resiliency, and business continuity, and assess the organization’s preparedness to handle threats to the accuracy, completeness, and availability of data.</p><p>AI value and performance also depend on the quality and accuracy of the algorithms that define the processes that AI performs on big data. Documented methodologies for algorithm development, as well as quality controls, must be in place to ensure these algorithms are written correctly, are free from bias, and use data appropriately. 
Moreover, internal audit should understand how the organization validates AI system decisions and evaluate whether the organization could defend those decisions.</p><p>In addition to governance around data and AI algorithms, internal audit should examine governance structures to determine whether:</p><ul><li>Accountability, responsibility, and oversight are clearly established.</li><li>Policies and procedures are documented and are being followed.</li><li>Those with AI responsibilities have the necessary skills and expertise.</li><li>AI activities and related decisions and actions are consistent with the organization’s values, and ethical, social, and legal responsibilities.</li><li>Third-party risk management procedures are being performed around any vendors.</li></ul><h3>AI Gains Momentum</h3><p>AI poses challenges that make auditing it daunting for many internal audit functions. To audit the technology effectively, internal audit functions must have or acquire sufficient resources, knowledge, and skills. That doesn’t mean they need expert-level knowledge on staff, though. </p><p>Obtaining these capabilities has proved to be challenging. According to The IIA’s 2018 North American Pulse of Internal Audit, 78% of respondent chief audit executives indicated it was very difficult to recruit individuals with data mining and analytics skills. Nevertheless, the internal audit function should work to steadily increase its AI expertise through training and talent recruitment.</p><p>However, success in auditing AI does not depend directly on technical expertise. Instead, auditors must be able to assess strategy, governance, risk, and process quality — all things they can bring from an independent, cross-departmental point of view. </p><p>The sooner internal auditors do this, the better, because AI, in all its various forms, is gaining momentum. Soon, it will be difficult to find an area of the business that does not leverage it in some way. 
And although the constantly evolving technologies and risks can be dizzying, internal audit can provide sound assurance that the organization is pointing its AI investments in the right direction. <br></p>Kevin Alvero
Privacy Law Puts California Consumers in Control<p>Maybe you've seen the "don't sell my data" buttons popping up on websites lately. If you live in California, you may have noticed similar signs in retail stores. They are harbingers of businesses scrambling to comply with California's new data privacy law.</p><p>The California Consumer Privacy Act (CCPA) went into effect on Jan. 1, and already it's become a mad rush. The state will start enforcing the law on July 1, but there are no rules yet. And initial compliance costs could top $55 billion, according to an economic assessment compiled for California's attorney general by Berkeley Economic Advising and Research LLC (see "CCPA and Data Privacy Resources" below right).</p><p>The CCPA is a response to a litany of data privacy breaches and concerns over how Facebook, Google, and online marketers are compiling, using, and selling consumer data. In a recent <a href="" target="_blank">Pew Research Center study</a>, 81% of respondents say they have no control over the personal data companies collect on them.</p><p>The CCPA is about giving consumers that control. Under the law, California residents have the right to:</p><ul><li>Know how organizations use their data.</li><li>Request that their data be deleted.</li><li>Opt out of having their data collected, shared, and sold.</li></ul><p> <br> </p><p>"Americans should not have to give up their digital privacy to live and thrive in this digital age," California Attorney General Xavier Becerra said in October at <a href="" target="_blank">a press conference</a> announcing draft regulations for the CCPA.</p><h2>Doing Business With California Residents</h2><p>The CCPA follows on the European Union's (EU's) General Data Protection Regulation (GDPR), in effect since May 2018. 
Just as GDPR covers all EU residents, the CCPA applies to any organization that does business with California residents, even if the organization is located out of state. Organizations are subject to the law if they meet one of three conditions:</p><ul><li>Generate more than $25 million in annual revenue.</li><li>Buy, sell, or share the personal information of 50,000 or more California consumers, households, or devices.</li><li>Derive at least half of their revenue from selling consumers' personal information.</li></ul><p> <br> </p><p>Although GDPR and the CCPA are similar, one area of difference is penalties. Under GDPR, regulators can fine organizations up to 4% of annual revenue for data privacy violations. With the CCPA, fines are $2,500 per nonintentional violation and $7,500 per intentional violation. </p><p>Because each person affected counts as a violation, those amounts can multiply quickly when hundreds of thousands of California residents' data may be involved. Further, the CCPA allows individuals to sue for damages if their data is disclosed.</p><h2>Data Collectors Are Most at Risk</h2><table class="ms-rteTable-default" width="100%" cellspacing="0"><tbody><tr><td class="ms-rteTable-default" style="width:100%;"><p> <strong>CCPA and Data Privacy Resources</strong> </p><p><em>CCPA</em><br></p><p>California Attorney General's Office, <a href="" target="_blank"> <span class="ms-rteThemeForeColor-1-0">Standardized Regulatory Assessment: California Consumer Privacy Act of 2018 Regulations</span></a> (PDF). 
</p><p>California Attorney General's Office, <a href="" target="_blank"> <span class="ms-rteThemeForeColor-1-0">California Consumer Privacy Act Regulations: Proposed Text of Regulations</span></a> (PDF).</p><p>BakerHostetler LLP and Practical Law, <a href="" target="_blank"> <span class="ms-rteThemeForeColor-1-0">CCPA and GDPR Comparison Chart</span></a> (PDF).</p><p>International Association of Privacy Professionals, <a href="" target="_blank"> <span class="ms-rteThemeForeColor-1-0">U.S. State Comprehensive Privacy Law Comparison</span></a>.</p><p>TrustArc, <a href="" target="_blank"> <span class="ms-rteThemeForeColor-1-0">Essential Guide to the CCPA</span></a> (PDF).</p><p><em>Data Privacy</em><br></p><p><em>IIA Bulletin</em>, <a href="" target="_blank"><span class="ms-rteThemeForeColor-1-0">International Data Privacy Day</span></a> (PDF).<br></p><p>U.S. National Institute of Standards and Technology, <a href="" target="_blank"> <span class="ms-rteThemeForeColor-1-0">NIST Privacy Framework: A Tool for Improving Privacy Through Enterprise Risk Management</span></a> (PDF). </p></td></tr></tbody></table><p>Organizations most likely to be impacted by the CCPA are those that collect and sell massive amounts of consumer data. At the top of that list are the big digital marketing and advertising companies. </p><p>Because consumers have to opt out of such collection under the CCPA, the law may not impact these companies' practices as much as GDPR did, according to Lauren Fisher, principal analyst at eMarketer in New York. That's because GDPR required consumers to opt in to data collection. "Marketers failing to uphold practices that make consumers feel comfortable with sharing data are likely to feel the effects," she explained in a <a href="" target="_blank">July 2019 eMarketer article</a>.</p><p>But it's not just the big marketers. Any company with lots of data on consumers — big companies, internet companies, and online retailers especially — is at risk. 
And the more consumer records they have, the bigger the risk, says Chris Babel, CEO of San Francisco-based TrustArc, which provides data privacy compliance technology. </p><p>Babel says many large global companies have to comply with GDPR, so they've had a head start on compliance, despite the differences in the two laws. But many big companies with lots of consumer data weren't impacted by GDPR because they don't do business outside the U.S. Take utility companies with their huge customer bases, for example. "They don't have more risks, but they have less time," to prepare for CCPA compliance, Babel says.</p><h2>Viewing Data From a Privacy Perspective</h2><p>The CCPA "requires businesses to fundamentally understand their data on a different level than they've ever had to before," Babel says. Typically, businesses have looked at data from a security standpoint, he explains. Their focus is on the point where the data is collected, whether it's encrypted, and where it's stored. </p><p>Babel says organizations need to look at data from a privacy perspective that considers what the data includes, how it is used, and where it flows — both within and beyond the business. That's far more complicated.</p><p>For starters, different businesses store data in different ways. One company might have lots of data but store it in a single database. Another company could have fewer records but spread them across hundreds of databases, Babel explains.</p><p>The next concern is what happens when a consumer requests to see his or her data, or asks the business to delete or stop selling it. According to the draft rules, organizations have 45 days to comply with such requests. During that time, the business must validate that the person is who he or she claims to be, locate the person's data, and comply with the request.</p><p>But that's just the data that resides within the organization. 
Babel says the CCPA presents substantial vendor management consequences because organizations are responsible for all the data they sell or share with other businesses. That means an organization responding to a consumer request also must contact any other organization with which it shared or sold that information so they can comply, as well.</p><p>"When you start peeling that back, layer by layer, it gets more complicated than most companies think," Babel says.</p><h2>The Drumbeat of Regulation</h2><p>But peel back the layers they must, because the drumbeat for consumer privacy protection doesn't stop with California. A similar law went into effect in Nevada in October 2019. Ten other U.S. states are currently considering consumer data privacy laws, according to the International Association of Privacy Professionals.</p><p>California's law isn't finished rolling out yet. In addition to finalizing new rules — the public comment period ended in December — there are business-to-business and employee data aspects that take effect in January 2021.</p><p>And just because California's rules aren't final, it doesn't mean organizations are off the hook. Attorney General Becerra <a href="" target="_blank">told Reuters</a> this month he will make an example of businesses that don't make efforts to comply, "to show that if you don't do it the right way, this is what is going to happen to you." </p>Tim McCollum
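<p>The CCPA penalty arithmetic described above — each affected consumer counts as a separate violation — is easy to illustrate. The statutory amounts come from the article; the breach scenario below is invented for illustration:</p>

```python
# Hypothetical CCPA exposure estimate. Per the article, fines are
# $2,500 per nonintentional violation and $7,500 per intentional one,
# and each affected consumer counts as a violation.
FINE_NONINTENTIONAL = 2_500
FINE_INTENTIONAL = 7_500

def ccpa_exposure(consumers_affected: int, intentional: bool) -> int:
    """Upper-bound statutory exposure for a single incident."""
    per_violation = FINE_INTENTIONAL if intentional else FINE_NONINTENTIONAL
    return consumers_affected * per_violation

# Even a modest breach multiplies quickly:
print(ccpa_exposure(100_000, intentional=False))  # prints 250000000
```

<p>A nonintentional incident touching 100,000 California residents' records already implies a quarter-billion-dollar ceiling, which is why the article notes the amounts "can multiply quickly."</p>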
The Hidden Risks of the Cloud<p>Most large organizations are using Microsoft’s Azure cloud computing services in one form or another. Indeed, Microsoft claims more than 95% of Fortune 500 companies use Azure. Among other things, Azure supports data analytics, data warehousing, DevOps, storage, virtual desktops, and fully managed infrastructures. Additionally, organizations can integrate the services within Azure into a corporate network in the same way traditional data centers are connected. </p><p>Yet, despite Azure’s pervasiveness, many organizations don’t fully understand the effects the platform may have on daily operations and personnel, or the potential security implications. Azure’s services can introduce security and data privacy risks such as inappropriate administrative access, less clarity on role-based access permissions, or inappropriate remote access. For instance, in May 2019, Azure suffered a global outage caused by a domain name system configuration issue, according to <a href="" rel="nofollow"></a>, which covers cloud technology.</p><p>Internal audit can assist the organization in identifying the risks introduced with cloud computing. Partnering with the organization’s business units, understanding the technologies, and providing a systematic approach can help to remedy those risks. </p><h2>First Steps</h2><p>When auditing Azure, internal auditors should begin by obtaining an inventory of all Azure services in use by the organization. If an inventory does not exist, internal audit can help build one. Auditors can use native reports within Azure or custom scripts to export inventory data from the system.</p><p>Next, auditors should understand how these services are implemented, as well as IT’s control environment or processes related to cloud services. Are there documented procedures for administering the environment? 
Is formal change management used in all aspects of the cloud such as networking, storage, maintenance, and provisioning? </p><p>For example, with database platform as a service, auditors should understand the database platforms and how they are configured and secured. The organization may set up its own servers in an Azure virtual environment or use Microsoft’s Azure SQL server. Each method poses unique audit considerations that need to be investigated. </p><p>A third step is performing a risk analysis to determine the risks associated with each of the services and their pervasiveness. Auditors should be aware of how moving these services out of traditional data centers impacts connectivity, communication requirements, separation of duties, latency, response time, administrative security, and compliance. Whenever possible, auditors should partner with IT to monitor key performance indicators based on risk to assist with ongoing control monitoring and operations. </p><h2>A Plan for the Cloud </h2><p>Once internal auditors have completed these three steps, they are ready to build their audit plan. In doing so, auditors need to address several aspects of the Azure platform.<br></p><p><strong>Azure Security Center</strong> Internal audit, IT, or management can quickly identify the organization’s Secure Score — which measures its security posture — through the Azure Security Center. The center provides security recommendations based on the organization’s current configurations and monitors system updates, vulnerabilities, network security, and other areas. </p><p>In addition, Security Center prioritizes recommendations, so auditors know where to start with their assessment. The dashboard groups the organization’s security hygiene into categories such as compute and apps, networking, data and storage, identity and access, and security solutions. Auditors should note that the dashboard and associated recommendations are alerts rather than enforced security configurations. 
<br></p><p><strong>Networking and Virtual Machines</strong> Cloud environments can be complex, with virtual networking, firewalls, and machines configured from a browser or with Microsoft’s Azure PowerShell module. Azure administration can be performed via a web browser, and workloads can be administered remotely using many other secure and insecure methods. </p><p>Internal audit can help the organization take a strategic approach to risk by validating that remote access to the environment is restricted appropriately and Azure access is secured with multifactor authentication. Simple passwords can be stolen, compromised, or “brute-force” attacked. Once one machine is compromised, it can be used to compromise other Azure resources or attack other networked devices. Multifactor authentication goes beyond passwords by requiring more than one method of authorization for access. In addition to multifactor authentication, all administrative workload access from the internet should be configured for just-in-time access, which opens administrative ports only on approved request and for a limited time. <br></p><p><strong>Azure Active Directory</strong> With more than one billion user identities hosted, Azure Active Directory is one of the most pervasive organizational risks for businesses using the platform. Services such as SQL databases, data warehouses, and virtual machines all leverage Azure Active Directory, as do Office applications. </p><p>Depending on how the organization has implemented Azure Active Directory, it can pose significant administrative access risks. Traditionally, when reviewing administrators for on-premises Active Directory, auditors will evaluate enterprise administrators and domain administrators. However, with Azure Active Directory, there are potentially global administrative accounts. These global accounts could create an account with elevated permissions on the organization’s domain. 
Moreover, they are unlikely to appear in any traditional audit script outputs. </p><p>On top of this, in Azure, administrators can create custom groups that have less visibility in the environment. Auditors need to fully understand the risk and compliance implications of these custom groups.<br></p><p><strong>Database Services</strong> Depending on how the organization stores its databases within Azure, it may have access to database security features such as logging, log retention, data encryption, and restricted elevated access. Auditors should understand which features are in place and how they are monitored.</p><h2>Security Assurance</h2><p>In addition to the security concerns in the previous section, internal auditors should review areas such as data loss prevention, data classification, encryption, and Azure certifications and compliance. Compliance may include the International Organization for Standardization’s ISO 27001, System and Organization Controls (SOC) reports, the U.S. Health Insurance Portability and Accountability Act, and the Payment Card Industry Data Security Standard. </p><p>Because these services are complex, internal audit could perform smaller audits around specific areas one at a time. For example, auditors could separate networking, Azure Active Directory, and Security Center into their own audits and prioritize them based on risk. Auditors can leverage free Azure benchmarks issued by the Center for Internet Security and Azure’s SOC reports when building out audit plans. </p><p>Auditing the Azure environment can be challenging because of the platform’s constantly changing and complex design. Internal audit may need to hire outside expertise to evaluate the design and operation of controls in these environments. But by overcoming these challenges and performing audits, internal audit can provide assurance that cloud operations are secure. <br></p>Kari Zahar
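<p>The inventory-building step the article describes can start from a raw export of resource data. A minimal sketch, assuming resources have been exported as JSON (for example, via the Azure CLI's <code>az resource list</code> command); the sample records below are invented:</p>

```python
import json
from collections import Counter

# Sample records shaped like an `az resource list` JSON export (invented data).
export = json.loads("""
[
  {"name": "vm-web-01", "type": "Microsoft.Compute/virtualMachines", "location": "eastus"},
  {"name": "sqldb-erp", "type": "Microsoft.Sql/servers/databases", "location": "eastus"},
  {"name": "vm-app-02", "type": "Microsoft.Compute/virtualMachines", "location": "westus"}
]
""")

def summarize_inventory(resources):
    """Count resources by service type -- a starting point for an audit inventory."""
    return Counter(r["type"] for r in resources)

summary = summarize_inventory(export)
print(summary["Microsoft.Compute/virtualMachines"])  # prints 2
```

<p>Grouping the export by resource type gives auditors a quick view of which Azure services are actually in use, which can then be reconciled against the services IT believes it has deployed.</p>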
Bots of Assurance<p>As important as it is, internal auditing involves a lot of repetitive work to provide assurance and achieve the department’s objectives. There is supporting evidence to request, data to gather, workpaper templates to create, and controls to test. But imagine if these basic tasks could be automated.<br></p><p>That is the promise of robotic process automation (RPA). Many internal audit functions are looking to RPA to multiply the capacity of their teams. These departments are following the lead of the growing number of organizations that are using robots, or bots, to automate business processes — particularly repetitive and often time-consuming process steps. </p><p>RPA can help streamline processes by making them more efficient and more robust against errors. That may be one reason that 40% of internal auditors reported that their organizations currently use RPA in business operations in a poll taken during The IIA’s 2019 International Conference in Anaheim, Calif.</p><p>Audit functions can catch up with their organizations’ use of RPA by deploying bots as a digital workforce to enhance their assurance capabilities. Moreover, RPA can free internal audit’s experts from the drudgery of repetitive activities to focus on critical thinking tasks and managing exceptions.</p><h2>What’s in a Bot?</h2><p>RPA involves software that autonomously executes a predefined chain of steps in digital systems, under human management. Common capabilities of bots include filling in forms, making calculations, reading and writing to databases, gathering data from web browsers, and connecting to application programming interfaces. They also can apply different logical rules such as “if, then, else” or “do while.” And those bots don’t sleep, tire, forget, complain, or quit.</p><p>With RPA, bots improve over time as people specify the underlying rules, but they cannot learn on their own. 
Conversely, cognitive automation learns and improves its own algorithms over time based on the given data and experience. </p><p>RPA solutions can deliver benefits such as:</p><ul><li>Increased efficiency, especially in situations that once involved repetitive and recurring manual work processes.</li><li>Increased effectiveness and robustness of processes that previously were prone to high error rates.</li></ul><p><br>Organizations are most likely to realize these benefits when they use structured data, which provides the predefined instructions bots need to handle work scenarios. </p><h2>Five Types of Uses</h2><p>Internal audit departments may be slower than their organizations, as a whole, to deploy RPA, but there are many ways they can put the technology to use. Although these applications may differ, depending on each department’s circumstances and capabilities, they can be classified into five categories.<br></p><p><strong>Support</strong> This category of applications enables internal auditors to perform or document an audit procedure such as creating workpaper templates. One example of a support application is a bot that downloads attachments. Internal auditors spend a lot of time pulling supporting evidence from electronic sources or waiting for audit clients to do so manually. In a typical enterprise resource planning (ERP) system, auditors may need to take as many as 10 steps to access an electronic attachment. 
These steps include opening the ERP browser, typing the transaction code, entering the document number and company code, adding the fiscal year, going to the attachments, choosing the correct file path, and entering a file name that complies with a predefined structure.</p><table cellspacing="0" width="100%" class="ms-rteTable-default"><tbody><tr><td class="ms-rteTable-default" style="width:100%;"><strong>Bot Programming</strong><p><br>When setting up a bot, auditors not only must list the different processing steps, but also state how to get from one step to the next. For example, to access an electronic attachment, from the step where the ERP browser is opened, auditors instruct a bot to type in the transaction code, followed by pressing “enter.” The bot follows the same process as a human user to enter the document number, company code, and fiscal year. Each of the first two entries is followed by pressing “tab.” The third entry is followed by pressing “execute.” </p><p>From there, the bot clicks the attachment button, followed by clicking “Attachment List,” and double-clicking on the attachment file. Auditors specify a predefined valid file path for the bot to follow. Then, they instruct the bot to enter the file name and click “save.” Putting these steps into a loop sequence directs the bot to go through the activities over and over for each document specified in the source listing.<br></p></td></tr></tbody></table><p>A downloading attachment bot supports internal auditors by pulling electronic attachments automatically and more quickly — in less than 10 seconds per transaction. This can accelerate audit procedures related to vendor invoices, for example. 
In this context, the bot can support auditors in reviewing potential duplicate payments not yet returned, invoice approvals that are not workflow based, and invoice verification as part of a purchase-to-pay process audit. “Bot Programming,” above, describes how auditors can use rules to set up a bot. <br></p><p><strong>Validation</strong> Bots in this category validate the accuracy or completeness of transactions under review. An example is a distance bot that validates mileage allowances for a full population of business trips, rather than by sampling. Calculating the distance between the starting point and destination manually using a geographical map service would take up to five steps. These steps include opening the web browser, typing in the starting point and destination address, and copying the distance displayed before continuing with the next distance. </p><p>The distance bot supports internal auditors by pulling as-is distances from the system automatically. This bot is good for performing travel expense audits, particularly in organizations with high expenses from mileage allowances.<br></p><p><strong>Control Testing</strong> This category of bots performs all or selected testing steps or attributes for internal controls, especially for IT application controls and IT general controls. Organizations often have a clear picture of the “to be” status of these controls. By translating this clear picture into rule-based procedures, auditors can program bots to test both the design and operating effectiveness of such controls. Bots can quickly identify inappropriate settings organizationwide. For example, within a purchase-to-pay process audit, bots can test IT application controls such as the duplicate-invoice check and the three-way match, and prepare standardized audit evidence. 
<br></p><p><strong>Data Generation</strong> For internal audits requiring access to extended data sets, bots in the data generation category provide access to new data sources such as electronic attachments and temporary data sets. Data extraction bots support upgraded analytics and can reduce false positives by considering new data sources. This capability can reduce follow-up activities for false positives while increasing efficiency. For example, these bots can extract data from PDF text in less than one second and from image files in less than three seconds.<br></p><p><strong>Reporting</strong> Auditors can use bots in this category to create reports or operate follow-up procedures. If internal audit does not use specialty audit software — or plan to introduce it — bots can automate repetitive activities such as report creation based on an audit program and sending follow-up reminders and inquiries.</p><h2>Plan for the Pitfalls</h2><p>The previous examples demonstrate how bots can enable the internal audit function to accomplish results more quickly and without human errors. While the improvements may outweigh the implementation costs, internal audit should be aware of risks across three dimensions: operations, reporting, and compliance. Internal auditors should manage these risks from the beginning and throughout the implementation of RPA. They should start by addressing some common pitfalls.<br></p><p><strong>Disregarding Other Automation Possibilities</strong> Do not automate audit procedures with RPA when other affordable software or more advantageous automation possibilities are available. For example, specialty audit software may be used for reporting and follow-up activities.<br></p><p><strong>Outsourcing Full Bot Programming</strong> RPA bots can be improved over time as auditors specify rule-based procedures to reduce the number of false positives and false negatives. 
Outsourcing this programming can make internal audit dependent on a third party to establish the logic followed by each bot. Instead, internal audit should obtain advice from external parties, if needed, while keeping most bot programming in-house.<br></p><p><strong>Overlooking the RPA Tool’s Terms of Use</strong> Software license terms may prevent internal audit from taking an existing RPA tool used in selected subsidiaries and using it for organizationwide audits. Typically, the license is for the licensee’s (subsidiary’s) direct business purposes — not for all affiliates across the organization. Examine the terms of use carefully.</p><h2>Starting With Bots</h2><p>Knowledge of RPA’s benefits and risks can prepare internal audit to explore the technology’s potential. These tips can help internal audit get started.<br></p><p><strong>Identify Use Cases</strong> Auditors should begin by identifying their department’s recurring activities. Where is time lost because of repetitive activities? Where does the department want to provide higher assurance by increasing sample sizes or extending substantive audit procedures? This identification exercise should be separate from the discussion about how to automate internal audit activities. It also may comprise both full and partial automation.</p><p>Internal audit can use workshops to identify automation opportunities. During these sessions, auditors can use a matrix to prioritize cases based on the potential benefits of automation and the feasibility of doing so. Mapping automation opportunities by end-to-end processes usually doesn’t pay off. Instead, internal audit should map subprocesses or process variants because these are at an actionable level. However, not all subprocesses or variants are an opportunity for automation. </p><p>In addition, internal audit should not create silos between different automation possibilities. When assessing use cases, internal audit should consider RPA as one alternative among many. 
<br></p><p><strong>Assess the Internal RPA Landscape</strong> Because internal audit is not usually the early adopter for RPA within organizations, the department should identify tools and resources already in use. To realize RPA’s full potential, auditors should assess the various tools on the market. </p><p>Instead of going it alone, internal audit can partner with the organization’s existing RPA users to develop a pilot to demonstrate how RPA can be used in audits. Choosing a use case that allows internal audit to quantify its benefits can support internal discussions and decisions about using RPA.<br></p><p><strong>Motivate the Internal Audit Team</strong> The pilot’s results and the possibilities of learning from RPA are two main drivers for motivating the internal audit team to apply the technology. Online tutorials, community forums, and free trial versions make it easy to demonstrate these learning opportunities. These resources can provide online training and enable internal auditors to become familiar with RPA tools. Trial versions, in particular, can show auditors how easy it is to use a tool, which can motivate them to adopt it.</p><h2>RPA in Alignment</h2><p>In addition to these three tips for getting started, internal audit should create an implementation plan and align RPA with its overall digital labor strategy. This plan should balance an understanding of the technology’s risks with the benefits of target-oriented approaches to implementing it. </p><p>To realize RPA’s benefits in the long run, internal audit should deploy it from a governance perspective. The board’s support can especially enable the chief audit executive to develop a clear plan for automating different internal audit processes. Because other business functions may be using RPA, internal audit needs to align its RPA implementation with these existing activities to generate synergies and avoid duplication of efforts. 
That understanding can position internal audit to put RPA to use and also drive effective reviews of the organization’s RPA program. <br></p>Justin Pawlowski
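<p>The rule-based logic behind the validation bots described above is straightforward. A minimal sketch of the distance bot's comparison step, assuming as-is distances have already been pulled from a map service (all names and figures below are invented for illustration):</p>

```python
# Hypothetical mileage check: compare claimed miles against reference
# distances a bot pulled from a map service (figures invented).
reference_miles = {            # (origin, destination) -> as-is distance
    ("Chicago", "Milwaukee"): 92,
    ("Dallas", "Austin"): 195,
}

def flag_claims(claims, tolerance=0.10):
    """Return claims whose mileage exceeds the reference by more than the tolerance."""
    exceptions = []
    for origin, dest, claimed in claims:
        ref = reference_miles.get((origin, dest))
        if ref is not None and claimed > ref * (1 + tolerance):
            exceptions.append((origin, dest, claimed, ref))
    return exceptions

claims = [("Chicago", "Milwaukee", 95), ("Dallas", "Austin", 260)]
print(flag_claims(claims))  # prints [('Dallas', 'Austin', 260, 195)]
```

<p>Because the comparison is a simple rule applied uniformly, a bot can run it across the full population of trips rather than a sample, which is the efficiency gain the article describes.</p>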
Governments Under Cyber Siege<p>There's trouble down on the Bayou. Last Friday, city officials in New Orleans acted quickly to try to stave off a cyberattack on city government computers. A public address announcement at City Hall ordered city employees to shut down their computers that morning after phishing emails seeking passwords were discovered, <a href="" target="_blank">the Associated Press reports</a>. So far, the city has not received a ransom demand, but state and federal law enforcement officials are investigating.</p><p>Just last month, the governor of Louisiana declared a state of emergency after a ransomware attack on servers at the state's Office of Motor Vehicles. The state responded by shutting down server traffic to neutralize the attack, <a href="" target="_blank"><em>Business Insider</em> reports</a>. "These protective actions likely saved the state from data loss and weeks of server outages," officials said in a press release.</p><p>The attacks in New Orleans and in the state capital in Baton Rouge are reminders that municipal and state governments are prime targets of cyber criminals. A few days before the New Orleans attack, a ransomware attack compromised city government computers in Pensacola, Fla., impacting government services such as online payments. In the past two years, Atlanta and Baltimore suffered similar attacks that severely harmed city government systems and impeded public services.</p><h2>The Ransomware Threat</h2><p>Ransomware attacks encrypt data on compromised systems and then demand payment to release it. Phishing emails and malware typically are weapons for spreading ransomware. They are among the most common threat types detected by organizations, according to the <a href="" target="_blank">2019 Cybersecurity Report Card</a> from threat-investigation technology company DomainTools. </p><p>Ransomware has targeted companies, governments, hospitals, and other organizations. 
In some cases, organizations have agreed to pay the ransom, although law enforcement officials and security experts advise against doing so. Forrester Research forecasts that ransomware incidents will increase in 2020, as attackers seek to cash in by targeting consumer devices and "demanding ransom from the [device] manufacturer."</p><h2>Weaponizing Data</h2><p>Attackers are getting more sophisticated, too. Forrester predicts attackers will "weaponize" data and artificial intelligence in the coming year. With companies compiling ever-more data to gain insights, attackers have greater incentive to go after that data, Forrester notes in its <a href="" target="_blank">Predictions 2020 report</a>. Moreover, technologies such as the Internet of Things come with fewer controls, expanding access for attacks. </p><p>"Simply put, there are more attackers with more sophisticated tools aimed at a larger attack surface," Forrester says. "And those attackers want enterprises to pay."</p><p>That financial risk should get the attention of senior executives and boards, as well. That's the focus of a new Committee of Sponsoring Organizations of the Treadway Commission report, <a href="" target="_blank">Managing Cyber Risk in a Digital Age</a> (PDF). The Deloitte-authored report details how organizations can apply the <em>Enterprise Risk Management–Integrating With Strategy and Performance</em> framework to cyber risk.</p><h2>Quick Thinking</h2><p>In New Orleans, city officials decided to shut down systems soon after the city discovered the attack. Officials said the city backs up financial records on a cloud-based system and the city's emergency services were using telephones and radios to operate while systems were down. "We will go back to marker boards. We will go back to paper," Collin Arnold, the city's homeland security director, told the Associated Press. </p>Tim McCollum
Getting to Know Common AI Terms<p>Artificial intelligence (AI) systems development and operation involve terms and techniques that may be new to some internal auditors, or that carry meanings or applications different from their normal audit usage. Each of the terms below has a long history in the development and execution of AI processes. As such, they can promote a common understanding of AI terms that can be applied to <a href="/2019/Pages/Framing-AI-Audits.aspx">auditing these systems</a>. </p><h2>Locked Datasets </h2><p>Datasets are difficult to create because independent judges should review their features and uses, and then validate them for correctness. These judgments drive the system in the training phase of system development, and if the data is not validated, the system may learn based on errors. </p><p>In machine learning systems, datasets are normally "locked," meaning data is not changed to fit the algorithm. Instead, the algorithm is changed based on the system predictions derived from the data. As a safety precaution, data scientists usually are barred from examining the datasets to determine the reasons for such changes. This prevents them from biasing the algorithm given their understanding of the data relationships. </p><p>Consider a system that reviews the ZIP codes of business accounts. The system may fail to recognize ZIP codes beginning with "0," such as 01001 for Agawam, Mass., or postal codes that contain alphanumeric characters, such as V6C1H2 for Vancouver, B.C. Locking the dataset prevents the data scientists from inspecting the errors directly. Instead, they would have to investigate why the system is interpreting some accounts differently than others and whether the algorithm contains a defect. Barring data scientists in this way is another form of locking the dataset. 
</p><h2>Third-party Judges</h2><p>Because historical datasets are not always verified before AI system use, the internal auditor needs to ensure an appropriate validation process is in place to confirm data integrity. Use of automated systems to judge data integrity may mask AI issues that adversely affect the quality of the output.  </p><p>Therefore, a customary practice in the industry has been to use independent, third-party judges for validation purposes. The judges, however, must have sufficient expertise in the data domain of the system to render valid test results. If they use algorithms as part of their validation process, then those, too, must be validated independently. Usually any inconsistency in the test results during judging is reviewed and reconciled as part of the process. A well-designed validation process will help avoid user acceptance of system outcomes that are inherently flawed.</p><h2>Overfitting and Trimming</h2><p>The data scientist selects datasets to train the AI system that are intended to reflect the actual data domain. Sometimes those datasets reflect ambiguous conditions that should be trimmed or deleted to enhance the probability of error-free results. </p><p>For example, the first name "Pat" can apply to either gender. To avoid system confusion, the data scientist would likely trim it from the training dataset. However, the first name "Tracy," although historically applicable to both male and female, is more commonly a female name. Trimming "Tracy" from the training datasets might bias system outcomes toward males without eliminating much ambiguity when the production data is processed.  </p><p>The problem with trimming is that it can cause data overfit in an algorithm and biased system results during the production phase. Data overfit occurs when the training dataset is trimmed to derive a particular algorithm, rather than the algorithm adjusting itself to a training dataset that represents the actual data domain. 
The resulting algorithm is not based on a representative data domain. Internal auditors should examine process controls over the training dataset to safeguard against data overfit caused by excessive data trimming designed to achieve a desired algorithmic outcome. </p><h2>Outliers</h2><p>It is important for the data scientist to examine data outliers. For example, a machine learning system may be 90% accurate in correcting misspelled words, but it also may flag numbers as errors and correct them. Those corrections can cause havoc with critical documents, such as financial reports, if the data scientist failed to review system predictions for such outliers.</p><h2>Metrics</h2><p>Performance metrics should be used to assess AI system accuracy (How close are the predictions to the true values?) and precision (How consistent are the outcomes between system iterations?). Such metrics are a best practice, because they indicate performance issues in AI system operations, including:</p><ul><li>False positives: identifying an acceptable item as incorrect.</li><li>False negatives: identifying an unacceptable item as correct.</li><li>Missed items: not addressing all items in the population. </li></ul><p> <br> </p><p>Accuracy and precision show how well a system finds issues, but they do not tell the entire story; measurements of false positives, false negatives, and missed items are needed to gauge a system's full performance. A formal review process that covers these issues improves system performance and helps decrease audit risk. </p><h2>User Interpretation</h2><p>Internal auditors must be careful to safeguard the integrity of the AI audit from user misinterpretations of system outcomes. 
The system may generate supportable conclusions that users simply misunderstand or ignore. </p><p>For instance, if a system were to predict that jungle fires are related to climate change, that prediction does not confirm that climate change caused the fires. Earlier this year, news organizations reported that climate change caused fires in the Amazon jungle. However, <a href="" target="_blank">NASA had asserted</a> that fire activity was in line with previous years, with no increase over time and no relation to global warming. While there might be a correlation between the two, causation should not be inferred from the system prediction. </p><p>Internal auditors need to take the human factor into account when assessing system quality. System users may simply refuse to believe or act upon system predictions because of bias, personal preference, or preconceived notions. </p>Dennis Applegate
Framing AI Audits<p>Artificial intelligence (AI) is transforming business operations in myriad ways, from helping companies set product prices to extending credit based on customer behavior. Although the technology is still in its nascent stage, organizations are using it to rank money-laundering schemes by degree of risk based on the nature of the transaction, according to a July EY analytics article. Others are leveraging AI to predict employee expense abuse based on the expense type and vendors involved. Small wonder that McKinsey & Company estimates that the technology could add $13 trillion per year in economic output worldwide by 2030. </p><p>If AI is not on internal audit's risk assessment radar now, it will be soon. As AI transitions from experimental to operational, organizations will increasingly use it to predict outcomes supporting management decision-making. Internal audit departments will need to provide management assurance that the predicted outcomes are reasonable by assessing AI risks and testing system controls.</p><h2>Evolving Technology</h2><p>AI uses two types of technologies for predictive analytics — static systems and machine learning. Static systems are relatively straightforward to audit, because with each system iteration, the predicted outcome will be consistent based on the datasets processed and the algorithm involved. If an algorithm is designed to add a column of numbers, it remains the same regardless of the number of rows in the column. Internal auditors normally test static systems by comparing the expected result to the actual result. </p><p>By contrast, there is no such thing as an expected result in machine learning systems. Results are based on probability rather than absolute correctness. For example, the results of a Google search that float to the top of the list are those that are most often selected in prior searches, reflecting the most-clicked links but not necessarily the preferred choice. 
Because the prediction is based on millions of previous searches, the probability is high — though not necessarily certain — that one of those top links is an acceptable choice. </p><p>Unlike static systems, the Google algorithm itself may evolve, resulting in potentially different outcomes for the same question when asked at different intervals. In machine learning, the system "learns" what the best prediction should be, and that prediction will be used in the next system iteration to establish a new set of outcome probabilities. The very unpredictability of the system output increases audit risk absent effective controls over the validity of the prediction. For that reason, internal auditors should consider a range of issues, risks, controls, and tests when providing assurance for an AI business system that uses machine learning for its predictions.</p><h2>AI System Development</h2><p><img src="/2019/PublishingImages/Applegate_Three-Phases.jpg" class="ms-rtePosition-2" alt="" style="margin:5px;width:500px;height:251px;" />The proficiency and due professional care standards of the International Professional Practices Framework require internal auditors to understand AI concepts and terms, as well as the phases of development, when planning an AI audit (see "Three Phases of Development," right). Because data fuels these systems, auditors must understand AI approaches to data analysis, including their effect on the system algorithm and its precision in generating outcome probabilities.</p><p> <em>Features</em> define the kinds of data that would generate the best outcome for a system. If the system objective is to flag employee expense reports for review, the features selected would be those that help predict the highest payment risk. These could include the nature of the business expense, vendors and dollar amounts involved, day and time reported, employee position, prior transactions, management authorization, and budget impact. 
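</p><p>One way to picture such a feature set is as a simple record type; every field name and value below is illustrative, not drawn from any real system:</p>

```python
from dataclasses import dataclass

# Hypothetical feature set for flagging expense reports, mirroring the
# features listed in the paragraph above.
@dataclass
class ExpenseFeatures:
    expense_type: str        # nature of the business expense
    vendor: str              # vendor involved
    amount: float            # dollar amount
    day_of_week: str         # day reported
    hour_reported: int       # time reported (24-hour clock)
    employee_level: int      # employee position in the hierarchy
    prior_transactions: int  # count of prior transactions
    authorized: bool         # management authorization on file
    over_budget: bool        # budget impact

report = ExpenseFeatures(
    expense_type="entertainment", vendor="NY Dinner Theater",
    amount=480.00, day_of_week="Sat", hour_reported=23,
    employee_level=2, prior_transactions=14,
    authorized=False, over_budget=True,
)
```

<p>In a real system the data scientist would let the training process determine which of these fields carry predictive weight.</p><p>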
A data scientist with expertise in this business problem would set the confidence level and predictive values and then let the system learn which features best determine the expense reports to flag.</p><p> <em>Labels</em> represent data points that a system would use to name a past outcome. For instance, based on historical data, one of the labels for entertainment expenses might be "New York dinner theater on Saturday night." The system then would know such expenses were incurred for this purpose on that night in the past and would use this data point to predict likely expense reports that might require close review before payment. </p><p> <em>Feature engineering</em> narrows the features selected to a critical few. Rather than provide a correct solution to a given problem, such as which business expense reports contain errors or fraud, machine learning calculates the probability that a given outcome is correct. In this case, the system would calculate which expense reports carry the highest probability of errors or fraud based on the features selected. The system then would rank the outcomes in descending order of probability. </p><p> <em>Machine learning</em> involves merging selected features and outcome labels from diverse datasets to train a system to generate a model that will predict a relationship between a set of features and a given label. The resulting algorithm and model are then refined in the testing phase using additional datasets. This phase may consider hundreds of features at once to discover which features yield the highest outcome probability based on the assigned labels. </p><p>Feature engineering then reduces the number of system features to enhance the precision of the outcome probabilities. Based on the testing phase, for example, the nature of the expense, the dollar amounts involved, and the level of the employee's position may best indicate high-risk business expense reports requiring close review. 
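</p><p>The ranking behavior described above, ordering outcomes by descending probability, can be sketched in a few lines; the report IDs, probabilities, and review threshold are hypothetical:</p>

```python
# Hypothetical predicted error/fraud probabilities per expense report.
predictions = {
    "RPT-101": 0.12,
    "RPT-102": 0.87,
    "RPT-103": 0.55,
    "RPT-104": 0.91,
}

# Rank outcomes in descending order of probability.
ranked = sorted(predictions.items(), key=lambda kv: kv[1], reverse=True)

# Flag only reports above a review threshold chosen by the data scientist.
REVIEW_THRESHOLD = 0.50  # illustrative value
flagged = [rpt for rpt, p in ranked if p >= REVIEW_THRESHOLD]
print(flagged)  # ['RPT-104', 'RPT-102', 'RPT-103']
```

<p>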
During the production phase, as the system calculates the risk of errors and fraud in actual expense reports, it may modify the algorithm based on actual output probabilities to improve the accuracy of future predictions. Doing so would create continuous system learning not seen in static systems. </p><p>In AI system development, it is important for organizations to establish an effective control environment, including accountability for compliance with corporate policies. This environment also should include safeguards over user access to proprietary or sensitive data, and performance metrics to measure the quality of the system output and user acceptance of system results. </p><h2>A Risk/Control Audit Framework</h2><table class="ms-rteTable-default" width="100%" cellspacing="0"><tbody><tr><td class="ms-rteTable-default" style="width:100%;"><p> <strong>Training Phase</strong></p><p>Considerations for adjusting the assessed level of AI audit risk include:</p><ul><li>If system reviews are in place to evaluate training data modifications, deletions, or trimming, this condition should help prevent overfitting the training dataset to generate a desired result, reducing audit risk.<br></li><li>New AI systems may use datasets of existing systems for reasons of time and cost. Such datasets, however, may contain bias and not include the kinds of data needed to generate the best system outcomes, increasing audit risk.<br> </li><li>AI datasets that consist of numerous data records should contain some errors. In fact, an error-free dataset would itself indicate a flawed dataset, because the occurrence of errors should match the natural error rate. For example, if 5% of employee expense reports are filled in incorrectly and are missing key data, then the training dataset should contain a similar frequency. If not, then audit risk increases. 
​</li></ul></td></tr></tbody></table><p>Nine procedures frame the audit of an AI system during the training, testing, and production phases of development. The framework provides a point of departure for AI audit planning and execution. Assessed risk drives the controls expected and subsequent internal auditor testing. </p><p>Internal auditors may need to adjust the procedures based on their preliminary survey of the AI system under audit, including a documented understanding of the system development process and an analysis of the relevant system risks and controls. Moreover, as auditors complete and document more of these audits, it may be necessary to adjust the framework.</p><p>Normally, internal auditors adjust their assessment of risk and their resulting audit project plans based on observations made in the preliminary audit survey. The boxes, at right, depict conditions that may alter assessed risk as well as modify expected AI system controls and subsequent audit testing during specific phases of development. </p><p> <strong>Data Bias (Training Phase)</strong> Use of datasets that are not representative of the true population may create bias in the system predictions. Bias risk also can result from failing to provide appropriate examples for the system application.</p><p>A control for data bias is to establish a system review and approval process to ensure there are verifiable datasets and system probabilities that represent the actual data conditions expected over the life of the system. Audit tests of control include ensuring that: </p><p></p><ul><li>Qualified data scientists have judged the datasets.</li><li>The confidence level and predictive values are reasonable given the data domain.</li><li>Overfitting has not biased system predictions. 
</li></ul><p> <br> <strong>Data Recycling (Training)</strong> This risk can happen when developers recycle the wrong datasets for a new application, or when reusing those datasets to create or update a new application impairs the performance or maintenance of existing systems.</p><p>One control for data recycling is independently examining repurposed data for compliance with contractual or other requirements. In addition, organizations can determine whether adjustments in the repurposed data have been made without impacting other applications. </p><p>Examples of control tests are: </p><p></p><ul><li>Evaluating the nature, timing, and extent of the independent examinations.</li><li>Testing the records of other applications for performance or maintenance issues that stem from the mutually shared datasets.</li></ul><p> <br> <strong>Data Origin (Training)</strong> Unauthorized or inappropriately sourced datasets can increase the risk of irrelevant, inaccurate, or incomplete system predictions during the production phase.</p> <p></p><p>To control this risk, the organization should inspect datasets for origin and relevance, as well as compliance with contractual agreements, company protocols, or usage restrictions. The results of these inspections should be documented. 
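</p><p>A dataset-origin inspection of this kind might be sketched as follows; the dataset names, source identifiers, and approval list are hypothetical:</p>

```python
# Hypothetical provenance check: every dataset must come from an approved
# source and carry a documented usage agreement.
APPROVED_SOURCES = {"internal-erp", "licensed-vendor-a"}  # illustrative list

datasets = [
    {"name": "expenses_2019", "source": "internal-erp", "usage_agreement": True},
    {"name": "vendor_feed",   "source": "scraped-web",  "usage_agreement": False},
]

def inspect(ds):
    """Return a documented inspection result for one dataset."""
    issues = []
    if ds["source"] not in APPROVED_SOURCES:
        issues.append("unapproved source")
    if not ds["usage_agreement"]:
        issues.append("no usage agreement on file")
    return {"dataset": ds["name"], "issues": issues}

# The retained report is the documentation auditors would later review.
inspection_report = [inspect(ds) for ds in datasets]
for entry in inspection_report:
    print(entry)
```

<p>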
</p><p>To test controls, auditors should:</p><p></p><ul><li>Review data source agreements to ensure use of datasets is consistent with contract terms and company policy.</li><li>Examine the quality of the inspection reports, focusing on the propriety of data trimmed from the datasets.<br><br></li></ul><table class="ms-rteTable-default" width="100%" cellspacing="0"><tbody><tr><td class="ms-rteTable-default" style="width:100%;"><p> <strong>​</strong><strong>Testing Phase</strong></p><p>Considerations for adjusting the assessed level of AI audit risk include:</p><ul><li>If independent, third-party judges tested the system data, but no process is in place to reconcile differences in test results between judges, then audit risk increases. </li><li>Because system predictions are based on probability, perfect test results are not possible. If third-party judges evaluating the test results find no issues, then data overfit may have occurred, increasing audit risk. </li><li>If the system has not been validated to prevent user misinterpretations caused by incorrect data relationships, such as flagging business expense reports based on employee gender, then audit risk increases. Alternatively, if user interpretations based on system predictions have not been validated to ensure system data supports the interpretation, then audit risk also increases. </li><li>If data scientists fail to use representative datasets with examples involving critical scenarios to train the system, then audit risk increases. </li><li>If the datasets are not locked during testing, then the data scientist may adjust the algorithm to inadvertently process the data in a biased manner, increasing audit risk.</li><li>If the datasets are locked during testing, but the data scientist fails to review the actual system prediction for integrity, then audit risk increases. 
</li></ul></td></tr></tbody></table><p> <strong>Data Conclusion (Testing Phase)</strong> Inappropriately tested data relationships could result in improper system conclusions that are based on incorrect assumptions about the data. These conclusions could create bias in management decisions.</p><p>The control for this risk is to ensure each feature of the system contains data whose use has been approved for that purpose. Developers should assess the results of such data for misinterpretation and correct them, as appropriate. </p><p>Testing this control involves reviewing user interpretations and subsequent management decisions based on system predictions. By performing this test, organizations can ensure that the data supports the conclusions reached and decisions made by management.</p><p> <strong>Data Overfit (Testing)</strong> With this issue, the risk is that datasets may not reflect the actual data domain. Specifically, data outliers may have been trimmed during system testing, leading to a condition that overfits the algorithm to a biased dataset. That could cause the system to respond poorly during the production phase.</p><p>Organizations can control for this risk by validating datasets in system testing to ensure that the samples used represent all possible scenarios and that any dataset modifications were appropriate rather than made to force a desired system outcome.</p><p>To test this control, internal auditors should review all outlier, rejected, or trimmed data to ensure that:</p><p></p><ul><li>Relevant data has not been trimmed from datasets.</li><li>Datasets remain locked throughout testing.</li><li>The algorithm has processed the data in an unbiased way.</li></ul><p> <br> <strong>Data Validation (Testing) </strong>Failure to validate datasets for integrity through automated systems or independent, third-party judges can lead to unsupported management decisions or regulatory violations. 
An example would be allowing the personal data of European Union (EU) citizens to be accessed outside of the EU in violation of Europe's General Data Protection Regulation.</p><p>Organizations can control for this risk by implementing a validation process that compares datasets to the underlying source data. If the organization uses automated systems, it should ensure the process reveals all underlying issues affecting the quality of the system output. If the organization uses independent, third-party judges, it should ensure the process allows judges the access they need to the raw data inputs and outputs.</p><p>To test these controls, internal auditors should:</p><p></p><ul><li>Assess the process and conditions under which the validation took place, assuring that all high-risk datasets used in the system were validated.</li><li>Confirm randomly selected datasets with underlying source data.</li><li>When datasets are based on current system data, validate that such data is correct to avert a flawed assessment of actual system data.</li></ul><p> <br> <strong>Data Processing (Production Phase)</strong> Failing to validate internal system processing can cause inconsistent, incomplete, or incorrect reporting output and user decisions. However, periodically reviewing and validating input and output data at critical points in the data pipeline can mitigate this risk and ensure processing is in accordance with the system design. </p><p>Auditors can test this control by:</p><ul><li>Reconstructing selected data output from the same data input to validate system outcomes.</li><li>Performing the system operation again.</li><li>Using the results to reassess system risk.</li></ul><p> <br> <strong>Data Performance (Production)</strong> If there is a lack of performance metrics to assess the quality of system output, the organization may fail to detect issues that diminish user acceptance of system results. 
For example, an AI system could fail to address government tax or environmental regulations over business activity. </p><p>Controlling data performance risk requires organizations to establish metrics to evaluate system performance in both the training and production phases. Such metrics should include the nature and extent of false positives, false negatives, and missed items. In addition, developers should implement a feedback loop for users to report system errors directly, among other performance measures. </p><table class="ms-rteTable-default" width="100%" cellspacing="0"><tbody><tr><td class="ms-rteTable-default" style="width:100%;"><p>​<strong>Production Phase</strong></p><p>Considerations for adjusting the assessed level of AI audit risk include:</p><ul><li>Systems that leverage the datasets of existing systems already audited should lower overall audit risk and not require as much audit testing as new systems using datasets not previously audited.<br></li><li>Systems that process inputs and outputs at all stages of the data pipeline should facilitate validation of system-supported user decisions and lower overall audit risk. However, if data inputs and outputs are processed in a black-box environment, confirming internal system operations may not be possible. 
That would increase the audit risk of drawing the wrong conclusion about the reasonableness of the system output.<br> </li><li>If performance metrics are used to measure the quality of the data output, user acceptance of system results, and system compliance with government regulations, then audit risk decreases.<br></li><li>If performance metrics monitor both system training and production data, then audit risk decreases.<br></li><li>If performance metrics measure system accuracy but not precision, overlooking a possible system performance issue, then audit risk increases.<br></li><li>Well-designed systems prevent unauthorized access to system data based on company protocols and regulatory requirements and routinely monitor access for security breaches, decreasing audit risk. </li></ul></td></tr></tbody></table><p>To test these controls, internal auditors should:</p><p></p><ul><li>Examine reported variances from established performance measures. </li><li>Test a representative sample of performance variances to confirm whether management's follow-up or corrective action was appropriate. </li><li>Determine whether such action has enhanced user acceptance of system results.</li></ul><p> <br> <strong>Data Sensitivity (Production)</strong> With this issue, the risk is unauthorized access to personally identifiable information or other sensitive data that violates regulatory requirements. Controls include ensuring documented procedures are in place that restrict system access to authorized users. Additionally, ongoing monitoring for compliance is needed. Control testing includes:</p><p></p><ul><li>Comparing system access logs to a documented list of authorized users.</li><li>Notifying management about audit exceptions.</li></ul><h2>Algorithmic Accountability</h2><p>As AI technology matures, algorithmic bias in AI systems and lack of consumer privacy have raised ethical concerns for business leaders, politicians, and regulators. 
Nearly one-third of CEO respondents ranked AI ethics risk as one of their top three AI concerns, according to Deloitte's 2018 State of AI and Intelligent Automation in Business Survey. </p><p>What's more, the U.S. Federal Trade Commission (FTC) addressed hidden bias in training datasets and algorithms and its effect on consumers in a 2016 report, Big Data: A Tool for Inclusion or Exclusion? Such bias could have unintended consequences for consumer access to credit, insurance, and employment, the report notes. A recent U.S. Senate bill, the Algorithmic Accountability Act of 2019, would direct the FTC to require large companies to audit their AI algorithms for bias and their datasets for privacy issues, as well as correct them. If enacted, this legislation would impact the way in which such systems are developed and validated. </p><p>Given these developments, the master audit plan of many organizations could go beyond rendering assurance on AI system integrity to evaluating compliance with new regulations. Internal auditors also may need to serve as the ethical conscience for the business leaders responsible for detecting and eliminating AI system bias, much as they do for the governance of financial reporting controls. </p><p>These responsibilities may make it harder for internal audit to navigate the path to effective AI system auditing. Yet, those departments that embark on the journey may be rewarded by improved AI system integrity and enhanced professional responsibility. </p><p><em>To learn more about AI, read </em><a href="/2019/Pages/Getting-to-Know-Common-AI-Terms.aspx"><em>"Getting to Know Common AI Terms."</em></a><br></p>Dennis Applegate
