Getting to Know Common AI Terms<p>Artificial intelligence (AI) systems development and operation involve terms and techniques that may be new to some internal auditors, or that carry meanings or applications different from their normal audit usage. Each of the terms below has a long history in the development and execution of AI processes, and a common understanding of them can be applied to <a href="/2019/Pages/Framing-AI-Audits.aspx">auditing these systems</a>. </p><h2>Locked Datasets</h2><p>Datasets are difficult to create because independent judges should review their features and uses, and then validate them for correctness. These judgments drive the system in the training phase of development, and if the data is not validated, the system may learn from errors. </p><p>In machine learning systems, datasets are normally "locked," meaning the data is not changed to fit the algorithm. Instead, the algorithm is changed based on the system predictions derived from the data. As a safety precaution, data scientists usually are barred from examining the datasets to determine the reasons for such changes. This prevents them from biasing the algorithm with their understanding of the data relationships. </p><p>Consider a system that reviews the ZIP codes of business accounts. The system may fail to recognize ZIP codes beginning with "0," such as 01001 for Agawam, Mass., or postal codes that contain alphanumeric characters, such as V6C1H2 for Vancouver, B.C. Locking the dataset prevents the data scientists from inspecting the errors directly. Instead, they would have to investigate why the system is interpreting some accounts differently than others and whether the algorithm contains a defect. Barring data scientists in this way is another form of locking the dataset.
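</p><p>The ZIP code pitfall above can be sketched in a few lines of Python. The account records here are invented for illustration; a real system would read them from the locked dataset itself.</p>

```python
# Hypothetical business account records; the ZIP/postal codes are the point.
accounts = [
    {"name": "Agawam Supply", "zip": "01001"},    # leading-zero U.S. ZIP
    {"name": "Vancouver Ltd.", "zip": "V6C1H2"},  # alphanumeric Canadian code
    {"name": "Dallas Goods", "zip": "75201"},
]

def naive_zip(record):
    """Coercing codes to integers silently corrupts or rejects them."""
    try:
        return int(record["zip"])  # "01001" -> 1001; "V6C1H2" -> ValueError
    except ValueError:
        return None

def robust_zip(record):
    """Treating codes as strings preserves leading zeros and letters."""
    code = record["zip"].strip().upper()
    return code if code.isalnum() else None

naive = [naive_zip(r) for r in accounts]    # [1001, None, 75201]
robust = [robust_zip(r) for r in accounts]  # ["01001", "V6C1H2", "75201"]
```

<p>A system trained on the integer-coerced column would learn from corrupted accounts, which is exactly the kind of defect the data scientists must infer indirectly when the dataset is locked.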
</p><h2>Third-party Judges</h2><p>Because historical datasets are not always verified before AI system use, the internal auditor needs to ensure an appropriate validation process is in place to confirm data integrity. Use of automated systems to judge data integrity may mask AI issues that adversely affect the quality of the output. </p><p>Therefore, a customary practice in the industry has been to use independent, third-party judges for validation purposes. The judges, however, must have sufficient expertise in the data domain of the system to render valid test results. If they use algorithms as part of their validation process, then those, too, must be validated independently. Usually, any inconsistency in the test results during judging is reviewed and reconciled as part of the process. A well-designed validation process will help avoid user acceptance of system outcomes that are inherently flawed.</p><h2>Overfitting and Trimming</h2><p>The data scientist selects training datasets that are intended to reflect the actual data domain. Sometimes those datasets reflect ambiguous conditions that should be trimmed or deleted to enhance the probability of error-free results. </p><p>For example, the first name "Pat" can apply to either gender. To avoid system confusion, the data scientist would likely trim it from the training dataset. However, the first name "Tracy," although historically given to both men and women, is more commonly a female name. Trimming "Tracy" from the training datasets might bias system outcomes toward males without eliminating much ambiguity when the production data is processed. </p><p>The problem with trimming is that it can cause data overfit in an algorithm and biased system results during the production phase. Data overfit occurs when the training dataset is trimmed to derive a particular algorithm, rather than the algorithm adjusting itself to a training dataset that represents the actual data domain.
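</p><p>A toy example can make the overfit mechanism concrete. The names, labels, and "model" below are invented for illustration (the model is just a lookup table), but the pattern matches the trimming problem described above.</p>

```python
from collections import defaultdict

# Invented training records pairing first names with a gender label.
training = [("Mary", "F"), ("Susan", "F"), ("Tracy", "F"), ("Tracy", "M"),
            ("John", "M"), ("Pat", "F"), ("Pat", "M")]

# Trim every name that ever appears with more than one label.
labels = defaultdict(set)
for name, gender in training:
    labels[name].add(gender)
ambiguous = {n for n, g in labels.items() if len(g) > 1}  # {"Tracy", "Pat"}
trimmed = [(n, g) for n, g in training if n not in ambiguous]

# A trivial lookup "model" now scores 100% on its own trimmed data...
model = dict(trimmed)
train_accuracy = sum(model[n] == g for n, g in trimmed) / len(trimmed)

# ...but in production, where "Tracy" is usually female, the model has
# never seen most of the records and can only abstain or guess.
production = [("Tracy", "F"), ("Tracy", "F"), ("Tracy", "M"), ("John", "M")]
coverage = sum(n in model for n, _ in production) / len(production)
# train_accuracy == 1.0, coverage == 0.25
```

<p>A perfect score on trimmed training data is a warning sign rather than reassurance.</p><p>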
The resulting algorithm is not based on a representative data domain. Internal auditors should examine process controls over the training dataset to safeguard against data overfit caused by excessive trimming designed to achieve a desired algorithmic outcome. </p><h2>Outliers</h2><p>It is important for the data scientist to examine data outliers. For example, a machine learning system may be 90% accurate in correcting misspelled words, but it also may flag numbers as errors and correct them. Those corrections can wreak havoc with critical documents, such as financial reports, if the data scientist failed to review system predictions for such outliers.</p><h2>Metrics</h2><p>Performance metrics should be used to assess AI system accuracy (How close are the predictions to the true values?) and precision (How consistent are the outcomes between system iterations?). Such metrics are a best practice, because they indicate performance issues in AI system operations, including:</p><ul><li>False positives: flagging an acceptable item as an issue.</li><li>False negatives: passing an unacceptable item as correct.</li><li>Missed items: not addressing all items in the population. </li></ul><p>A formal review process to cover these issues improves system performance and helps decrease audit risk. Although accuracy and precision metrics show how well a system finds issues, they do not tell the entire story; counts of false positives, false negatives, and missed items are also needed to measure the full performance of a system.</p><h2>User Interpretation</h2><p>Internal auditors must be careful to safeguard the integrity of the AI audit from user misinterpretations of system outcomes.
That is because the system may generate supportable conclusions that are simply misunderstood or ignored. </p><p>For instance, if a system were to predict that jungle fires are related to climate change, that prediction does not confirm that climate change caused the fires. Earlier this year, news organizations reported that climate change caused fires in the Amazon jungle. However, <a href="" target="_blank">NASA had asserted</a> that fire activity was in line with previous years, with no change over time and no relation to global warming. While there might be a correlation between the two, causation should not be inferred from the system prediction. </p><p>Internal auditors also need to take the human factor into account when assessing system quality. System users may simply refuse to believe or act upon system predictions because of bias, personal preference, or preconceived notions. </p>Dennis Applegate
Framing AI Audits<p>Artificial intelligence (AI) is transforming business operations in myriad ways, from helping companies set product prices to extending credit based on customer behavior. Although the technology is still in its nascent stage, organizations are using AI to rank money-laundering schemes by degree of risk based on the nature of the transaction, according to a July EY analytics article. Others are leveraging AI to predict employee expense abuse based on the expense type and vendors involved. Small wonder that McKinsey & Company estimates that the technology could add $13 trillion per year in economic output worldwide by 2030. </p><p>If AI is not on internal audit's risk assessment radar now, it will be soon. As AI transitions from experimental to operational, organizations will increasingly use it to predict outcomes supporting management decision-making. Internal audit departments will need to provide management assurance that the predicted outcomes are reasonable by assessing AI risks and testing system controls.</p><h2>Evolving Technology</h2><p>AI uses two types of technologies for predictive analytics — static systems and machine learning. Static systems are relatively straightforward to audit, because with each system iteration, the predicted outcome will be consistent based on the datasets processed and the algorithm involved. If an algorithm is designed to add a column of numbers, it remains the same regardless of the number of rows in the column. Internal auditors normally test static systems by comparing the expected result to the actual result. </p><p>By contrast, there is no expected result in machine learning systems. Results are based on probability rather than absolute correctness. For example, the results of a Google search that float to the top of the list are those most often selected in prior searches, reflecting the most-clicked links but not necessarily the preferred choice.
Because the prediction is based on millions of previous searches, the probability is high — though not necessarily certain — that one of those top links is an acceptable choice. </p><p>Unlike static systems, the Google algorithm, itself, may evolve, resulting in potentially different outcomes for the same question when asked at different intervals. In machine learning, the system "learns" what the best prediction should be, and that prediction will be used in the next system iteration to establish a new set of outcome probabilities. The very unpredictability of the system output increases audit risk absent effective controls over the validity of the prediction. For that reason, internal auditors should consider a range of issues, risks, controls, and tests when providing assurance for an AI business system that uses machine learning for its predictions.</p><h2>AI System Development</h2><p><img src="/2019/PublishingImages/Applegate_Three-Phases.jpg" class="ms-rtePosition-2" alt="" style="margin:5px;width:500px;height:251px;" />The proficiency and due professional care standards of the International Professional Practices Framework require internal auditors to understand AI concepts and terms, as well as the phases of development, when planning an AI audit (see "Three Phases of Development," right). Because data fuels these systems, auditors must understand AI approaches to data analysis, including their effect on the system algorithm and its precision in generating outcome probabilities.</p><p> <em>Features</em> define the kinds of data for a system that would generate the best outcome. If the system objective is to flag employee expense reports for review, the features selected would be those that help predict the highest payment risk. These could include the nature of the business expense, vendors and dollar amounts involved, day and time reported, employee position, prior transactions, management authorization, and budget impact. 
A data scientist with expertise in this business problem would set the confidence level and predictive values and then let the system learn which features best determine the expense reports to flag.</p><p> <em>Labels</em> represent data points that a system would use to name a past outcome. For instance, based on historical data, one of the labels for entertainment expenses might be "New York dinner theater on Saturday night." The system then would know such expenses were incurred for this purpose on that night in the past and would use this data point to predict likely expense reports that might require close review before payment. </p><p> <em>Feature engineering</em> narrows the features selected to a critical few. Rather than provide a correct solution to a given problem, such as which business expense reports contain errors or fraud, machine learning calculates the probability that a given outcome is correct. In this case, the system would calculate which expense reports carry the highest probability of errors or fraud based on the features selected. The system then would rank the outcomes in descending order of probability. </p><p> <em>Machine learning</em> involves merging selected features and outcome labels from diverse datasets to train a system to generate a model that will predict a relationship between a set of features and a given label. The resulting algorithm and model are then refined in the testing phase using additional datasets. This phase may consider hundreds of features at once to discover which features yield the highest outcome probability based on the assigned labels. </p><p>Feature engineering then reduces the number of system features to enhance the precision of the outcome probabilities. Based on the testing phase, for example, the nature of the expense, the dollar amounts involved, and the level of the employee's position may best indicate high-risk business expense reports requiring close review.
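</p><p>As a sketch of the ranking step just described, the following Python assigns each expense report a risk probability from a few engineered features and sorts the reports in descending order. The feature names, weights, and report records are invented for illustration; in a real system the weights would be learned during training rather than set by hand.</p>

```python
import math

# Invented weights for three engineered features (a trained model would
# learn these); the negative bias keeps the baseline probability low.
WEIGHTS = {"amount_over_limit": 1.8, "weekend": 0.9, "no_approval": 1.4}
BIAS = -2.0

def risk_probability(report):
    """Logistic score: probability the report needs close review."""
    z = BIAS + sum(w for f, w in WEIGHTS.items() if report["features"].get(f))
    return 1.0 / (1.0 + math.exp(-z))

reports = [
    {"id": "E-101", "features": {"amount_over_limit": True, "weekend": True,
                                 "no_approval": True}},
    {"id": "E-102", "features": {"weekend": True}},
    {"id": "E-103", "features": {"amount_over_limit": True}},
]

# Rank in descending order of outcome probability, as the article describes.
ranked = sorted(reports, key=risk_probability, reverse=True)
# [r["id"] for r in ranked] -> ["E-101", "E-103", "E-102"]
```

<p>Reviewers would work the list from the top; in production, a machine learning system would keep adjusting the weights as actual outcomes arrive.</p><p>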
During the production phase, as the system calculates the risk of errors and fraud in actual expense reports, it may modify the algorithm based on actual output probabilities to improve the accuracy of future predictions. Doing so would create continuous system learning not seen in static systems. </p><p>In AI system development, it is important for organizations to establish an effective control environment, including accountability for compliance with corporate policies. This environment also should comprise safeguards over user access to proprietary or sensitive data, and performance metrics to measure the quality of the system output and user acceptance of system results. </p><h2>A Risk/Control Audit Framework</h2><table class="ms-rteTable-default" width="100%" cellspacing="0"><tbody><tr><td class="ms-rteTable-default" style="width:100%;"><p> <strong>Training Phase</strong></p><p>Considerations for adjusting the assessed level of AI audit risk include:</p><ul><li>If system reviews are in place to evaluate training data modifications, deletions, or trimming, this condition should help prevent overfitting the training dataset to generate a desired result, reducing audit risk.<br></li><li>New AI systems may use datasets of existing systems for reasons of time and cost. Such datasets, however, may contain bias and not include the kinds of data needed to generate the best system outcomes, increasing audit risk.<br> </li><li>AI datasets that consist of numerous data records should contain some errors. In fact, an error-free dataset would indicate a bad dataset, because the occurrence of errors should match the natural rate. For example, if 5% of employee expense reports are filled in incorrectly and are missing key data, then the training dataset should contain a similar frequency. If not, then audit risk increases. 
​</li></ul></td></tr></tbody></table><p>Nine procedures frame the audit of an AI system during the training, testing, and production phases of development. The framework provides a point of departure for AI audit planning and execution. Assessed risk drives the controls expected and subsequent internal auditor testing. </p><p>Internal auditors may need to adjust the procedures based on their preliminary survey of the AI system under audit, including a documented understanding of the system development process and an analysis of the relevant system risks and controls. Moreover, as auditors complete and document more of these audits, it may be necessary to adjust the framework.</p><p>Normally, internal auditors adjust their assessment of risk and their resulting audit project plans based on observations made in the preliminary audit survey. The boxes, at right, depict conditions that may alter assessed risk as well as modify expected AI system controls and subsequent audit testing during specific phases of development. </p><p> <strong>Data Bias (Training Phase)</strong> Use of datasets that are not representative of the true population may create bias in the system predictions. Bias risk also can result from failing to provide appropriate examples for the system application.</p><p>A control for data bias is to establish a system review and approval process to ensure there are verifiable datasets and system probabilities that represent the actual data conditions expected over the life of the system. Audit tests of control include ensuring that: </p><p></p><ul><li>Qualified data scientists have judged the datasets.</li><li>The confidence level and predictive values are reasonable given the data domain.</li><li>Overfitting has not biased system predictions. 
</li></ul><p> <br> <strong>Data Recycling (Training)</strong> This risk arises when developers recycle the wrong datasets for a new application, or impair the performance or maintenance of existing systems by using those datasets to create or update a new application.</p><p>One control for data recycling is independently examining repurposed data for compliance with contractual or other requirements. In addition, organizations can determine whether adjustments in the repurposed data have been made without impacting other applications. </p><p>Examples of control tests are: </p><ul><li>Evaluating the nature, timing, and extent of the independent examinations.</li><li>Testing the records of other applications for performance or maintenance issues that stem from the mutually shared datasets.</li></ul><p> <br> <strong>Data Origin (Training)</strong> Unauthorized or inappropriately sourced datasets can increase the risk of irrelevant, inaccurate, or incomplete system predictions during the production phase.</p><p>To control this risk, the organization should inspect datasets for origin and relevance, as well as compliance with contractual agreements, company protocols, or usage restrictions. The results of these inspections should be documented.
</p><p>To test controls, auditors should:</p><p></p><ul><li>Review data source agreements to ensure use of datasets is consistent with contract terms and company policy.</li><li>Examine the quality of the inspection reports, focusing on the propriety of data trimmed from the datasets.<br><br></li></ul><table class="ms-rteTable-default" width="100%" cellspacing="0"><tbody><tr><td class="ms-rteTable-default" style="width:100%;"><p> <strong>​</strong><strong>Testing Phase</strong></p><p>Considerations for adjusting the assessed level of AI audit risk include:</p><ul><li>If independent, third-party judges tested the system data, but no process is in place to reconcile differences in test results between judges, then audit risk increases. </li><li>Because system predictions are based on probability, perfect test results are not possible. If third-party judges evaluating the test results find no issues, then data overfit may have occurred, increasing audit risk. </li><li>If the system has not been validated to prevent user misinterpretations caused by incorrect data relationships, such as flagging business expense reports based on employee gender, then audit risk increases. Alternatively, if user interpretations based on system predictions have not been validated to ensure system data supports the interpretation, then audit risk also increases. </li><li>If data scientists fail to use representative datasets with examples involving critical scenarios to train the system, then audit risk increases. </li><li>If the datasets are not locked during testing, then the data scientist may adjust the algorithm to inadvertently process the data in a biased manner, increasing audit risk.</li><li>If the datasets are locked during testing, but the data scientist fails to review the actual system prediction for integrity, then audit risk increases. 
</li></ul></td></tr></tbody></table><p> <strong>Data Conclusion (Testing Phase)</strong> Inappropriately tested data relationships could result in improper system conclusions that are based on incorrect assumptions about the data. These conclusions could create bias in management decisions.</p><p>The control for this risk is to ensure each feature of the system contains data for which the purpose has been approved for use. Developers should assess the results of such data for misinterpretation and correct it, as appropriate. </p><p>Testing this control involves reviewing user interpretations and subsequent management decisions based on system predictions. By performing this test, organizations can ensure that the data supports the conclusions reached and decisions made by management.</p><p> <strong>Data Overfit (Testing)</strong> With this issue, the risk is that datasets may not reflect the actual data domain. Specifically, data outliers may have been trimmed during system testing, leading to a condition that overfits the algorithm to a biased dataset. That could cause the system to respond poorly during the production phase.</p><p>Organizations can control for this risk by validating datasets in system testing to ensure that the samples used represent all possible scenarios and that the datasets were modified appropriately to obtain the currently desired system outcome.</p><p>To test this control, internal auditors should review all outlier, rejected, or trimmed data to ensure that:</p><p></p><ul><li>Relevant data has not been trimmed from datasets.</li><li>Datasets remain locked throughout testing.</li><li>The algorithm has processed the data in an unbiased way.</li></ul><p> <br> <strong>Data Validation (Testing) </strong>Failure to validate datasets for integrity through automated systems or independent, third-party judges can lead to unsupported management decisions or regulatory violations. 
An example would be allowing the personal data of European Union (EU) citizens to be accessed outside of the EU in violation of Europe's General Data Protection Regulation.</p><p>Organizations can control for this risk by implementing a validation process that compares datasets to the underlying source data. If the organization uses automated systems, it should ensure the process reveals all underlying issues affecting the quality of the system output. If the organization uses independent, third-party judges, it should ensure the process allows judges the access they need to the raw data inputs and outputs.</p><p>To test these controls, internal auditors should:</p><p></p><ul><li>Assess the process and conditions under which the validation took place, assuring that all high-risk datasets used in the system were validated.</li><li>Confirm randomly selected datasets with underlying source data.</li><li>When datasets are based on current system data, validate such data is correct to avert a flawed assessment of actual system data.</li></ul><p> <br> <strong>Data Processing (Production Phase)</strong> Failing to validate internal systems processing can cause inconsistent, incomplete, or incorrect reporting output and user decisions. However, periodically reviewing and validating input and output data at critical points in the data pipeline can mitigate this risk and ensure processing is in accordance with the system design. </p><p>Auditors can test this control by:</p><ul><li>Reconstructing selected data output from the same data input to validate system outcomes.</li><li>Performing the system operation again.</li><li>Using the results to reassess system risk.</li></ul><p> <br> <strong>Data Performance (Production)</strong> If there is a lack of performance metrics to assess the quality of system output, the organization will fail to detect issues that diminish user acceptance of system results. 
For example, an AI system could fail to address government tax or environmental regulations over business activity. </p><p>Controlling data performance risk requires organizations to establish metrics to evaluate system performance in both the training and production phases. Such metrics should include the nature and extent of false positives, false negatives, and missed items. In addition, developers should implement a feedback loop for users to report system errors directly, among other performance measures. </p><table class="ms-rteTable-default" width="100%" cellspacing="0"><tbody><tr><td class="ms-rteTable-default" style="width:100%;"><p>​<strong>Production Phase</strong></p><p>Considerations for adjusting the assessed level of AI audit risk include:</p><ul><li>Systems that leverage the datasets of existing systems already audited should lower overall audit risk and not require as much audit testing as new systems using datasets not previously audited.<br></li><li>Systems that process inputs and outputs at all stages of the data pipeline should facilitate validation of system-supported user decisions and lower overall audit risk. However, if data inputs and outputs are processed in a black-box environment, confirming internal system operations may not be possible. 
That would increase the audit risk of drawing the wrong conclusion about the reasonableness of the system output.<br> </li><li>If performance metrics are used to measure the quality of the data output, user acceptance of system results, and system compliance with government regulations, then audit risk decreases.<br></li><li>If performance metrics monitor both system training and production data, then audit risk decreases.<br></li><li>If performance metrics measure system accuracy but not precision, overlooking a possible system performance issue, then audit risk increases.<br></li><li>Well-designed systems prevent unauthorized access to system data based on company protocols and regulatory requirements and routinely monitor access for security breaches, decreasing audit risk. </li></ul></td></tr></tbody></table><p>To test these controls, internal auditors should:</p><p></p><ul><li>Examine reported variances from established performance measures. </li><li>Test a representative sample of performance variances to confirm whether management's follow-up or corrective action was appropriate. </li><li>Determine whether such action has enhanced user acceptance of system results.</li></ul><p> <br> <strong>Data Sensitivity (Production)</strong> With this issue, the risk is unauthorized access to personally identifiable information or other sensitive data that violates regulatory requirements. Controls include ensuring documented procedures are in place that restrict system access to authorized users. Additionally, ongoing monitoring for compliance is needed. Control testing includes:</p><p></p><ul><li>Comparing system access logs to a documented list of authorized users.</li><li>Notifying management about audit exceptions.</li></ul><h2>Algorithmic Accountability</h2><p>As AI technology matures, algorithmic bias in AI systems and lack of consumer privacy have raised ethical concerns for business leaders, politicians, and regulators. 
Nearly one-third of CEO respondents ranked AI ethics risk as one of their top three AI concerns, according to Deloitte's 2018 State of AI and Intelligent Automation in Business Survey. </p><p>What's more, the U.S. Federal Trade Commission (FTC) addressed hidden bias in training datasets and algorithms and its effect on consumers in a 2016 report, Big Data: A Tool for Inclusion or Exclusion? Such bias could have unintended consequences for consumer access to credit, insurance, and employment, the report notes. A recent U.S. Senate bill, the Algorithmic Accountability Act of 2019, would direct the FTC to require large companies to audit their AI algorithms for bias and their datasets for privacy issues, as well as correct them. If enacted, this legislation would impact the way in which such systems are developed and validated. </p><p>Given these developments, the master audit plan of many organizations could go beyond rendering assurance on AI system integrity to evaluating compliance with new regulations. Internal auditors also may need to serve as an ethical conscience for the business leaders responsible for detecting and eliminating AI system bias, much as they do for the governance of financial reporting controls. </p><p>These responsibilities may make it harder for internal audit to navigate the path to effective AI system auditing. Yet, those departments that embark on the journey may be rewarded by improved AI system integrity and enhanced professional responsibility. </p><p><em>To learn more about AI, read </em><a href="/2019/Pages/Getting-to-Know-Common-AI-Terms.aspx"><em>"Getting to Know Common AI Terms."</em></a><br></p>Dennis Applegate
Editor's Note: On Pace With Technology<p>The accelerating pace of technology advancements is creating significant disruption within organizations — and it appears internal auditors may not be keeping pace. A new report from The IIA, <a href="" target="_blank">OnRisk 2020: A Guide to Understanding, Aligning, and Optimizing Risk</a>, reveals that one risk area in which internal audit may be falling behind is data and new technology. </p><p>According to OnRisk, only 17% of internal auditors consider themselves knowledgeable about data and new technology, compared with 42% of board members and 26% of C-suite executives. For auditors to be taken seriously in the boardroom, they must address these knowledge gaps. </p><p>The OnRisk report recommends that chief audit executives "dedicate resources to better understanding how the organization is leveraging data and technology in new ways." Internal audit should be able to provide assurance on the impact of data and new technology on the "collection, management, and protection of data," the report says.</p><p>To do that, internal auditors need to ensure they're educating themselves in these areas. In that vein, auditors may want to read "Framing AI Audits" (page 29), which takes an in-depth look at internal audit's role in assessing artificial intelligence risks and testing system controls. Also in this issue, "Bots of Assurance" (page 42) considers how audit functions can catch up with their organizations' use of robotic process automation by deploying bots to enhance their assurance capabilities. Finally, readers may want to check out the first of a three-part series of reports coming from Deloitte and the Internal Audit Foundation on new technologies, <a href="" target="_blank">Moving Internal Audit Deeper Into the Digital Age: Part 1</a>.
</p><p>According to OnRisk, as risks around data and new technology grow in relevance over the next five years, risk management players need to build knowledge in this area. Internal audit professionals who take their fate into their own hands and improve their tech knowledge will likely find themselves in high demand, as OnRisk also notes organizations are struggling to attract and retain talent with data and IT skills. </p><p>Finally, with this issue we say goodbye to our designer, Joe Yacinski. I have worked with Joe since I joined The IIA in late 2000, and I will greatly miss our collaborations. His thoughtful and creative approaches to the many challenging articles we've brought him over the years — how does one illustrate internal control? — have resulted in the magazine receiving numerous accolades. Joe's contributions have helped make the magazine the professional publication it is today. Joe, thank you, and we wish you well.</p>Anne Millage
The Analytics Journey<p>If someone asked you if your internal audit department has an analytics program, would you hesitate and answer something like: "We hired an analytics person," "We bought a tool," or "We have a few tests running"? These are a team member, a tool, even some results, but none of them is a program. They are no more of a program than having someone responsible for quality would indicate that the department has a quality program.</p><p>People, tools, and results are elements of a program; they show evidence that <em>something</em> is there. But at its core, a program is a new work function, and a work function is defined by whether you know what you are doing, and why and how you do it. When internal auditors understand the function, they can explain it to others, get their support, and if needed, divide work effectively. These are the hallmarks of an analytics program.</p><h2>Five Elements</h2><p>To explain analytics programs, it is useful to break them into five elements:</p><ol><li> <strong>Program approach/fit</strong> — Why internal audit has (or wants) the program, and how the program fits with the mission of the organization and internal audit, in particular. </li><li> <strong>Test development process</strong> — The workflow, tools, and templates auditors use to approach these projects, get support, and track and obtain consistent results.</li><li> <strong>Development roadmap</strong> — What internal audit will tackle first, what it will tackle next, and when it should re-evaluate the pipeline order. Also, why did the audit function choose this order?</li><li> <strong>Analytic tools and techniques</strong> — These tools and techniques start with access to data sources, and extend to analysis and communication approaches needed to process and explore the data to get the answers auditors seek.
Beyond a tool, internal audit needs a toolbox.</li><li> <strong>Key contacts</strong> — Members of the audit team and audit clients, as well as stakeholders who have interest in the results and understand the process. These contacts give auditors access to data and advice about how to interpret it. These individuals also can help with research and following up on findings. Key contacts will likely change with each new project under the program.</li></ol><table class="ms-rteTable-default" width="100%" cellspacing="0"><tbody><tr><td class="ms-rteTable-default" style="width:100%;"><p> <strong>Further Reading </strong></p><p>Benchmarking helps ideas flourish into new, and more powerful, concepts. When it comes to analytics programs, there are some people who know what they are talking about. Internal auditors should look at what they have to say:</p><ul><li>Manuel Coello, ACL, <a href="" target="_blank"><span class="ms-rteForeColor-10">Top 100 Tests</span></a> (PDF) </li><li>Deloitte, <a href="" target="_blank"><span class="ms-rteForeColor-10">Continuous Monitoring and Continuous Auditing: From Idea to Implementation</span></a> (PDF)</li><li>Brent Dykes, <a href="" target="_blank"><span class="ms-rteForeColor-10">The Two Guiding Principles for Data Quality in Digital Analytics</span></a></li><li>Daniel Haight, <a href="" target="_blank"><span class="ms-rteForeColor-10">The Five Faces of the Analytics Dream Team</span></a> </li><li>The IIA, <a href="" target="_blank"><span class="ms-rteForeColor-10">Data Analytics Resource Exchange</span></a></li><li>Wolters Kluwer (TeamMate), <a href="" target="_blank"><span class="ms-rteForeColor-10">Audit Technology Insights: Technology Champions, Key Strategic Enablers</span></a> (registration required) </li></ul></td></tr></tbody></table><p>Together 
these five elements articulate what internal audit is doing with analytics, why it does this, how it does this, and who is helping auditors make it happen. The elements enable auditors to talk about the program and rally support for it. They also define success and help internal audit track progress towards it. For example, internal audit could report that:</p><p>"The analytics program in our organization is embedded in the fraud and forensics unit of internal audit, where we use it to develop detective controls to support fraud-fighting efforts. We use Alteryx and Microsoft Power BI to combine simple red-flags tests to risk-rank transactions for human review. The red flags and their interpretation are developed in collaboration with the data owner business unit and/or benchmarked against other audit shops. So far, we have tackled payroll and the procurement-to-payment cycle, and now we are scoping tests in travel and entertainment."</p><p>Note that although this is a data analytics program, access to data and business understanding are not called out as independent program elements. Instead, they are embedded in internal audit's network of key contacts. They are key elements of a project, but are not elements of the program that houses those projects.</p><h2>Is the Program Working?</h2><p>Internal audit has a successful analytics program when the department can transition it among team members without having to start again from scratch. This handover is possible because team members know what is done, why it is done, and in general terms, how to replicate it to address new problems. When internal audit reaches this stage of development, it will get extra credit if team members can recite the vision and explain what should be the next project on the horizon.</p><p>Analytics programs establish the support needed to achieve their aims: from tools and techniques, to relationships that will bring ideas on what to test and access to the data needed to do it. 
These relationships can support understanding of what the data is saying, and help in resolving the issues the program reveals. In the end, an analytics program is in place when it becomes "the way we do this here," and is no longer dependent on one key person to keep it going. </p>Francisco Aristiguieta
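The red-flag approach the article's example program describes — combining simple tests to risk-rank transactions for human review — can be sketched in a few lines. This is a minimal illustration, not the program described above: the specific flags, weights, and sample transactions are invented assumptions.

```python
# Hypothetical sketch: combine simple red-flag tests into a risk score
# and rank transactions for human review. Flags, weights, and data are
# illustrative assumptions only.

def red_flags(txn, weights=None):
    """Evaluate simple red-flag tests against one payment transaction."""
    weights = weights or {"round_amount": 1, "weekend": 1, "over_limit": 3}
    flags = {
        "round_amount": txn["amount"] % 1000 == 0,   # suspiciously round figure
        "weekend": txn["weekday"] >= 5,              # posted on Sat/Sun
        "over_limit": txn["amount"] > txn["approval_limit"],
    }
    score = sum(weights[name] for name, hit in flags.items() if hit)
    return score, [name for name, hit in flags.items() if hit]

def risk_rank(transactions):
    """Return transactions sorted by descending red-flag score."""
    scored = [(red_flags(t)[0], t) for t in transactions]
    return [t for score, t in sorted(scored, key=lambda s: -s[0])]

txns = [
    {"id": "T1", "amount": 5000, "weekday": 6, "approval_limit": 2500},
    {"id": "T2", "amount": 1234, "weekday": 2, "approval_limit": 2500},
]
ranked = risk_rank(txns)
```

In practice, as the article notes, the flags and their weights would be developed with the data-owning business unit rather than chosen by the audit team alone.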
People-centric Innovation<p>People seem to get lost in discussions about digital transformation, yet they are at the center of today's business innovations. Technology, customization, and sustainability are the three main drivers changing consumer behavior, according to a Euromonitor International report. Meanwhile, Gartner analysts say tomorrow's technology will be people-centric.</p><p>The ubiquity of smartphones has made technology indispensable in everyday life, Euromonitor's <a href="" target="_blank">2019 Megatrends: State of Play</a> notes. Customization builds on technologies such as artificial intelligence (AI) and 5G to reinvent how people shop. Sustainability reflects growing consumer activism about environmental impacts, the London-based research firm says. </p><p>"The change in consumer demands will contribute toward a rise in investments amongst emerging economies, driving businesses to develop innovative strategies to meet those demands," says Alison Angus, head of lifestyles at Euromonitor.</p><h2>More Than Meets the Eye</h2><p>Euromonitor predicts eight changes in consumer behavior will cause the greatest disruption across industries. By analyzing these megatrends, organizations can build long-term strategies to remain relevant, the report notes.</p><p> <strong>Connected Consumers</strong> Smartphones and other connected devices have given consumers multiple ways to interact with digital content and companies, but they also have created greater dependency on those devices. In this time-pressed environment, organizations will need to ensure that consumer interactions provide value, the report advises.</p><p> <strong>Healthy Living</strong> Consumers have a holistic interest in physical, mental, and spiritual wellness. 
The report points to growing interest this year in health-related technology services such as genetic testing and personalized nutrition analysis.</p><p> <strong>Ethical Living</strong> Almost three in 10 consumers say purchasing eco- or ethically conscious products makes them feel good, according to Euromonitor's 2019 Lifestyles Survey. Euromonitor predicts concern about the environment, sustainability, and labor practices will be one of the most significant market disruptors in the coming years.</p><p> <strong>Shopping Reinvented</strong> Connectivity gives consumers more information about products and services, so organizations need to be able to engage with them "anytime and anywhere," the report says. These consumers demand more personalization, budget-friendly experiences, and greater convenience.</p><p> <strong>Experience More</strong> Euromonitor points out that consumers are seeking experiences more than possessions. This is pushing businesses to emphasize experiences, experiment with marketing strategies, and become more consumer-centric.</p><p> <strong>Middle Class Retreat</strong> The middle class in developed nations is becoming more budget-conscious and selective about purchases, the report finds. Yet because the middle class remains vital to the consumer market, this megatrend may impact other megatrends.</p><p> <strong>Premiumization</strong> Consumers will spend more on the products and services that matter most to them, Euromonitor says. They are seeking more personalized and convenient service, and products that appeal to wellness and happiness.</p><p> <strong>Shifting Market Frontiers</strong> Euromonitor says economic power is shifting to fast-growing economies in Asia and Africa. Businesses investing in those regions will need strategies that are "sensitive to the environment and local communities."</p><h2>Building Smart Spaces</h2><p>Technology trends are revolving around people, too. 
At this month's Gartner IT Symposium in Orlando, <a href="" target="_blank">Gartner analysts identified 10 strategic technology trends</a> that may reach "a tipping point" within the next five years. All are based on the concept of creating "smart spaces" in which people and technology interact. </p><p>"Multiple elements — including people, processes, services, and things — come together in a smart space to create a more immersive, interactive, and automated experience," says David Cearley, a vice president at Gartner.</p><p> <strong>Hyperautomation</strong> Although this trend began with robotic process automation, it now involves combinations of machine learning and automation tools. In addition to tools, organizations need to understand all the steps of automation and how automation mechanisms can be coordinated, Gartner says.</p><p> <strong>Multiexperience</strong> Gartner predicts technologies such as conversational platforms, virtual reality, and augmented reality will shift the user experience by 2028. This will enable businesses to provide a multisensory experience for delivering "nuanced information."</p><p> <strong>Democratization of Expertise</strong> This trend is about giving people access to technical and business expertise without extensive training. For example, Gartner expects that data analytics, application development, and design tools will become more useful for people who don't have specialized knowledge. </p><p> <strong>Human Augmentation</strong> Once the stuff of science fiction, technology to augment physical and cognitive abilities will grow in use over the next 10 years, Gartner predicts. An example would be a wearable device that provides access to information and computer applications.</p><p> <strong>Transparency and Traceability</strong> Privacy regulations and emphasis on protecting personal data have made transparency and traceability key elements of data ethics. 
In building these capabilities, Gartner says organizations should focus on AI, ownership and control of personal data, and ethically aligned design.</p><p> <strong>The Empowered Edge</strong> Edge computing puts information processing, content collection, and delivery closer to the sources and consumers of that information. Although currently focused on the Internet of Things, Gartner expects edge computing to expand to applications such as robots and operational systems. </p><p> <strong>Distributed Cloud </strong>This technology distributes public cloud services to other locations, in contrast to the current centralized cloud model. </p><p> <strong>Autonomous Things</strong> These physical devices use AI to automate functions once performed by people. The concept is to develop technologies that interact "more naturally with their surroundings and with people," Gartner says. </p><p> <strong>Practical Blockchain</strong> New blockchain applications such as asset tracking and identity management can provide more transparency, lower costs, and settle transactions more quickly, Gartner says. </p><p> <strong>AI Security</strong> Increased use of AI for decision-making raises new security challenges such as increasing points of attack. Gartner advises focusing on protecting AI-powered systems, using AI to enhance defenses, and looking out for AI-based attacks.</p>Tim McCollum
Top Challenges of Automating Audit<p>Organizations are rapidly adopting technologies such as cloud computing, robotic process automation (RPA), machine learning, blockchain, and cognitive computing to create tomorrow’s business in today’s market. Internal audit needs to transform its processes to keep pace with these changes, and IT audit processes are an excellent place to start this transformation.</p><p>Organizations that still perform most internal audit tasks manually complicate IT governance. In this manual model, auditors have adopted many compliance laws, policies, procedures, guidelines, and standards, along with their related control objectives. Moreover, internal audit manages audit process elements such as training, standards, risk, planning, documentation, interviews, and findings separately. </p><p>An automated internal audit process can enable the audit function to link, consolidate, and integrate the planning, performance, and response steps of the audit process into a holistic approach. The process should present audit recommendations in a way that is dynamically sustainable within the organization’s integrated action plans. </p><p>Since 2012, many standards and frameworks have changed their models, procedures, and guidelines to elaborate on the role of the IT governance process. Accordingly, internal audit should redesign its processes to coincide with new, streamlined IT processes and related roles. Meanwhile, IT audit specialists should understand the interoperability among the conceptual models of IT management, governance, standards, events, audits, and planning.</p><p>Transforming audit processes comes with challenges, though. Each of these challenges can be encapsulated in a pattern of a problem and a solution, which internal audit can prioritize based on its stakeholders’ needs.</p><h2>1. 
Syncing the IT Audit Process With IT Project Planning </h2><p><strong>Problem:</strong> IT audit teams need a way to link, tailor, and update audit findings and recommendations for ongoing IT projects and action plans. This will be necessary for auditors to follow up on findings and identify who is responsible for carrying out audit recommendations. </p><p><strong>Solution:</strong> An automated IT audit system would break IT audit work into two levels — findings’ recommendations and their final conditions — encompassing all preventive, detective, and corrective controls. The recommended actions reported in audit findings should be linked, integrated, and synced by their related IT project’s nondisclosure agreements, service-level agreements, and contracts. Then, the automated IT audit system should confirm that management addressed the recommendation.</p><h2>2. Letting IT Governance Direct the IT Audit Process</h2><p><strong>Problem:</strong> The role of the IT audit team in corporate governance is important because the function can help bridge the gap between the business and IT in organizations. IT governance is a key part of corporate governance, which directs and monitors the finance, quality, operations, and IT functions. Three of these functions — finance, quality, and operations — are being transformed by innovative, technology-based processes. Thus, the problems are how the board and executives will design and implement a corporate governance system and how the IT governance process will be automated. </p><p><strong>Solution:</strong> Automating the IT governance process should be comprehensive and agile. In other words, the IT governance, risk, and control mapping and cascading of goals and indicators among all levels of the organization must be user-friendly. 
To have an agile audit function, though, these maps and cascades should be tailored based on the types of governance roles such as the board, executives, internal auditors, chief information officer, and IT manager. </p><p>The internal audit function also should map key performance indicators based on the control objectives of various regulations, standards, and frameworks into its goals. These governance requirements include frameworks from The Committee of Sponsoring Organizations of the Treadway Commission and the U.S. National Institute of Standards and Technology (NIST), industry requirements such as the Payment Card Industry Data Security Standard, and regulations such as the European Union’s General Data Protection Regulation and the U.S. Sarbanes-Oxley Act of 2002.</p><h2>3. Transforming IT Audit Processes to a DevOps Review</h2><p><strong>Problem:</strong> Nowadays, some nonfunctional requirements such as cybersecurity, machine learning, and blockchain are inherently becoming functional requirements. This change will have fundamental effects on the IT audit process. For example, IT auditors will need to assess cybersecurity or blockchain requirements during the organization’s system development operations (DevOps) process and change their audit program schedule to fit the DevOps schedule. This change can be a real challenge, especially for small and medium-sized audit teams that lack skills and experience with DevOps. </p><p><strong>Solution:</strong> Internal audit could solve this problem by moving to an “IT audit as an embedded DevOps review service” model. As a result, the review processes for IT governance, risk, and controls must be embedded into the DevOps life cycle. As part of this process, an automated system may provide access to metadata. For example, an auditor could set up a software robot to collect evidence about risks related to vendor lock-in, changes in vendors, and data conversion. 
Similarly, gathering cloud provider metadata through RPA can enable internal audit to respond to other cloud-based risks.</p><p>Generally, the business model must be clear, well-defined, mature, and well-documented when any kind of business, especially IT audit, wants to migrate to the cloud. The IT audit process also will become more streamlined and mature as a cloud-based system. Thus, the cloud and robotic process automation can bring an iterative business model in which the IT audit process is transformed into a cognitive computing system. This system could result in more affordable audit costs and enable IT auditors to perform more engagements each year based on automated best practices.</p><h2>4. Mitigating IT Standards’ Side Effects </h2><p><strong>Problem:</strong> Applying some IT standards is analogous to a drug interfering with other drugs and having adverse effects on a body. Without a unified medicine solution, a prescription may not provide the greatest benefits and the fewest negative effects. Likewise, internal audit should ensure the side effects of IT standards do not cause problems such as increasing compliance costs. Auditors must address two issues:</p><ul><li>Deciding which sections of IT-related standards such as COBIT 5, ISO 27001, and NIST Special Publication 800-30 best conform with the organization’s risk management framework.</li><li>Addressing conflicts and duplications among the various standards that might result in duplicate control objectives.</li></ul><p><strong>Solution:</strong> An automated IT audit system should use machine learning and recommendation systems to remove the similar or contradictory control objectives of IT standards. This way, the audit system can control the duplications among all of the standards’ segments and use artificial intelligence to recommend an efficient and customized set of controls. 
</p><h2>Transforming the Auditor</h2><p>For automation to overcome these challenges, internal auditors must transform themselves, as well. This is an area in which IT audit specialists can help organizations find, prioritize, and invest in the right innovations to automate IT, internal audit, and cybersecurity processes. Moreover, by identifying ways to automate IT governance, risk, and controls, internal audit can help the IT function align its operations with the organization’s governance and transformation processes.  <br></p>Seyyed Mohsen Hashemi
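The duplicate-control-objective problem in challenge 4 can be illustrated with a toy text-similarity check. This sketch uses simple Jaccard similarity over word sets as a stand-in for the machine-learning and recommendation systems the article envisions; the sample objectives and their IDs are hypothetical.

```python
# Illustrative sketch only: flag near-duplicate control objectives across
# standards using word-set Jaccard similarity. A production system would
# use richer ML techniques; sample objectives are invented.

def jaccard(a, b):
    """Similarity of two texts as overlap of their word sets (0.0 to 1.0)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def find_duplicates(objectives, threshold=0.6):
    """Return pairs of control-objective IDs whose wording largely overlaps."""
    pairs = []
    items = list(objectives.items())
    for i, (id_a, text_a) in enumerate(items):
        for id_b, text_b in items[i + 1:]:
            if jaccard(text_a, text_b) >= threshold:
                pairs.append((id_a, id_b))
    return pairs

objectives = {
    "STD1-OBJ1": "establish and maintain an information security management system",
    "STD2-OBJ7": "establish and maintain an information security management system policy",
    "STD3-OBJ3": "identify threats and vulnerabilities and assess risk",
}
dups = find_duplicates(objectives)
```

Flagged pairs would then go to a reviewer to decide whether one control objective can satisfy both standards, reducing the duplicate testing the article describes.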
Cybersecurity's Key Ally<h2>What is the relationship between IT and internal audit in cybersecurity and preparedness? </h2><p>IT is essentially the assets that cybersecurity is supposed to be protecting. Internal audit should ensure technical and nontechnical controls are in place and operating effectively. Internal audit personnel must become intimately familiar with cybersecurity and how to test the effectiveness of cyber controls. Too often, internal auditors are not technical enough to assess whether an organization’s cyber controls are adequate to protect the assets they were put in place to protect. <br></p><p>Internal audit’s most valuable role is ensuring cybersecurity functions have the resources necessary to protect the organization effectively. Whether those resources be funding, staffing, or data from the organization’s IT systems, internal audit should be a strong advocate for the cybersecurity function by raising awareness around the organization’s cybersecurity needs. Internal audit assessment results should be a major topic in C-suite briefings to ensure the cybersecurity function receives the support necessary to protect the organization.<br></p><h2>How can an organization ensure its employees do not become part of a social engineering attack?</h2><p>Employees should be trained to identify and avoid becoming a victim of social engineering as part of an effective cyber education and awareness program, where frequent simulation exercises are a core component. The results of these exercises should be communicated across the organization, and C-suite executives should be kept up to speed on how the various areas of the company score on these exercises. Some organizations have begun to factor the results into employee performance reviews. For example, if an employee continuously fails phishing tests, that employee may be subjected to extra training, or his or her yearly performance rating might be impacted. 
Regardless of the consequences, C-level support is critical to raising awareness of social engineering among employees. </p>Staff
The Business of Ethical AI<p>For artificial intelligence (AI) to reach its potential, people must be able to trust it, according to Angel Gurría, secretary general of the Organisation for Economic Co-operation and Development (OECD). Speaking this month at the <a href="" target="_blank">London Business School</a>, Gurría noted that human-centered AI could increase productivity and foster "inclusive human progress." In the wrong hands, though, it could be misused. </p><p>"Ethics is the starting point," he said, "that divides what you should do and what you should not do with this kind of knowledge and information and technology."</p><p>The OECD is among several organizations and government bodies that are raising questions and issuing proposals about how AI can be implemented ethically. For most of these organizations, the key word is <em>trust</em>. But calls for ethical AI may be falling on deaf ears among businesses.</p><h2>Businesses Aren't Worried</h2><p>In a survey of more than 5,300 employers and employees in six countries, nearly two-thirds of employers say their organization would be using AI by 2022. However, 54% of employers say they aren't concerned that the organization could use AI unethically, according to the study by Genesys, a customer-experience company in San Francisco. Similarly, 52% aren't worried that employees would misuse AI.</p><p>Just over one-fourth of these employers are concerned about future liability for "unforeseen use of AI," the company notes in a press release. Currently, only 23% of employers have a written policy for using AI and robots ethically. Among employers that lack a policy, 40% say their organization should have one.</p><p>"We advise companies to develop and document their policies on AI sooner rather than later," says Merijn te Booij, chief marketing officer at Genesys. 
Those organizations should include employees in the process, te Booij advises, "to quell any apprehension and promote an environment of trust and transparency."</p><p>That word again: trust.</p><p>"Trust is still foundational to business," writes Iain Brown, head of Data Science at SAS UK and Ireland, this month on <a href="" target="_blank"> <em>TechRadar</em></a>. Brown says one-fourth of consumers will act if they think an organization doesn't respect or protect their data. </p><p>Despite laws such as the European Union's (EU's) General Data Protection Regulation, consumers may expect greater transparency than current regulations stipulate — particularly where "data meets AI," Brown says. He advises asking three questions to determine whether the organization is using AI ethically:</p><ul><li>Do you know what the AI is doing?</li><li>Can you explain it to customers?</li><li>Would customers respond happily when you tell them?</li></ul><h2>Governments Propose Guidelines</h2><p>Building ethical, trustworthy AI is at the core of several plans, guidelines, and research initiatives sponsored by governments and nongovernmental organizations. In April, the European Commission issued <a href="" target="_blank">Ethics Guidelines for Trustworthy Artificial Intelligence</a> based on the idea that AI should be lawful, ethical, and robust. The OECD followed that in May by releasing <a href="" target="_blank">principles for responsible stewardship of trustworthy AI</a>.</p><p>The European Commission guidelines set out seven requirements for trustworthy AI: </p><ul><li>AI should empower people and have appropriate oversight. 
</li><li>AI systems should be resilient and secure.</li><li>AI should protect privacy and data and be under adequate data governance.</li><li>Data, system, and AI business models should be transparent.</li><li>AI should avoid unfair bias.</li><li>AI should be sustainable and environmentally friendly.</li><li>Mechanisms should be in place — including auditability — to ensure responsibility and accountability over AI.</li></ul><p>The European Commission recently launched a <a href="" target="_blank">pilot test of its guidelines</a>. It includes an online survey — open until Dec. 1 — and interviews with select public- and private-sector organizations. </p><p>Another aspect of the pilot phase is recommendations for EU and national policy-makers from the European Commission's High-Level Expert Group on Artificial Intelligence. AI that respects privacy, provides transparency, and prevents discrimination "can become a real competitive advantage for European businesses and society as a whole," says Mariya Gabriel, European Commissioner for Digital Economy and Society.</p><p>Additionally, France, Germany, and Japan have raised $8.2 million to fund research into human-centered AI, <a href=""> <em>Inside Higher Ed</em> reports</a>. The research would focus on the democratization of AI, the integrity of data, and AI ethics. </p><p>Meanwhile in the U.S., the National Institute of Standards and Technology (NIST) has released a plan aimed at developing AI-related technical standards and tools. Such standards are needed to promote innovation as well as public trust in AI technologies, NIST says. </p><p>To those ends, <a href="" target="_blank">U.S. Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools</a> (PDF) recommends bolstering AI standards-related knowledge and coordination among federal government agencies. 
It also calls for promoting research into how trustworthiness can be incorporated into AI standards and tools. Moreover, it advocates using public-private partnerships to develop standards and tools, and working with international parties to advance them.</p><h2>Trust With a Capital "T"</h2><p>The varying strands of AI ethics development are in the early stages, though. Meanwhile, the technology is advancing well ahead of any standards. In his speech in London, OECD's Gurría said AI can benefit society if people have the tools and the tools can be trusted. "Artificial intelligence can help us if we apply it well," he said. </p>Tim McCollum
Transforming Assurance<p>The IIA's Core Principles for the Professional Practice of Internal Auditing use the term <em>risk-based assurance</em> instead of <em>reasonable assurance</em>, which implies that there are different levels of assurance based on multiple risk factors. That creates an opportunity for internal audit to move its work to a higher level by delivering enhanced assurance to the board and management. </p><p>Enhanced assurance does not imply reductions in risk. Instead, it refers to asking better questions about the risks that matter as well as the risks that should be automated for greater efficiency. It's about developing assurance at scale to cover the breadth of operations and strategic initiatives efficiently and cost-effectively.</p><p>Computerized fraud detection is one example of delivering assurance at scale. In 2002, WorldCom internal auditor Gene Morse discovered a $500 million debit in a property, plant, and equipment account by searching a custom data warehouse he had developed. Morse's mining of the company's financial reporting system ultimately uncovered a $1.7 billion capitalized line cost entry made in 2001, according to the <em>Journal of Accountancy</em>. </p><p>This example illustrates how fraud or intentional errors can occur in limited transactions with catastrophic outcomes. Enhanced assurance techniques such as data mining can uncover these transactions, which traditional audit techniques such as discovery, stratification, and random sampling may miss. Today's technologies can enable internal audit functions to automate their operations and provide enhanced assurance, but to do so, they must reframe their strategy. </p><h2>Better Teams</h2><p>Data analytics and audit automation platforms provide internal auditors, whether novices or experts, with the means to build assurance at scale. The technologies also create the opportunity to form better teams. 
</p><p>Small, focused teams are more productive than large, consensus-driven teams directed from the top down, author Jacob Morgan notes. Writing in <em>Forbes</em>, Morgan cites Amazon CEO Jeff Bezos' "two-pizza" rule: "If a team cannot be fed by two pizzas, then that team is too large." Morgan says having more people on the team increases the communication needed and bureaucracy, which can slow the team down.</p><p>Collaboration with automation can modernize the performance of small teams. Intelligent automation can integrate oversight into operations, reduce human error, improve internal controls, and create situational awareness where risks need to be managed. Automation-enabled collaboration can help reduce redundancies in demands on IT departments, as well. However, efficiency transformations often fail when projects underestimate the impact of change on people. </p><h2>The Human Element</h2><p>Many of the biggest assurance risks are related to people, but auditing human behavior is too often the weakest link. The 2018 IBM X-Force Threat Intelligence Index finds "a historic 424% jump in breaches related to misconfigured cloud infrastructure, largely due to human error." IBM's report assumes decisions, big or small, contribute to risks. However, the vulnerabilities at the intersection of human behavior and technology represent a growing body of risks to be addressed. </p><p>Separate studies from IBM, the International Risk Management Institute, and the U.S. Department of Defense find that human error is a key contributor to operational risk across industry type and represents friction in organizational performance. The good news is automation creates an opportunity to reduce human error and to improve insights into operational performance. Chief audit executives (CAEs) can collaborate with the compliance, finance, operations, and risk management functions to develop automation that supports each of these key assurance providers and stakeholders. 
</p><h2>The Role of Technology</h2><p>Technology enables enhanced assurance by leveraging analytics to ask and answer complex questions about risk. Analytics is the key to finding new insights hidden within troves of unexplored data in enterprise resource planning systems, confidential databases, and operations. </p><p>Ideally, the end goal is technology solutions that improve situational awareness in audit assurance. Situational awareness in auditing is not a one-size-fits-all approach. In some organizations, situational awareness involves improved data analysis; in others, it may include a range of continuous monitoring and reporting in near real-time. </p><p>Intelligent automation addresses issues with audit efficiency and quality. First, auditors spend, on average, half their time on routine processes that could be automated, improving data consistency and reducing error rates. Data governance allows other oversight groups to leverage internal audit's work, reducing redundancy of effort. </p><p>Second, smart automation leads to business intelligence. As more key processes are automated, they provide insights into changing conditions that may have been overlooked using periodic sampling techniques at points in time. </p><p>Most events are high frequency but low impact, yet auditors, IT staff, and risk and compliance professionals spend the bulk of their time chasing down these events. That leaves little time for them to focus on the real threats to the organization. Automation works best at solving high-frequency events that are routine and add little value in terms of new information on known risks. Instead of focusing on the shape of risk, auditors will be able to drill down into the data to understand specific causes of risk.</p><h2>Steps to Enhanced Assurance</h2><p>Before buying automation, CAEs should answer three questions: How will automation improve audit assurance? How will automation make processes more efficient? 
How will auditors use it to improve audit judgment?</p><p>The CAE should consider automation an opportunity to raise awareness with the board and senior executives about enhanced assurance and better risk governance. To do so, internal audit must align enhanced assurance with the strategic objectives of senior executives. </p><p>To implement enhanced assurance in the internal audit function, CAEs should follow three steps:</p><ul><li>Identify the greatest opportunities to automate routine audit processes.</li><li>Prioritize automation projects during each budget cycle in coordination with the operations, risk management, IT, and compliance functions. </li><li>Consider the questions most important to senior executives: Which risks pose the greatest threat to the organization's goals? How well do we understand risk uncertainties across the organization? Do existing controls address the risks that really matter?</li></ul><h2>Assurance and Transformation</h2><p>The World Economic Forum calls today's digital transformation the Fourth Industrial Revolution and forecasts that it could generate $100 trillion for business and society by 2025. Every business revolution has been disruptive, and this one will be no exception. The difference in outcomes will depend largely on how well organizations respond to change.</p><p>Forward-looking internal audit departments already are delivering enhanced assurance by strategically focusing on the roles people, technology, and automation play in creating higher confidence in assurance. Other audit functions are in the early stage of transformation. Although these audit functions will make mistakes along the way, now is the time for them to build new data analysis and data mining skills, and to learn the strengths and weaknesses of automation. As these tools become more powerful and easy to use, enhanced assurance will set a new high bar in risk governance. </p>James Bone
Stronger Assurance Through Machine Learning<p>By now, most internal audit functions have likely implemented rule-based analytics capabilities to evaluate controls or identify data irregularities. While these tools have served the profession well, providing useful insights and enhanced stakeholder assurance, emerging technologies can deliver even greater value and increase audit effectiveness. With the proliferation of digitization and the wealth of data generated by modern business processes, now is an opportune time to extend beyond our well-worn approaches.</p><p>In particular, machine learning (ML) algorithms represent a natural evolution beyond rule-based analysis. Internal audit functions that add ML to their existing toolkit can expect to develop new capabilities to predict potential outcomes, identify patterns within data, and generate insights difficult to achieve through rudimentary data analysis. Those looking to get started should first understand common ML concepts, how ML can be applied to audit work, and the challenges likely to arise along the way. </p><h2>What Is Machine Learning?</h2><p>ML is a branch of artificial intelligence (AI) featuring algorithms that learn from past patterns and examples to perform a specific task. How does an ML algorithm "learn," and how is this different from rule-based systems? Rule-based systems generate an outcome by evaluating specific conditions — for example, "If it is raining, carry an umbrella." These systems can be automated — such as through the use of robotic process automation — but they are still considered "dumb" and incapable of processing inputs unless provided explicit instructions.</p><p>By contrast, an ML model generates probable outcomes for "Should I carry an umbrella?" by taking into account inputs such as temperature, humidity, and wind and combining these with data on prior outcomes from when it rained and when it did not. 
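The umbrella contrast can be sketched in a few lines of illustrative Python. All observations below are invented: the rule-based check follows one explicit condition, while a minimal nearest-neighbor "model" predicts from past observations instead — a toy stand-in for what a real ML library would do.

```python
# Illustrative contrast between a rule-based check and a simple learned
# model for the "Should I carry an umbrella?" example. All observations
# are invented; a real ML model would use a proper library and more data.

def rule_based(is_raining):
    # Rule-based system: one explicit condition, no learning.
    return is_raining

# Hypothetical past observations: (humidity %, wind km/h) -> did it rain?
history = [
    ((90, 5), True), ((85, 10), True), ((80, 20), True),
    ((40, 15), False), ((30, 25), False), ((20, 10), False),
]

def learned(humidity, wind):
    # Minimal 1-nearest-neighbor "model": return the outcome of the
    # most similar past observation instead of following a fixed rule.
    def distance(observation):
        (h, w), _ = observation
        return (h - humidity) ** 2 + (w - wind) ** 2
    _, rained = min(history, key=distance)
    return rained

print(rule_based(True))   # True
print(learned(88, 7))     # True  (close to past rainy observations)
print(learned(25, 18))    # False (close to past dry observations)
```

The learned function never received a rule about rain; it inferred the outcome from past examples, which is the essence of the distinction drawn above.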
Machine learning can even consider the user's schedule for the day to determine if he or she will likely be outdoors when rain is predicted. With ML models, the best predictor of future behavior is past behavior. Such systems can generate useful real-world insights and predictions by inferring from past examples. </p><p>As an analogy, most people who have built objects using a Lego set, such as a car, follow a series of rules — a step-by-step instruction manual included with the construction toys. After building the same Lego car many times, even without written instructions, an individual would acquire a reasonable sense of how to build a similar car given the Lego parts. Likewise, an ML algorithm with sufficient training — prior practice assembling the Lego car — can provide useful outcomes (build the same car) and identify patterns (relationships between the Lego parts) given an unknown set of inputs (previously unseen Lego parts) even without instructions. </p><h2>Common Concepts</h2><p>The outcomes and accuracy of ML algorithms are highly dependent on the inputs provided to them. A conceptual grasp of ML processes hinges on understanding these inputs and how they impact algorithm effectiveness.</p><p> <strong>Feature</strong> Put simply, a feature is an input to a model. In an Excel table populated with data, one data column represents a single feature. The number of features, also referred to as the dimensionality of the data, varies depending on the problem and can range into the hundreds. If a model is developed to predict the weather, data such as temperature, pressure, humidity, types of clouds, and wind conditions comprise the model's features. ML algorithms are well-suited to such multidimensional analysis of data.</p><p> <strong>Feature Engineering</strong> In a rule-based system, an expert will create rules to determine the outcome. In an ML model, an expert selects the specific features from which the model will learn. 
This selection process is known as feature engineering, and it represents an important step toward increasing the algorithm's precision and efficiency. The expert also can refine the selection of inputs by comparing the outcomes of different input combinations. Effective feature engineering should reduce the number of features within the training data to just those that are important. This process will allow the model to generalize better, with fewer assumptions and reduced bias.</p><p> <strong>Label</strong> An ML model can be trained using past outcomes from historical data. These outcomes are identified as labels. For instance, in a weather prediction model, one of the labels for a historical input date might be "rained with high humidity." The ML model will then know that it rained in the past, based on the various temperature, pressure, humidity, cloud, and wind conditions on a particular day, and it will use this as a data point to help predict the future.</p><p> <strong>Ensemble Learning</strong> One common way to improve model accuracy is to incorporate the results of multiple algorithms. This "ensemble model" combines the predicted outcomes from the selected algorithms and calculates the final outcome using the relative weight assigned to each one.</p><p> <strong>Learning Categories</strong> The way in which an ML algorithm learns can generally be separated into two broad categories — supervised and unsupervised. Which type might work best depends on the problem at hand and the availability of labels. </p><ul><li>A <em>supervised learning</em> algorithm learns by analyzing defined features and labels in what is commonly called the training dataset. By analyzing the training dataset, the model learns the relationship between the defined features and past outcomes (labels). The resulting supervised learning model can then be applied to new datasets to obtain predicted results. 
To assess its precision, the algorithm will be used to predict the outcomes from a testing dataset that is distinct from the training dataset. Based on the results of this training and testing regime, the model can be fine-tuned through feature engineering until it achieves an acceptable level of accuracy. <br><br></li><li>Unlike supervised learning, <em>unsupervised learning</em> algorithms do not have past outcomes from which to learn. Instead, an unsupervised learning algorithm tries to group inputs according to the similarities, patterns, and differences in their features without the assistance of labels. Unsupervised learning can be useful when labeled data is expensive or unavailable; it is effective at identifying patterns and outliers in multidimensional data that, to a person, may not be obvious. </li></ul><h2>Stronger Assurance</h2><p> <img src="/2019/PublishingImages/Lee-overview-of-ML-payment-analytics.jpg" class="ms-rtePosition-2" alt="" style="margin:5px;width:600px;height:305px;" />An ML model's capacity to provide stronger assurance, compared to rule-based analysis, can be illustrated using an example of the technology's ability to identify anomalies in payment transactions. "Overview of ML Payment Analytics" (right) shows the phases of this process.</p><p>Developing an ML model to analyze payment transactions will first require access to diverse data sources, such as historical payment transactions for the last three years, details of external risk events (e.g., fraudulent payments), human resource (HR) data (e.g., terminations and staff movements), and details of payment counterparties. Before feature engineering work can start, the data needs to be combined and then reviewed to verify it is free of errors — commonly called the extract, transform, and load phase. 
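The two learning categories above can be contrasted in a short toy sketch. All payment amounts, labels, and the grouping threshold are invented for illustration: supervised prediction leans on labeled history, while unsupervised grouping relies only on similarity among the inputs.

```python
# Toy contrast of supervised vs. unsupervised learning. All amounts,
# labels, and the `gap` threshold are invented for illustration only.

# Supervised: labeled history (payment amount -> outcome label).
labeled_history = [(100, "ok"), (120, "ok"), (95, "ok"), (9000, "fraud")]

def supervised_predict(amount):
    # Predict the label of the nearest labeled training example.
    return min(labeled_history, key=lambda pair: abs(pair[0] - amount))[1]

def unsupervised_groups(amounts, gap=1000):
    # Unsupervised: no labels at all; group sorted values, starting a
    # new cluster whenever the jump to the next value exceeds `gap`.
    ordered = sorted(amounts)
    groups, current = [], [ordered[0]]
    for amount in ordered[1:]:
        if amount - current[-1] > gap:
            groups.append(current)
            current = [amount]
        else:
            current.append(amount)
    groups.append(current)
    return groups

print(supervised_predict(110))                    # ok
print(unsupervised_groups([100, 120, 95, 9000]))  # [[95, 100, 120], [9000]]
```

Note how the unsupervised grouping isolates the 9000 payment purely from its dissimilarity — no one told the algorithm that large payments are suspicious.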
During this phase, data is extracted from various source systems, converted (transformed) into a format that can be analyzed, and stored (loaded) in a data warehouse.</p><p>Next, the user performs feature engineering to shortlist the critical features — such as payment date, counterparty, and amount — the model will analyze. To refine the results, specific risk weights, ranging from 0 to 1, are assigned to each feature based on its relative importance. From experience, a real-world payment analytics model may use more than 150 features. The ability to perform such multidimensional analysis of features represents a key reason to use ML algorithms instead of simple rule-based systems.</p><p>To begin the analysis, internal auditors could apply an unsupervised learning algorithm that identifies payment patterns to specific counterparties, potentially fraudulent transactions, or payments with unusual attributes that warrant attention. The algorithm performs its analysis by identifying the combination of features that fit most payments and producing an anomaly score for each payment, depending on how its features differ from all others. It then derives a risk score for each payment from the risk weight and the anomaly score. This risk score indicates the probability of an irregular payment. </p><p>"Payment Outliers" (below right) illustrates a simple model using only three features, with two transactions identified as outliers. The unsupervised learning model generates a set of potential payment exceptions. These exceptions are followed up to determine if they are true or false. The results can then be used as labels to incorporate supervised learning into the ML model, enabling identification of improper payments with a significantly higher degree of precision. </p><p>Supervised learning models can also be used to predict the likelihood of specific outcomes. 
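The weighted scoring idea described above can be sketched as follows, under assumed data and weights: each feature carries a risk weight in [0, 1], each payment's per-feature anomaly is its deviation from the typical value, and the risk score combines the two. The payments, weights, and scoring formula are all invented; a real model would use many more features and a proper anomaly detection algorithm.

```python
# Hedged sketch of feature risk weights combined with anomaly scores.
# All data, weights, and the scoring formula are illustrative only.

from statistics import mean, pstdev

# Invented training data: (amount, hour_of_day) for five payments.
payments = [(100, 10), (110, 11), (95, 9), (105, 10), (5000, 3)]
# Assumed per-feature risk weights in [0, 1]: amount, hour_of_day.
weights = [0.8, 0.4]

def risk_score(payment):
    # Weighted sum of each feature's deviation from the column mean,
    # scaled by the column's standard deviation (a toy anomaly score).
    score = 0.0
    for value, column, w in zip(payment, zip(*payments), weights):
        mu, sigma = mean(column), pstdev(column) or 1.0
        score += w * abs(value - mu) / sigma
    return score

scores = {p: risk_score(p) for p in payments}
print(max(scores, key=scores.get))  # the (5000, 3) payment scores highest
```

The large off-hours payment dominates the ranking because it deviates on both weighted features, mirroring how a risk score flags the payments most likely to be irregular.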
By training an algorithm using labels on historical payment errors, the model can help identify potential errors before they occur. For example, based on past events, a model may learn that the frequency of erroneous payments is highly correlated with specific features, such as high frequency of payment, specific time of day, or staff attrition rates. A supervised learning model trained with these labels can be applied to future payments to provide an early warning for potential payment errors.</p><p>This anomaly detection model can be applied to datasets with clear groups, provided the data does not contain a significant number of transactions that differ greatly from the rest. The model can be extended to detect irregularities in almost any area, including expenses, procurement, and access granted to employees. </p><h2>Deeper Insights</h2><p> <img src="/2019/PublishingImages/Lee-payment-outliers.jpg" class="ms-rtePosition-2" alt="" style="margin:5px;width:540px;height:511px;" />Continuing with the payment example, an ML model developed to analyze payment transactions can be used to uncover hidden patterns or unknown insights. Examples include: </p><ul><li>Identify overpayment for services by comparing the mean and typical variance in payment amounts for each product type — such as air tickets or IT services — and highlighting all payments that deviate significantly from the mean.<br><br> </li><li>Identify previously unknown emerging needs — such as different departments paying for a new service at significantly different prices — or client types by highlighting payment outliers. This insight could allow executives to optimize the cost for acquired products and services. <br><br></li><li>Identify multiple consecutive payments to a single counterparty below a specific threshold. This analysis would help identify suspicious payments that have been split into smaller ones to potentially escape detection. 
<br><br></li><li>Identify potential favoritism shown to specific vendors by pinpointing significant groups of payments made to these vendors or related entities. </li></ul><h2>Key Challenges</h2><p>Internal auditors are likely to encounter numerous challenges when applying ML technology. Input quality, biases and poor performance, and lack of experience with the technology are among the most common.</p><p> <strong>Availability of Clean, Labeled Data</strong> For any ML algorithm to provide meaningful results, a significant amount of high-quality data must be available for analysis. For instance, developing an effective payment anomaly detection model requires at least a year of transactional, HR, and counterparty information. Data cleansing, which involves correcting and removing erroneous or inaccurate input data, is often required before the algorithm can be trained effectively. Experience shows that data exploration and data preparation often consume the greatest amount of time in ML projects. Training data that is not representative of the actual environment will bias the model's output. Also, without good labels — such as labels on actual cyber intrusions — and feature engineering, a supervised learning model will be biased toward certain outcomes and may generate noisy, or meaningless, results.</p><p> <strong>Poor Model Performance and Biases</strong> Most internal audit functions that embark on ML projects will initially receive disappointing or inaccurate results from at least some of their models. Potential sources of failure may include trained models that do not generalize well, poor feature engineering, use of algorithms that are ill-suited to the underlying data, or scarcity of good-quality data. </p><p>Overfitting is another potential cause of poor model performance — and one that data scientists encounter often. An ML model that overfits generates outcomes that are biased toward the training dataset. 
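Overfitting of this kind is easy to demonstrate with a toy sketch. The "model" below simply memorizes its training data — an extreme overfit — so its perfect training accuracy collapses on unseen data. All data points and labels are invented for illustration.

```python
# Minimal illustration of catching overfitting with a held-out test set.
# The "model" memorizes its training points (an extreme overfit); its
# perfect training accuracy collapses on unseen data. Data is invented.

train = [(1, "ok"), (2, "ok"), (3, "bad"), (4, "bad")]
test = [(1.1, "ok"), (3.9, "bad"), (2.2, "ok"), (3.1, "bad")]

memory = dict(train)

def predict(x):
    # Memorizing model: exact training matches only, fixed guess otherwise.
    return memory.get(x, "ok")

def accuracy(dataset):
    return sum(predict(x) == label for x, label in dataset) / len(dataset)

print(accuracy(train))  # 1.0 - perfect on the training data
print(accuracy(test))   # 0.5 - much worse on the held-out test data
```

The gap between training and testing accuracy is the telltale sign of overfitting that the validation practice described here is designed to expose.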
To reduce such biases, internal audit functions use testing data independent of the training dataset to validate the model's accuracy. </p><p>Auditors should also be cognizant of each algorithm's inherent limitations. For example, unsupervised learning algorithms may produce noisy results if the data elements are unrelated and have few or no common characteristics (i.e., no natural groups). Some algorithms work well with inputs that are relatively independent of one another but would be poor predictors otherwise.</p><p> <strong>Lack of Experience</strong> Organizations new to ML may not have examples of successful ML projects to learn from. Inexperienced practitioners can acquire confidence in their fledgling capabilities by first applying simple ML models to achieve better outcomes from existing solutions. After these initial successes, algorithms to improve the outcomes of these models can be progressively implemented in stages. For instance, an ensemble learning approach can be used to improve on the first model. If successful, more advanced ML methods should then be considered. This progressive approach can also alleviate the initial skepticism often present in the adoption of new technology.</p><h2>The Future of Audit</h2><p>Machine learning technology holds great promise for internal audit practitioners. Its adoption enables audit functions to provide continuous assurance by enhancing their automated detection capabilities and achieving 100% coverage of risk areas — a potential game changer for the audit profession. The internal audit function of the future is likely to be a data-driven enterprise that augments its capabilities through automation and machine intelligence. <br></p>Ying-Choong Lee
