
Auditing Artificial Intelligence

Internal auditors can develop a framework for conducting AI engagements, despite a lack of standards and guidance.


The development of artificial intelligence (AI) has become a national strategic imperative for countries as diverse as Canada, China, France, Russia, the U.K., and the U.S. Likewise, the global AI software market is expected to expand from approximately $10 billion in 2019 to $125 billion by 2025, according to technology research firm Omdia|Tractica. As AI grows in importance and use, internal audit’s role must evolve in lockstep with the technology to address a variety of new challenges.

One of the biggest challenges is that standards development is in the early stages, leaving internal auditors with little guidance on how to audit the output of AI decisions. Organizations such as the American National Standards Institute, U.S. National Institute of Standards and Technology, and the U.K. Information Commissioner’s Office have begun developing AI frameworks and standards. However, this process will take time as different standard-setters focus on specific areas such as data privacy, ethical use, and the technical design of AI systems. Because internal audit functions will need to develop their own frameworks for auditing AI systems, they should prepare by getting up to speed on the various standards development initiatives.

Overseeing New Risks

AI has become a buzzword used to describe even simple automation, but the differences between the two technologies are important to understand. Each level of automation plays a valuable role but has different uses. The lowest level, robotic process automation (RPA), follows strict rules and, when programmed correctly, executes repetitive processes such as accounting workflows, data collection, and information transfers.
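To make the distinction concrete, here is a minimal sketch of what "strict rules" means in practice. It is a hypothetical rule-based routine, not actual RPA vendor code; the vendor list, thresholds, and routing labels are all invented for illustration:

```python
# Hypothetical rule-based automation: every outcome follows from fixed,
# auditable rules -- no model, no training data, no inference.

APPROVED_VENDORS = {"Acme Corp", "Globex"}  # invented vendor whitelist

def route_invoice(vendor: str, amount: float) -> str:
    """Route an invoice using strict, predefined rules (illustrative only)."""
    if amount <= 0:
        return "reject"               # rule 1: invalid amount
    if vendor not in APPROVED_VENDORS:
        return "hold_for_review"      # rule 2: unknown vendor
    if amount > 10_000:
        return "manager_approval"     # rule 3: high-value threshold
    return "auto_pay"                 # rule 4: default path

print(route_invoice("Acme Corp", 2_500))   # auto_pay
print(route_invoice("Unknown LLC", 500))   # hold_for_review
```

Because every path is explicit, an auditor can trace any output back to a rule. An AI system, by contrast, derives its behavior from data, which is what creates the new assurance challenges discussed below.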

AI has come into the picture as organizations have begun to combine one or more levels of intelligent automation to achieve higher performance and operational efficiency. AI may become the most disruptive technological development to date, creating new opportunities and risks in every aspect of business and life. 

A U.S. Department of Homeland Security white paper, AI: Using Standards to Mitigate Risks, discusses the key risks to national security and intellectual property in using AI systems. In addition, some accounting professionals have advocated for formally addressing AI risks and ethical standards in audits, including algorithmic bias, data management, and privacy issues, according to the Pennsylvania CPA Journal.

AI will require new oversight models such as human–machine collaboration in decision-making processes typically reserved for management. These models will need clear ground rules of engagement between audit and management. At the same time, AI gives internal audit opportunities to provide assurance and to advise business leaders on the safe implementation and use of AI systems.

Engagement Planning Considerations 

There should not be a one-size-fits-all audit framework for AI, nor should any standard remain static given the rapid pace of change in technology. However, there are common elements internal audit should consider when preparing for an AI engagement. An AI audit framework can capture key areas of focus, including:

  • AI ethics and governance models.
  • Formal standards and procedures for conducting AI engagements.
  • Data and model management, governance, and privacy.
  • Human–machine integration, interactions, decision support, and outcomes.
  • Third-party AI vendor management.
  • Cybersecurity vulnerability, risk management, and business continuity.


Until professional audit standards and procedures are developed, each of these elements of the AI “ecosystem” should be considered in the risk assessment phase of audit planning. In establishing a risk-based audit plan, internal audit may need to reexamine risks across the organization as a result of implementing AI. 

AI represents a novel risk, meaning internal auditors may not have historical observations or risk data from which to draw inferences on the scope of risk. Auditors should consider risk assessments of each of the six elements in the context of how AI will be applied. 
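As a purely illustrative sketch of that risk assessment, the worksheet below scores each of the six framework elements on likelihood and impact. The element names come from the list above; the scores and review threshold are hypothetical placeholders an audit team would replace with its own judgments:

```python
# Illustrative risk-scoring worksheet for the six AI "ecosystem" elements.
# Likelihood and impact scores (1 = low, 5 = high) and the review threshold
# are hypothetical; a real assessment would be richer and evidence-based.

assessments = [
    # (element, likelihood, impact)
    ("AI ethics and governance models", 3, 5),
    ("Standards and procedures for AI engagements", 4, 3),
    ("Data and model management, governance, and privacy", 4, 5),
    ("Human-machine integration and decision support", 5, 5),
    ("Third-party AI vendor management", 3, 4),
    ("Cybersecurity, risk management, and business continuity", 4, 4),
]

REVIEW_THRESHOLD = 15  # flag anything scoring 15 or higher for deeper work

for element, likelihood, impact in sorted(
        assessments, key=lambda a: a[1] * a[2], reverse=True):
    score = likelihood * impact  # simple likelihood x impact scoring
    flag = "  <-- prioritize" if score >= REVIEW_THRESHOLD else ""
    print(f"{score:>3}  {element}{flag}")
```

Even a crude worksheet like this makes scoping concrete: high-scoring elements, such as human–machine integration in this invented example, would receive proportionally more audit attention.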

AI has already been applied in diverse industries, each with its own inherent risks. The integration of human–machine interactions, where AI decisions are relied on or used in conjunction with human actors, represents dynamic risks requiring higher levels of attention. The Boeing 737 MAX airplane, involved in high-profile disasters such as the 2018 Lion Air crash in Indonesia and the 2019 Ethiopian Airlines crash, is one example of how overreliance on AI and an underestimation of its risks can result in catastrophic failure. This can happen even in industries with more experience with the technology, such as aerospace, where AI has been used to assist pilot performance.

Internal audit should work with senior executives and the board to establish ethical standards and governance models for using AI. The technology will require clarity on data privacy, data governance, vendor management, human resources, compliance, cybersecurity, and risk management functions and policy. 

The organization may need cross-functional teams of oversight and business leaders to establish new operating models from which audit assurance can be formally established for each impacted area. Organizations will need a process for analyzing and choosing among the options presented by AI. For example, if a hospital uses AI for patient diagnosis, doctors and nurses will need to interpret its analysis and determine its accuracy for patient care. Subjective risk assessments will be insufficient in an AI world, where risks to reputation may grow exponentially and threaten the organization’s survival.
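As one sketch of what such a process might look like, consider a hypothetical human-in-the-loop control in which AI suggestions below a confidence threshold are escalated to a clinician rather than acted on directly, with every decision logged for later audit. The threshold, names, and data here are all invented:

```python
# Hypothetical human-in-the-loop gate for AI decision support. Low-confidence
# outputs are escalated to a human instead of being auto-accepted, and every
# decision is logged so auditors can later test that the control operated.

from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.90  # invented cutoff; would be set by governance

@dataclass
class AISuggestion:
    patient_id: str
    diagnosis: str
    confidence: float  # model's self-reported confidence, 0.0-1.0

audit_log = []  # audit trail of every AI-assisted decision

def triage(suggestion: AISuggestion) -> str:
    """Accept high-confidence suggestions for sign-off; escalate the rest."""
    if suggestion.confidence >= CONFIDENCE_THRESHOLD:
        action = "accept_with_clinician_signoff"
    else:
        action = "escalate_to_clinician"
    audit_log.append({
        "patient": suggestion.patient_id,
        "diagnosis": suggestion.diagnosis,
        "confidence": suggestion.confidence,
        "action": action,
    })
    return action

print(triage(AISuggestion("P-001", "pneumonia", 0.97)))  # accept_with_clinician_signoff
print(triage(AISuggestion("P-002", "pneumonia", 0.62)))  # escalate_to_clinician
```

The point for internal audit is the log: a control like this only provides assurance if the escalation rule and its audit trail can be independently tested.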

Getting Started

The AI ecosystem framework is an informal checklist for thinking about audit’s role and structure in supporting AI projects or planning an engagement involving intelligent systems. No two AI projects are exactly alike, and all organizations are learning as they go, so internal auditors shouldn’t worry about their lack of experience at this point.

AI is not a traditional engagement, but many existing IT standards cover the basics. Several countries are in the early stages of developing guidance that leverages existing standards and builds new standards to provide consistency in assurance. 

However, factors related to corporate culture and intended uses of AI will require engagement at the enterprise level to build a sustainable audit practice. Yet even if more formal guidance and standards take time to develop, this is an exciting time for internal audit to play a leadership role in providing assurance for AI.

James Bone

About the Author

 

 

James Bone is lecturer in discipline enterprise risk management at Columbia University in New York and president of Global Compliance Associates in Lincoln, R.I.

 
