Trust in Technology

Internal auditors can provide assurance that sophisticated data tools are living up to ethical standards and meeting legal requirements.



Cutting-edge technologies in artificial intelligence (AI) and machine learning are transforming the way businesses operate and opening up new commercial opportunities for organizations to leverage data. But such progress comes with risks: The technology is not infallible, and companies that are becoming increasingly reliant on it rarely question how the process works, whether it is ethical or trustworthy, or what harm it could cause.

Countless examples show that machine-learning systems can generate prejudicial output — from gender-recognition cameras that work reliably only on white men to algorithms that display ads for lower-paying jobs to women. These problems occur because the data that trains AI programs often reflects the biases of its human compilers, while machine-learning systems are molded entirely by their imperfect learning environment. As such, if the input data is skewed and one-dimensional, and the environment from which the data is sampled is similarly restricted, the output will predictably reproduce that skew.

For example, if an online executive recruitment AI system is trained on the resumes of Fortune 500 or FTSE 100 companies, the technology will assume it should be targeting white, middle-aged men to fill CEO and board-level roles. Without appropriate checks and balances, experts say, AI systems will just perpetuate the bias that exists in the real world.
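
As a minimal, hypothetical sketch of this mechanism — written in Python with the scikit-learn library; the data, features, and model are invented for illustration and are not drawn from any system described here — a model fitted to historical hiring decisions that were themselves biased will score two otherwise identical candidates differently:

    # Hypothetical sketch: a model trained on biased historical hiring
    # data learns to reproduce that bias. All figures are invented.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000

    # Synthetic "past executive hires": gender (1 = male) and years of experience.
    gender = rng.integers(0, 2, n)
    experience = rng.normal(15, 5, n)

    # The historical outcome reflects human bias: gender itself influenced
    # who was hired, independent of qualifications.
    hired = (0.3 * experience + 2.0 * gender + rng.normal(0, 2, n)) > 6

    X = np.column_stack([gender, experience])
    model = LogisticRegression().fit(X, hired)

    # Two candidates identical in every respect except gender.
    male_candidate = [[1, 15]]
    female_candidate = [[0, 15]]
    print("P(hire | male):  ", model.predict_proba(male_candidate)[0, 1])
    print("P(hire | female):", model.predict_proba(female_candidate)[0, 1])
    # The model ranks the male candidate higher: the bias in the training
    # data is now encoded in an apparently "objective" algorithm.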

“The central problem is that neural networks operate by seeking patterns in data rather than following clear rules of logical inference,” says James Loft, chief operating officer of intelligent automation firm Rainbird in London. “This means they can easily draw irrational conclusions from data, and it can be difficult for humans to understand the causes of their biases.”

And bias is far from the only risk. Data from sophisticated technology also can be manipulated to mislead or deceive, resulting in fraud or other harm to the organization. The output also may run afoul of legal provisions as well as organizational policy. Internal auditors can help keep a watchful eye on the use of these cutting-edge tools, ensuring consistency with ethical requirements and awareness of organizational risks.

Multiple Points of Exposure

Experts say biases can easily be introduced into AI technologies because — at their most basic level — they operate relatively simply: Programs process data that is fed into them, following a predefined algorithm, and then generate outputs. “There is scope for manipulation in the design and operation of all three of these stages,” says Paul Herring, global chief innovation officer at professional services firm RSM Global in London.

For example, he says, it is possible to select the input data in a way that is intended to deliberately skew results. If a financial services firm wanted to attract investors to put money into a Ponzi scheme, for instance, it could generate a misleading report by selecting a sample of existing customers that only includes those who had made enormous returns. Unsurprisingly, the report would show amazing results.

Furthermore, the algorithm or functions applied to the data could be defined in a way to generate skewed results. Continuing with the Ponzi scheme example, even if the inputs included all investors — both winners and losers — the program or algorithm could be defined to ignore the losers or inflate the performance of investors. And even if these first two steps are unbiased and appropriately configured, the report can still be manipulated to highlight certain findings or suppress others.
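
A brief, hypothetical sketch in Python (with invented return figures) shows how both manipulation points work: the same underlying returns produce very different "reports" depending on which customers are sampled and how losses are treated.

    # Hypothetical sketch: how input selection and algorithm design can each
    # skew a performance report. All figures are invented for illustration.
    import statistics

    # Full population of investor returns, winners and losers alike (percent).
    returns = [42.0, 35.0, 28.0, -12.0, -30.0, -45.0, 3.0, -8.0, 51.0, -22.0]

    # 1. Honest report: all investors, plain average.
    print("All investors: ", statistics.mean(returns))        # ~4.2%

    # 2. Skewed inputs: only customers with large gains are sampled.
    winners_only = [r for r in returns if r > 20]
    print("Winners only:  ", statistics.mean(winners_only))   # 39.0%

    # 3. Skewed algorithm: all investors included, but losses are ignored
    #    (floored at zero) before averaging.
    floored = [max(r, 0) for r in returns]
    print("Losses ignored:", statistics.mean(floored))        # ~15.9%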

To protect themselves from these risks, Herring says, companies — and internal auditors — need to ask questions about how the technology works in practice, and what safeguards it either has built into it, or needs to establish. “It is important to gain an understanding of the methods used by the program to execute the capture and processing of data as well as reporting results,” Herring says. He adds that auditors should inquire “about any built-in biases in each stage.”

Speed and Overreliance

Several experts point out that data has always had biases in the way it is used. The problem is that “AI has the potential to produce and replicate these biases more quickly in its decision-making processes,” says Nathan Colaner, senior instructor and director of business analytics at Seattle University.

“The job of machine-learning technologies is to predict outcomes from the data they are fed, but any ‘prediction’ is a judgment made in advance, or pre-judging,” Colaner says. “As a result, no one should be surprised that the decisions they make could be prejudiced.”

One of the main concerns Colaner has about AI adoption is that organizations become overreliant on the technology and algorithms. “Organizations tend to get swept up with the possibilities that technology allows them to embrace,” he says. “However, while algorithms are an important tool, they should not be used as a crutch — the information they produce is just one source of information among several sources available to the business. Just because the information is produced quickly by a machine does not mean that it is complete and trustworthy.”

Consequently, internal auditors should ask what safeguards are in place to interrogate the integrity of the data used by the algorithm, and what measures exist to question the outcomes it produces, Colaner says. They also should ask what the parameters of the algorithm are meant to be, and whether the machine learning is delivering the agreed objectives, he adds.
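
As a hypothetical illustration of what such safeguards might look like in code, the sketch below runs basic integrity checks on an incoming data feed before it reaches an algorithm; the field names, ranges, and sample records are invented for illustration rather than taken from any real control.

    # Hypothetical sketch of automated input checks internal audit might
    # expect before data reaches a model. Fields and thresholds are invented.
    def validate_records(records):
        """Return a list of issues found in the input data feed."""
        issues = []
        required = {"customer_id", "age", "income", "region"}
        for i, rec in enumerate(records):
            missing = required - rec.keys()
            if missing:
                issues.append(f"record {i}: missing fields {sorted(missing)}")
                continue
            if not (18 <= rec["age"] <= 110):
                issues.append(f"record {i}: implausible age {rec['age']}")
            if rec["income"] < 0:
                issues.append(f"record {i}: negative income")
        return issues

    feed = [
        {"customer_id": 1, "age": 34, "income": 52000, "region": "EU"},
        {"customer_id": 2, "age": 230, "income": 41000, "region": "US"},
        {"customer_id": 3, "age": 45, "income": -100},
    ]
    for issue in validate_records(feed):
        print(issue)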

However, Colaner also says there is a significant risk of organizations turning a blind eye to how an algorithm produces data. “Too many organizations focus on the results of the process, rather than look at — or even question — the process itself,” he says. “There needs to be more skepticism around AI-produced decision-making. At the moment, however, there is a tendency to just accept the results without questioning how they were arrived at.”

The Need for Transparency

Steve Mintz, professor emeritus of accounting at California Polytechnic State University in San Luis Obispo, says there needs to be full transparency and disclosure about how AI machines are generating data and decisions, how that data is being used within the organization, and what the outcomes of such data use are, both for organizations and individuals. He says internal audit functions should be working with the organization’s IT team so they understand:

  • The technology and its risks.
  • What the technology is meant to achieve for the organization.
  • What safeguards the technology team has put in place to prevent bias.
  • What measures the team has established to alert the organization that decisions produced by the technology may be flawed.

Mintz also says internal auditors can help manage ethical risks, including the risk that internal fraudsters compromise the data. “If you can’t trust the level of transparency about how data is being used, then how can you trust the system?” he asks. “There needs to be better explainability and auditability around every part of a process in which a machine makes a decision — plain and simple.”

To check whether the source data is being used appropriately, Ali Hessami, a London-based advisor at technology standards-setter the Institute of Electrical and Electronics Engineers (IEEE), says internal auditors should ask who will ultimately use the results from the analysis. Potential recipients include board members, salespeople, employees, customers, and business partners. Will those individuals or groups use the data to facilitate business decision-making, or perhaps to help identify risks or boost sales? Hessami says organizations should ask themselves who should — and should not — be able to access the data, and what internal controls might be necessary to ensure the data is kept safe from potential unauthorized internal use or external hacking.

“It is important for internal audit to establish who will be impacted by the use of the results, how they will be impacted, and whether the rights, freedoms, or opportunities of any individuals or groups could be affected by use of the analyzed data,” Hessami says. Internal auditors, he adds, need to question whether the organization has explicit permission, as well as the data subjects’ informed consent, to access the data necessary for analysis.

Other experts agree that transparency around data collection and use is paramount. Maurice Coyle, chief data scientist at data analytics specialist Truata in Dublin, says developers, IT vendors, and IT departments should be able to justify their decisions and opinions, and audit teams should be querying those justifications to understand their root.

“Above all else, companies should always be asking developers ‘Why do you think that?’” Coyle says. “Internal audit teams should make sure they understand the reasoning behind what developers implement. Understanding the root of these decisions is the key to gaining assurance that the technology will not cause harm through its processes or outcomes.”

For Peter van der Putten, assistant professor at Leiden University in the Netherlands and director of AI decisioning at software vendor Pegasystems, companies “should favor transparency over accuracy so they know in detail how an AI program arrived at each decision and can then explain this to a customer.” Privacy regulations, such as the European Union’s General Data Protection Regulation, require that companies possess this capability.

Furthermore, van der Putten says internal auditors should ask specifically whether predictive models and the logic behind them are transparent and tested for bias. He adds that auditors should question “whether the AI systems are ‘black box’ machine learning systems, or whether it is possible to impose ethical policies, rules, and constraints on top of them to keep these learning systems under control.”
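
To illustrate what such a bias test might look like, here is a minimal, hypothetical Python sketch of a disparate-impact check comparing a model's approval rates across two groups, in the spirit of the "four-fifths" rule of thumb; the decisions and threshold are illustrative, not drawn from the article.

    # Hypothetical sketch of one simple bias test auditors could ask about:
    # comparing approval rates across groups. Data and threshold are invented.
    def approval_rate(decisions, groups, group):
        selected = [d for d, g in zip(decisions, groups) if g == group]
        return sum(selected) / len(selected)

    # 1 = approved, 0 = declined, paired with each applicant's group.
    decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1]
    groups    = ["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"]

    rate_a = approval_rate(decisions, groups, "A")   # 4/6, about 0.67
    rate_b = approval_rate(decisions, groups, "B")   # 3/6, exactly 0.50
    ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

    print(f"Group A approval rate: {rate_a:.2f}")
    print(f"Group B approval rate: {rate_b:.2f}")
    print(f"Disparate-impact ratio: {ratio:.2f}")
    if ratio < 0.8:   # common rule-of-thumb threshold
        print("Flag for review: approval rates differ materially across groups")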

Data Governance

At the heart of checking the effectiveness — as well as the shortfalls and dangers — of AI technologies, van der Putten says, is the need to establish robust AI and data governance. “AI governance will soon become a real discipline, and more importantly should not just be a matter of guidelines on paper for people and processes,” he says. “It needs to be translated and operationalized into practical guardrails and encoded into AI technical platforms, models, and rules.”

According to van der Putten, any governance framework should include “tangible definitions and levers” of trade-offs between the company’s objectives and those of the customer when it comes to automated decisions. It also should include procedures for appropriate measurement of bias in models and business rules, while recognizing that bias detection should not just be a single step in a release cycle for new models and rules. “The framework should be measured on an ongoing, continuous basis, as the most modern AI systems are actually learning and optimizing themselves live, in real time,” he says.
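
A hypothetical sketch of what moving that check from a one-time release gate to continuous measurement could look like: a fairness ratio is recomputed on each new batch of live decisions and flagged if it falls below a threshold. The data, metric, and threshold are again invented for illustration.

    # Hypothetical sketch of continuous (rather than release-time) bias
    # monitoring, run as part of a recurring job. All figures are invented.
    def disparate_impact(decisions, groups):
        rates = {}
        for g in set(groups):
            selected = [d for d, grp in zip(decisions, groups) if grp == g]
            rates[g] = sum(selected) / len(selected)
        return min(rates.values()) / max(rates.values())

    def monitor_batch(decisions, groups, threshold=0.8):
        ratio = disparate_impact(decisions, groups)
        if ratio < threshold:
            # In practice this would raise a ticket or alert the model owners.
            print(f"ALERT: disparate-impact ratio {ratio:.2f} below {threshold}")
        return ratio

    # Example: yesterday's automated decisions, checked by a daily job.
    monitor_batch([1, 1, 0, 1, 0, 0, 1, 0],
                  ["A", "A", "A", "A", "B", "B", "B", "B"])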

Tim Mackey, principal security strategist at software provider Synopsys’ Cybersecurity Research Centre in Boston, says ethically focused governance should include not only an understanding of how data was collected but “how informed any data subjects were to the current or proposed use of their data.” When consumers provide their data, he says, there is an implicit expectation that only the required minimum of data is requested, and that both usage and retention of provided data is aligned with the original transaction or consent. “When data collection, processing scope, or retention are misaligned with consumer expectations,” he says, “data governance risks increase.”

Seeking Assurance

In many organizations, IT and technology risks remain the domain of IT professionals, as they have the necessary in-house expertise to understand the process as well as the risks. But this approach presents the problem of IT functions essentially reviewing their own work and potentially downplaying risks related to any initiative for which they are responsible. As such, internal audit needs to grasp the nettle and ensure it is in a position to challenge the way AI is used in the organization and become actively involved in AI project development.

While many internal audit functions may not have the resources or in-house technical skills to audit AI technologies in the way they would like, this should not deter internal audit from doing its job — asking questions and seeking assurance. “Expert knowledge is obviously useful, but you don’t need to be a technical expert — nor do you need to understand everything about data and AI,” says Jim Pelletier, vice president of Standards and Professional Knowledge at The IIA. “You just need to know enough to be able to ask good questions, understand your knowledge gaps, and bring in the right resources when they are needed.”

Pelletier says internal auditors should approach AI just as they would handle risks associated with a software upgrade or other technology implementation. “The types of questions you need to ask to gain the necessary level of understanding and assurance are largely the same,” he says.

As trusted advisors, internal auditors need to tell management that data ethics must align with corporate ethics, Pelletier says. Ideally, they also should be involved as early as possible in the discussions about how the organization is going to use AI to further the business, and how data will be leveraged to help achieve those objectives.

“Internal audit can provide insights and advice in the establishment of project governance processes early on,” he says. “That way, the tech team will not just focus on what the technology can do, but also on achieving business objectives ethically while maintaining compliance with data privacy rules at the heart of the project. Internal audit can review what testing has been done to ensure compliance, how rigorous this testing was, and how the results were reported to — and understood by — management.”

Pelletier adds that getting involved in the project from the start also can help the organization realize its goals, especially given that IT projects often fall short of intended results. He points to surveys noting examples of project managers checking to ensure technology is functioning correctly instead of determining whether it is an appropriate solution for the business. “Having internal audit involved early and asking whether the technology is doing what it is designed to do can save a lot of time and money in the long run,” he says.

Powerful, But Not Perfect

AI is a powerful tool — but like anything else, it has its limits. Organizations should come to terms with that fact and remain skeptical about the information the technology produces. And because AI is not 100% trustworthy, internal auditors have a key role in monitoring its usage and the decision-making processes it controls.

Neil Hodge

About the Author

 

 

Neil Hodge is a freelance journalist based in Nottingham, U.K.
