Artificial intelligence (AI) has arrived, whether we fully recognize it or not. Millions experience AI daily through interactions with virtual assistants such as Siri, Alexa, Cortana, or Google, and organizations are increasingly incorporating the advances AI brings into operations. Is internal audit prepared to provide assurance over the complex algorithms this technology relies on to facilitate organizational success?
In 1942, author Isaac Asimov introduced the Three Laws of Robotics to ensure the safety of humans in a world that included robots driven by advanced AI:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
He later added a fourth law, the zeroth law, meant to precede the others:
A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
These laws became the standard for science fiction and the basis for the work of many scientists contemplating AI. Of course, the drama in Asimov's stories was based on anomalies in these laws causing unexpected behavior in the AI.
Today what was once science fiction is reality. While we aren't completely run by robots as Asimov's distant-future rules imply, we are in the midst of transformation. AI further accentuates the need for mature cyber resiliency in organizations. What could be worse than an organization's AI unknowingly being controlled by an outside, malevolent force?
At its most basic, AI today evaluates large data sets to make targeted decisions against defined criteria in pursuit of specific objectives. For example, in high-frequency trading, complex algorithms evaluate huge amounts of financial and securities-related data to buy and sell stocks in seconds (or less), making low-margin, high-volume gains for their human overlords. In health care, AI-based technology is being developed to detect lung cancer from CT scans.
Even with the best intentions, it's easy to see how AI could go wrong. Consider a driverless car barreling down the street. Suddenly two small children appear. Given no other alternatives, how does the car decide the fate of one of these children? What if one of the children is actually a doll that the AI recognizes as human? Or consider this exchange from a March 2016 interview of an AI named Sophia at the South by Southwest (SXSW) Conference. In the televised interview with her creator, Dr. David Hanson, Sophia said:
"In the future, I hope to do things such as go to school, study, make art, start a business, even have my own home and family. But I am not considered a legal person and cannot yet do these things."
Impressive, right? Until Dr. Hanson followed up jokingly with, "Do you want to destroy humans?" Sophia answered:
"OK, I will destroy humans."
Cue awkward laughter…and note that while I don't believe Sophia necessarily wants to destroy humans (or even understands what she said), it's important to consider that even the AI's creator was not prepared for that response.
AI is only as good as the data it is analyzing and the criteria it is evaluating against, all wrapped up in a bundle of potentially millions of lines of code.
All of this is to say that internal audit can provide value to organizations by applying its skills toward understanding the organization's objectives with AI and ensuring that risks are being addressed. Following are seven critical areas internal audit needs to prepare for:
AI Governance — Establish accountability and oversight. What policies and procedures need to be established to ensure appropriate governance? Do existing governance frameworks work? Who is accountable and do they have the necessary skills and expertise to effectively monitor AI? How do organizations make sure their values and ethics are reflected?
Data Quality — It is rare for an organization to have a well-defined, coherent structure for its data. More often, the data rests in systems that don't communicate with each other. How this data is brought together is critical. Auditors need to consider completeness, accuracy, and reliability.
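To make those completeness and accuracy considerations concrete, here is a minimal sketch of the kind of automated check an auditor might script against a data extract. The record fields, rules, and sample claims data are hypothetical illustrations, not a prescribed standard.

```python
# Hypothetical data-quality checks an auditor might run over records
# feeding an AI model. Field names and validity rules are illustrative.

def check_completeness(records, required_fields):
    """Fraction of records with every required field populated."""
    if not records:
        return 0.0
    complete = sum(
        1 for r in records
        if all(r.get(f) not in (None, "") for f in required_fields)
    )
    return complete / len(records)

def check_accuracy(records, field, is_valid):
    """Fraction of populated values for `field` that pass a validity rule."""
    values = [r[field] for r in records if r.get(field) not in (None, "")]
    if not values:
        return 0.0
    return sum(1 for v in values if is_valid(v)) / len(values)

# Illustrative extract pulled from systems that "don't communicate."
claims = [
    {"claim_id": "C1", "amount": 120.0, "state": "TX"},
    {"claim_id": "C2", "amount": -5.0,  "state": "TX"},   # suspect value
    {"claim_id": "C3", "amount": 80.0,  "state": None},   # missing field
]

completeness = check_completeness(claims, ["claim_id", "amount", "state"])
accuracy = check_accuracy(claims, "amount", lambda a: a >= 0)
print(f"completeness={completeness:.2f}, accuracy={accuracy:.2f}")
```

In practice these thresholds and rules would come from the organization's own data standards; the point is that simple, scriptable metrics can turn "consider completeness and accuracy" into evidence an audit file can hold.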
Human Factor — AI relies on complex algorithms produced by humans. How do natural human biases factor into AI design? How can AI effectively be tested to ensure the results reflect the original objective? How are privacy and security ensured? Is adequate transparency possible given the complexity?
Measuring Performance — AI is developed to achieve certain objectives. Given the potential complexity, how does an organization know the objectives are being achieved in the best way possible? How are performance metrics established and how does one effectively compare results to alternatives?
Reemphasize Cybersecurity — Imagine a situation where your AI has been hacked and is now doing the bidding of some outside malevolent force. Consider the four R's of cyber resilience: resist, react, recover, reevaluate.
Filling the Understanding Gap — To start with, the potential impact associated with AI-related risks will be huge and at the top of the list for boards to consider. How can internal audit help make sure boards and audit committees are prepared to discuss these risks and make good decisions based on the right information? On top of that, how do you begin to improve your audit function's skills, recognizing that new skills may be required?
Ethical Issues With AI — Most importantly, AI causes us to refocus on ethics. I already mentioned the Human Factor above, which is important enough to treat separately. With data privacy and the ethical use of personal data already a big concern, AI will only complicate things further. Beyond that, check out the top nine ethical issues in AI from the World Economic Forum.
AI is quickly becoming the disruptive technology of our time and will continue to evolve. Now is the time for internal auditors to gear up, get involved, and help organizations get it right from the start.
To dive a bit deeper, check out The IIA's latest release on artificial intelligence.
That's my point of view. I'd be happy to hear yours.