While monitoring transactions, an alert bank data analyst noticed unusual payments from a computer manufacturer to a casino. Because casinos are heavily computerized, one would expect the money to flow the other way, from the casino to the computer company. The analyst alerted an investigative agent, who rapidly scoured websites, proprietary data stores, and dark web sources to find detailed information about the two parties. The data revealed that the computer manufacturer was facing a criminal indictment and a civil lawsuit. Meanwhile, the casino had lost its gambling license due to money laundering and had set up shop in another country. Further investigation revealed the computer manufacturer was using the casino to launder money before the company’s legal issues drove it out of business.
The bank’s data analyst was a machine learning algorithm. The investigative agent was an artificial intelligence (AI) agent.
AI is all around. It’s monitoring financial transactions. It’s diagnosing illnesses, often more accurately than doctors. It’s carrying out stock trades, screening job applicants, recommending products and services, and telling people what to watch on TV. It’s in people’s phones, and soon it will be driving their cars.
And it’s coming to organizations, maybe sooner than people realize. Research firm International Data Corp. (IDC) says worldwide spending on cognitive and AI systems will be $12 billion this year. It predicts spending will top $57 billion by 2021.
“If you think AI is not coming your way, it’s probably coming sooner than you think it is,” says Yulia Gurman, director of internal audit and corporate security for the Packaging Corporation of America in Lake Forest, Ill. Fresh from attending a chief audit executive roundtable about AI, Gurman says AI wouldn’t have been on the agenda a year ago. Like most of her peers present, she hasn’t had to address AI within her organization yet. Now it’s on her risk assessment radar. “Internal auditors should be alerting the board about what’s coming their way,” she says.
The Learning Algorithm
Intelligent technology has already found a place on everyday devices. That personal assistant on the kitchen counter or on the phone is an AI. Alexa, Cortana, and Siri can find all sorts of information for people, and they can talk to other machines such as alarm systems, climate control, and cleaning robots.
Yet most people don’t realize they are interacting with AI. Nearly two-thirds of respondents to a recent survey by software company Pegasystems say they have not interacted with AI, or aren’t sure whether they have. But questions about the technologies they use — such as personal assistants, email spam filters, predictive search terms, recommended news on Facebook, and online shopping recommendations — reveal that 84 percent are interacting with AI, according to the What Consumers Really Think About AI report.
What makes AI possible is today’s massive availability of data and computing power, as well as significant advances in the quality of the machine learning algorithms that make AI applications possible, says Pedro Domingos, a professor of computer science at the University of Washington in Seattle and author of The Master Algorithm. When AI researchers like Domingos talk about the technology, they often are referring to machine learning. Unlike other computer applications that must be written step-by-step by people, machine learning algorithms are designed to program themselves. The algorithm does this by analyzing huge amounts of data, learning about that data, and building a predictive model based on what it’s learned. For example, the algorithm can build a model to predict the risk that a person will default on his or her credit card based on various factors about the individual, as well as historical factors that lead to default.
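In spirit, the credit-default example looks like the following Python sketch: a toy logistic regression trained by gradient descent on a handful of made-up applicant records. The features, figures, and weights are all illustrative assumptions, not any bank’s actual model.

```python
import numpy as np

# Hypothetical training data: each row is [credit utilization, missed payments,
# years of history]; label 1 = defaulted, 0 = repaid. A real model would learn
# from millions of historical records, not eight made-up rows.
X = np.array([
    [0.90, 4, 1],
    [0.80, 3, 2],
    [0.70, 5, 1],
    [0.95, 2, 3],
    [0.20, 0, 10],
    [0.30, 1, 8],
    [0.10, 0, 12],
    [0.25, 0, 7],
], dtype=float)
y = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=float)

# Standardize each feature so gradient descent behaves well.
X = (X - X.mean(axis=0)) / X.std(axis=0)

# Logistic regression trained by plain gradient descent: the "learning"
# is just adjusting weights until predictions fit the historical outcomes.
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted default probability
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * float(np.mean(p - y))

def default_risk(features):
    """Score an applicant (features standardized the same way as X)."""
    return float(1.0 / (1.0 + np.exp(-(features @ w + b))))
```

After training, `default_risk` assigns high scores to profiles resembling past defaulters and low scores to profiles resembling reliable payers; that learned scoring rule is the "predictive model" the paragraph describes.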
Alexa, Are You Monitoring Me?
Between them, the world’s e-commerce, social media, and technology companies are getting to know people very well. Amazon knows their shopping habits, Apple and Google know what they search for and what questions they ask, Facebook knows what engages them online, and Netflix knows what they watch on TV.
Artificial intelligence researcher Pedro Domingos says the companies using personalization algorithms are getting to the point where they could build a good model of each of their customers. But if the data they had on those people were consolidated in one place, it would enable an algorithm to build a comprehensive model of each person. Domingos calls this the personal data model.
Imagine an AI algorithm that worked on your behalf, he says: searching for your next car, applying for jobs, even finding you a date. “The big technology companies are in a competition to see who can do this better,” he says. “This is something that we’re going to see pick up steam in the next several years.”
Whether that is a good thing or a bad thing may depend on who controls that data. That’s something that worries John C. Havens, executive director of the IEEE Global AI Ethics Initiative. He says the misunderstanding and misuse of personal data is AI’s biggest risk. Despite the benefits of personalization, “most people don’t understand the depth of how their data is used by second and third parties,” he notes.
Havens says there’s a need to reorient that approach now to put people at the center of their data. Such an approach would allow people to gather copies of their data in a personal data cloud tied to an identity source, and set terms and conditions for how their data can be used. “People can still be tracked and get all the benefits,” Havens explains. “But then they also get to say, ‘These are my values and ethics, and this is how I’m willing to share my data.’ It doesn’t mean the seller will always agree, but it puts the symmetry back into the relationship.”
Similarly, Domingos sees an opportunity for a new kind of business that could safeguard a personal data model in the same way that a bank protects someone’s money and uses it on the person’s behalf. “It would need to have an actual commitment to your privacy and to always work in your best interest,” he says. “And it has to have an absolute commitment to ensuring it is secure.”
Driven by Data
Using AI to make predictions takes huge amounts of data. But data isn’t just the fuel for AI; it’s also the killer application. In recent years, organizations have been trying to harness the power of big data. The problem is there’s too much data for people and existing data mining tools to analyze quickly.
That is among the reasons why data-driven businesses are turning to AI. Five industries — banking, retail, discrete manufacturing, health care, and process manufacturing — will each spend more than $1 billion on AI this year and are forecast to account for nearly 55 percent of worldwide AI spending by 2021, according to IDC’s latest Worldwide Semiannual Cognitive Artificial Intelligence Systems Spending Guide. What these industries have in common is lots of good data, says David Schubmehl, research director, Cognitive/AI Systems, at IDC. “If you don’t have the data, you can’t build an AI application,” he explains. “Owning the right kind of data is what makes these uses possible.”
Retail and financial services are leading the way with AI. In retail, Amazon’s AI-based product recommendation solutions have pushed other traditional and online retailers like Macy’s and Wal-Mart Stores Inc. to follow suit. But it’s not just the retailers themselves that are driving product recommendations, Schubmehl says. Image recognition AI apps can enable people to take a picture of a product they saw on Facebook or Pinterest and search for that product — or something similar and less expensive. “It’s a huge opportunity in the marketplace,” he says.
Meanwhile, banks and financial service firms are using AI for customer care and recommendation systems for financial advice and products. Fraud investigation is a big focus. “The idea of using machine learning and deep learning to connect the dots is something that is very helpful to organizations that have traditionally relied on experienced investigators to have that ‘aha moment,’” Schubmehl says.
That’s what happened with the casino and the computer manufacturer. “The way AI works in that scenario is to say, ‘Something is different. Let’s bring it back to the central brain and analyze whether this is risky or not risky,’” says David McLaughlin, CEO and founder of AI software company QuantaVerse, based in Wayne, Pa. “The technology is never going to accuse somebody of a crime or a regulatory violation. What it’s going to do is allow the people who need to make that determination to focus on the right areas.”
Currently, IDC says automated customer service agents and health-care diagnostic and treatment systems are the applications where organizations are investing the most. Some of the AI uses expected to rise the most over the next few years are intelligent processing automation, expert shopping advisors, and public safety and emergency response.
Regardless of the use, Schubmehl says it’s the business units that are pushing organizations to adopt AI to advance their business and deal with potential disrupters. Because of the computing power needed, most industries are turning to cloud vendors, some of whom may also be able to help build machine learning algorithms.
Is AI Something to Fear?
Despite its potential, there is much fear about the risks that AI poses to both businesses and society at large. Some worry that machines will become too smart or get out of control.
There have been some well-publicized problems. Microsoft developed an AI chatbot, Tay, that after interacting with people started using insulting and racist language and had to be shut down. More recently, Facebook shut down an experimental AI system after its chatbots started communicating with each other in their own language, in violation of their programming. In the financial sector, two recent stock market “flash crashes” were attributed to AI applications with unintended consequences.
Respondents to the World Economic Forum’s (WEF’s) 2017 Global Risks Perception Survey rated AI highest in potential negative consequences among 12 emerging technologies. Specifically, AI ranked highest among technologies in economic, geopolitical, and technological risk, and ranked third in societal risk, according to the WEF’s Global Risks Report 2017.
Employment One of the biggest concerns is whether AI might eliminate many jobs and what that might mean to people both economically and personally. Take truck driving, one of the most common occupations. More than 3 million people in the U.S. earn their living driving trucks and vans. Consulting firm McKinsey predicts that one-third of commercial trucks will be replaced by self-driving vehicles by 2025.
The Jobs Question
By now, internal auditors may be asking themselves, “Is AI going to take my job?” After all, an Oxford University study rated accountants and auditors among the professionals most vulnerable to automation. Of course, internal auditors aren’t accountants. But are their jobs safe?
Actually, AI may be an opportunity, says IDC’s David Schubmehl. He says many of the manual processes internal auditors review are going to be automated. Auditors will need to check how machine learning algorithms are derived and validate the data on which they are based. And, they’ll need to help senior executives understand AI-related risks. “There’s going to be tremendous growth in AI-based auditing, looking at risk and bias, looking at data,” Schubmehl explains. “Auditors will help identify and certify that machine learning and AI applications are being fair.”
Using AI to automate business processes will create new risks for auditors to address, says Deloitte & Touche LLP’s Will Bible. He likens it to when organizations began to deploy enterprise resource planning systems, which shifted some auditors’ focus from reviewing documents to auditing system controls. “I don’t foresee an end to the audit profession because of AI,” he says. “But as digital transformation occurs, I see the audit profession re-evaluating the risks that are relevant to the audit.”
According to the Pew Research Center’s recent U.S.-based Automation in Everyday Life survey, 72 percent of respondents are worried about robots doing human jobs. But only 30 percent think their own job could be replaced (see “The Jobs Question”). That may be wishful thinking. “However long it takes, there’s not going to be any vertical industry where there’s not the opportunity to automate humans out of a job,” says John C. Havens, executive director of the IEEE Global AI Ethics Initiative. He says that will be the case as long as businesses are measured primarily by their ability to meet financial targets. “The bigger question is not AI. It’s economics.”
Ethics With organizations racing to develop AI, there is concern that human values will be lost along the way. Havens and the IEEE AI Ethics Initiative are advocating for putting applied ethics at the front end of AI development work. Consider the emotional attachment of children or elderly persons who come to think of a companion robot in the same way they would a person or animal. And who would be accountable in an accident involving a self-driving car — the vehicle or the person riding in it?
“The phrase we use is ‘ethics is the new green,’” Havens explains, likening AI ethics to the corporate responsibility world. “When you address these very human aspects of emotion and agency early on — much earlier than they are addressed now — then you build systems that are more aligned to people’s values. You avoid negative unintended consequences and you identify more positive opportunities for innovation.”
Privacy and Security Using AI to gather data poses privacy risks for both individuals and businesses. All those personal assistant requests, product recommendations, and customer service interactions are gathering data on people — data that organizations eventually could use to build a comprehensive model about their customers. Organizations using personalization agents must walk a fine line. “You want to personalize something to the point where you can get the purchase offer,” Schubmehl says, “but you don’t want to personalize it so much that they say, ‘This is really creepy and knows stuff about me that I don’t want it to know.’”
All that data creates a compliance obligation for organizations. It is also valuable to cyber attackers.
Output Although AI has potential to help organizations make decisions more quickly, organizations need to determine whether they can trust the AI model’s recommendations and predictions. That all depends on the reliability of the data, Domingos says. If the data isn’t reliable or it’s biased, then the model won’t be reliable either. Moreover, machine learning algorithms can overinterpret data or interpret it incorrectly. “They can show patterns,” he points out. “But there are other patterns that would do equally well at explaining what you are seeing.”
Control If machine learning algorithms become too smart, can they be controlled? Domingos says there are ways to control machine learning algorithms, most notably by raising or lowering their ability to fit the data: limiting the amount of computation, using statistical significance tests, and penalizing the complexity of the model.
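One of those levers, penalizing the complexity of the model, can be seen in a toy Python sketch. The data and the penalty value are arbitrary illustrations: ridge regression adds a cost for large coefficients, which keeps an overly flexible model from contorting itself around noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# A dozen noisy observations of a simple underlying trend.
x = np.linspace(0.0, 1.0, 12)
y = x + rng.normal(scale=0.15, size=x.size)

# A degree-9 polynomial has far more flexibility than 12 noisy points justify.
A = np.vander(x, 10)

# Unconstrained least squares: free to chase every wiggle of the noise.
unconstrained, *_ = np.linalg.lstsq(A, y, rcond=None)

# Ridge regression adds a penalty on the squared size of the coefficients,
# one concrete way of "penalizing the complexity of the model."
penalty = 1e-2
penalized = np.linalg.solve(A.T @ A + penalty * np.eye(A.shape[1]), A.T @ y)

# The penalized coefficients come out much smaller, meaning a smoother,
# less overfit curve.
print(np.linalg.norm(unconstrained), np.linalg.norm(penalized))
```

Dialing the penalty up or down adjusts how tightly the model is allowed to fit the data, which is exactly the kind of control knob Domingos describes.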
He says one big misconception about AI is that algorithms are smarter than they actually are. “Machine learning systems are not very smart when they are making important decisions,” he says. Because they lack common sense, they can make mistakes that people wouldn’t make. And it’s difficult to know from looking at the model where the potential for error is. His solution is making algorithms more transparent and making them smarter. “The risk is not from malevolence. It’s from incompetence,” he says. “To reduce the risk from AI, what we need to do is make the computer smarter. The big risk is dumb computers doing dumb things.”
Knowledge Domingos says concerns about AI’s competence apply as well to the people who are charged with putting it to use in businesses. He sees a large knowledge gap between academic researchers working on developing AI and the business employees building machine learning algorithms, who may not understand what it is they are doing. And he says, “Part of the problem is their bosses don’t understand it either.”
Governance Whether AI can even be governed or regulated is a question the WEF’s Global Risks Report raises. Components of AI fall under various standards bodies: industrial robots under ISO standards, domestic robotics under product certification regulations, and in some cases the data used for machine learning under data governance and privacy regulations. On their own, those pieces may not be a big risk, but collectively they could be a problem. “It would be difficult to regulate such things before they happen,” the report notes, “and any unforeseeable consequences or control issues may be beyond governance once they occur.”
AI in IA
Questions of risk, governance, and control are where internal auditors come into the picture. There are similarities between deploying AI and implementing other software and technology, with similar risks, notes Will Bible, audit and assurance partner with Deloitte & Touche LLP in Parsippany, N.J. “The important thing to remember is that AI is still computer software, no matter what we call it,” he says. One area where internal auditors could be useful, Bible says, is assessing controls around the AI algorithms — specifically whether people are making sure the machine is operating correctly.
If internal auditors are just getting started with AI, their external audit peers at the Big 4 firms are already putting it to work as an audit tool. Bible and his Deloitte colleagues are using optical character recognition technology called Argus to digitize documents and convert them to a readable form for analysis. This enables auditors to use data extraction routines to locate audit-relevant data within a large population of documents.
For auditors, AI speeds the process of getting to a decision point and improves the quality of the work because it makes fewer mistakes in data extraction. “You can imagine a day when you push a button and you’re given the things you need to follow up on,” Bible says. “There’s still that interrogation and investigation, but you get to that faster, which makes it a better experience for audit clients.”
QuantaVerse’s McLaughlin says internal auditors could take AI even further by applying it to areas such as fraud investigation and compliance work. For example, rather than relying on auditors or compliance personnel to catch potential anti-bribery violations, internal audit could use AI to analyze an entire data set of expense reports to identify cases of anomalous behavior that require the most scrutiny. “Now internal audit has the five cases that really need a human to understand and investigate,” McLaughlin says. “That dramatically changes the effectiveness of an internal audit department to protect the organization.”
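A bare-bones version of that kind of expense-report screen might look like the Python sketch below. The report IDs and amounts are hypothetical, and a simple robust statistical score stands in for the machine learning models McLaughlin describes.

```python
import statistics

# Hypothetical expense amounts by report ID; in a real engagement this would
# be the full population of expense lines pulled from the accounting system.
expenses = {
    "R-001": 112.40, "R-002": 98.10, "R-003": 105.75, "R-004": 101.30,
    "R-005": 2950.00,  # unusually large
    "R-006": 95.60, "R-007": 108.90, "R-008": 99.95,
    "R-009": 1875.50,  # unusually large
    "R-010": 103.20,
}

def flag_anomalies(amounts, threshold=2.0):
    """Flag reports whose amount sits far from the population's center.
    Using the median keeps the outliers themselves from dragging the
    baseline upward, as they would with a simple mean."""
    values = list(amounts.values())
    center = statistics.median(values)
    spread = statistics.median(abs(v - center) for v in values) or 1.0
    return sorted(rid for rid, v in amounts.items()
                  if abs(v - center) / spread > threshold)

print(flag_anomalies(expenses))  # only the inflated reports surface for review
```

Instead of sampling a handful of reports, the routine scores every report and hands investigators only the few that warrant human scrutiny.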
The key there is making sure a person is still in the loop, Bible says. “The nature of AI systems is you are throwing them into situations they probably have not seen yet,” he notes. A person involved in the process can evaluate the output and correct the machine when it is wrong.
Bible and McLaughlin both advise internal audit departments to start with a small project before expanding their use of AI tools. That goes for the organization, as well. Organizations first will need to take stock of their data assets and get them organized, a task where internal auditors can provide assistance.
For audit executives such as Gurman, the objective is to get up to speed as fast as possible on AI and all its related risks, so they can educate the audit committee and the board. “There is a lot of unknown,” she concedes. “What risks are we bringing into the organization by being more efficient and using robots instead of human beings? Use of new technologies brings new risks.”
AI in the Real World
Still think artificial intelligence is science fiction? Here are some examples of how companies are putting it to use.
Agriculture. Produce grower NatureSweet uses AI to examine data that it can apply to better control pests and diseases that affect crop production. The company estimates AI could help it increase greenhouse output by 20 percent annually, CNN reports. Meanwhile, equipment maker John Deere recently spent $305 million to purchase robotics firm Blue River Technology, whose AI-based equipment scans fields, assesses crops, and sprays weeds only where they are present. And Coca-Cola uses AI algorithms to predict weather patterns and other conditions that might impact crop yields for its orange juice products.
Aviation. GE Digital uses AI to cull through data collected from sensors to assess the safety and life expectancy of jet engines, including their likelihood for failure. The company estimates that a single flight can generate as much data as a full day of Twitter posts.
Finance. Machine learning enables lawyers and loan officers at JPMorgan to identify patterns and relationships in commercial loan agreements, a task that once required 360,000 man hours, Bloomberg reports. Bank of America uses natural language technology to extract information from voice calls that might reveal things like sales practice or regulatory issues. On the stock market, an estimated 60 percent of Wall Street trades are executed by AI, according to a book by Christopher Steiner.
Marketing. Kraft used an AI algorithm to analyze customer preference data that helped it make changes to its cream cheese brand.
Retail. Fashion retailer Burberry applies AI-based image recognition technology to determine whether products in photographs are genuine, spotting counterfeits with 98 percent accuracy, according to a Forbes Online report.