Deepfake Deception

Artificial intelligence is opening a new avenue to defraud organizations.


The CEO just called asking you to send a wire transfer. But are you sure it's the CEO? The voice that sounds like the organization's leader may be a deepfake, an audio or video file created using artificial intelligence.

Deepfakes are becoming the latest lure in phishing schemes, PC Magazine reports. Recently, hackers tricked a managing director at a British energy company into authorizing a $243,000 wire transfer to an account in Hungary by creating a fake voice model that sounded like the company's CEO, according to The Wall Street Journal. In an email, the employee told the company's insurance carrier, Euler Hermes, that "the voice was so lifelike that he felt he had no choice but to comply," The Washington Post says. Cybersecurity firm Symantec told the Post that it knew of three recent incidents in which attackers mimicked the voices of executives to defraud companies.

Lessons Learned

Any new technology or societal advance inevitably seems to create opportunities for fraudsters to profit at the expense of organizations and individuals. As this news story illustrates, the threat of deepfake fraud and phishing is here and likely to grow. In fact, deepfake audio and video are becoming cheap and easy to create with off-the-shelf computers and software, and how-to videos are showing up on social media.

Deepfake video and audio files are not necessarily bad, as we have seen with some educational and comedic videos on late-night TV. The problem arises when they are used for crime. What can regulators, organizations, and internal auditors do to identify and counter this threat before it causes damage?

  • First and foremost, fraud detection and prevention is a cat-and-mouse exercise — what works now may not last as a long-term solution. Therefore, regulators, organizations, and internal auditors need to educate themselves on how the AI behind deepfakes can be used to defraud.

  • Implementing a two-step verification process wherever sensitive information, money, or decisions are sought is essential. Auditors should keep in mind the principle of "never trust, always verify." That verification can be as simple and low-tech as a mandatory return phone call to confirm the source. Technology-based verification includes requiring the requester to enter an encrypted passcode through a separate channel, and subjecting the request, regardless of its form, to computer-based audio/video analysis to confirm its authenticity before taking action (a minimal passcode sketch appears after this list).

  • While human beings inevitably will bear the brunt of responding to these deepfake fraud attacks, a machine-based approach will be more effective, but only if people learn more about this threat and are equipped to address it.

  • One tool that shows promise is the recurrent neural network (RNN). An RNN is a class of artificial neural network in which connections between nodes form a directed graph along a temporal sequence, allowing the network to exhibit temporal dynamic behavior that can be applied to handwriting, speech, or visual recognition. Such networks can be trained to identify inconsistencies.

    Applied to video deepfakes, an RNN could identify inconsistencies in lighting conditions, shadows, reflections, or even an entire face, including physiological elements such as mouth movement, blinking, and breathing. This is possible because the algorithms used to build a deepfake work frame by frame and cannot retain what they generated for previous frames (a minimal sketch of this sequence-based approach appears after this list).

    While RNN technology can be expensive and may be best suited to protecting against deepfakes of senior executives, software companies are developing products that will be more cost-scalable and readily deployed across larger organizations. Such technology could be used in real time to verify the authenticity of a video or audio request as it arrives. For example, Adobe has developed AI that can detect faces that have been manipulated in Photoshop. Another example pairs blockchain technology with AI to create a digital signature that cannot be altered without detection, for embedding in legitimate audio and video (a minimal signing sketch appears after this list).

  • Academic research and collaboration also are needed to understand deepfakes and other forms of manipulated media. Because deepfake videos can go viral on social media, these platforms already are working to combat the threat.

    For example, Facebook deploys engineering teams that can spot manipulated photos, audio, and video. In addition to using software, Facebook and other social media companies hire people to look for deepfakes manually. Similarly, the AI Foundation, a nonprofit organization that focuses on human and AI interaction, conducts research into these issues.
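
To make the passcode idea in the second point concrete, here is a minimal sketch, in Python, of a time-based one-time code check of the kind an approver could demand before acting on a voice request. The shared secret, the 60-second window, and all function names are illustrative assumptions, not a prescribed control design.

```python
# A minimal sketch of the passcode half of a two-step check, using only the
# Python standard library. The shared secret, 60-second window, and function
# names are illustrative assumptions, not a prescribed control design.
import hashlib
import hmac
import time

def one_time_code(secret: bytes, interval: int = 60) -> str:
    """Derive a short code from a shared secret and the current time window."""
    window = int(time.time() // interval).to_bytes(8, "big")
    digest = hmac.new(secret, window, hashlib.sha256).digest()
    offset = digest[-1] & 0x0F                       # pick 4 bytes of the digest
    number = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return f"{number % 1_000_000:06d}"               # reduce to a 6-digit code

def verify_request(spoken_code: str, secret: bytes) -> bool:
    """The code quoted by the requester must match the one derived locally."""
    return hmac.compare_digest(spoken_code, one_time_code(secret))

if __name__ == "__main__":
    secret = b"secret-provisioned-out-of-band"       # shared in advance, never spoken
    code = one_time_code(secret)                     # generated on the requester's device
    print("approve transfer?", verify_request(code, secret))
```

Because the secret is provisioned in advance through a separate channel, a cloned voice alone cannot produce a valid code.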
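The RNN point can likewise be sketched. The PyTorch fragment below illustrates only the temporal shape of the approach: it assumes per-frame feature vectors produced by a separate extractor (not shown), and the layer sizes and untrained weights are placeholders, not a working detector.

```python
# Minimal sketch of an RNN-based deepfake screen: per-frame feature vectors
# (e.g., from a pretrained CNN, not shown) are scored as a temporal sequence.
# Layer sizes and the feature pipeline are illustrative assumptions.
import torch
import torch.nn as nn

class FrameSequenceClassifier(nn.Module):
    def __init__(self, feature_dim: int = 512, hidden_dim: int = 128):
        super().__init__()
        # The LSTM carries state across frames, so frame-to-frame
        # inconsistencies (lighting, blinking, mouth movement) can
        # influence the final score.
        self.rnn = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, num_frames, feature_dim)
        _, (last_hidden, _) = self.rnn(frames)
        return torch.sigmoid(self.head(last_hidden[-1]))  # P(clip is fake)

if __name__ == "__main__":
    model = FrameSequenceClassifier()
    clip = torch.randn(2, 30, 512)   # two clips, 30 frames of features each
    print(model(clip))               # untrained scores, for shape-checking only
```

Because the network carries state from frame to frame, it can learn to penalize exactly the inconsistencies, such as lighting or blinking, that a frame-by-frame generator cannot keep consistent.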
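Finally, the tamper-evident signature idea can be illustrated. The sketch below signs a hash of the media with the third-party cryptography package; anchoring that signature in a blockchain, as the products described above do, is beyond this fragment, and the key handling is deliberately simplified.

```python
# Minimal sketch of signing media so that later tampering is detectable.
# Uses the third-party "cryptography" package (pip install cryptography);
# anchoring the signature in a blockchain is out of scope here.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_media(media: bytes, key: Ed25519PrivateKey) -> bytes:
    """Sign a digest of the media; the signature travels with the file."""
    return key.sign(hashlib.sha256(media).digest())

def is_authentic(media: bytes, signature: bytes, key: Ed25519PrivateKey) -> bool:
    """Any edit to the media changes its digest and invalidates the signature."""
    try:
        key.public_key().verify(signature, hashlib.sha256(media).digest())
        return True
    except InvalidSignature:
        return False

if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    video = b"...raw audio/video bytes..."
    sig = sign_media(video, key)
    print(is_authentic(video, sig, key))              # True
    print(is_authentic(video + b"tamper", sig, key))  # False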
Art Stewart

About the Author

Art Stewart is an independent management consultant with more than 35 years of experience in internal audit, financial management, performance measurement, governance, and strategic policy planning.
