The New York Times reported last week on what it called a "summer of crippling ransomware attacks" on Texas cities. Hackers are simultaneously holding hostage the computer systems of 22 cities across the state and demanding millions of dollars. Most of the targets are small towns with limited resources, the kind of organizations least likely to update their cybersecurity systems or back up their data, according to the Times.
This is another reminder of the vulnerability of IT systems. Lean cybersecurity budgets in the public sector make those systems particularly susceptible to breaches. In light of these latest incidents, here is a blog post I published last year about a ransomware attack on Atlanta's IT system. Then and now, it offers several valuable lessons and insights for internal auditors.
In the summer of 2017, internal auditors for the city of Atlanta warned officials that their IT systems could be easily compromised if they weren't fixed immediately. The audit report minced no words, calling out the lack of resources (tools and people) available to address the "thousands of vulnerabilities" and characterizing the situation as a "significant level of preventable risk exposure," according to news media reports.
The city apparently began to implement certain security measures, but it was a classic case of too little, too late. A ransomware attack — essentially digital extortion — crippled the city's computer network and took many departments nearly into the "Dark Ages" of pen and paper. The breach even shut down Wi-Fi service at Atlanta International Airport. Fortunately, critical services such as those supporting emergency responders (and flights at the nation's busiest airport) were not affected.
It was a textbook example of a ransomware attack. And after years of such breaches around the globe, the city's response to internal auditors' dire warnings should have been textbook, as well. It clearly was not.
What happened? Why did Atlanta, even with ample warning, fail to implement recommended controls to harden its systems?
While the reasons are undoubtedly numerous and complex, everything points to a breakdown of the three-lines-of-defense risk management model. The model requires management, the first line of defense, to own and manage risks by maintaining and executing effective internal controls — including corrective actions that internal audit identifies to address process and control deficiencies.
The second line encompasses risk and compliance functions. These vary according to industry, affecting the nature of their exact responsibilities. However, in general, they support the first line by helping to build or monitor the first line's controls.
The third line, as we know, is internal audit, which provides the governing body and senior management with comprehensive assurance on the effectiveness of governance, risk management, and internal controls, including recommendations for addressing vulnerabilities. It is an effective model, but only when all three lines play their part, and management listens to the third line.
In Atlanta's case, management failed in its responsibility to promptly address the recommendations made by internal audit, rendering the model ineffective. Granted, internal audit may sometimes contribute to the difficulty of management's role by failing to communicate the value or importance of a recommendation, prioritize reports according to the most critical risks, or gain management's buy-in early in the process.
But that's why we in the internal audit profession must do what is necessary to make it easy — nearly unavoidable — for management to understand the magnitude of such risks, acknowledge our recommendations, and set those recommendations in motion.
An organization's ability to act on what it knows becomes even more important as the frequency and impact of cyberattacks continue to rise. Not long after Atlanta's ransomware attack, we learned that a data breach of Under Armour's MyFitnessPal app potentially compromised 150 million accounts. Phishing emails are widely recognized as a common delivery vehicle for viruses, yet some companies fail to educate employees on what to look for and how to respond to a suspicious message.
Many companies know that hackers incessantly find and exploit software vulnerabilities, for which the software's developers issue patches as soon as each new "crack" is discovered. Still, patches often go unapplied. Passwords, too, are known to be an open door to data breaches (according to Verizon's 2017 Data Breach Investigations Report, 80 percent of hacking-related breaches leveraged stolen, weak, or guessable passwords), yet some organizations fail to establish a policy requiring strong passwords, changed frequently.
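To make the password point concrete, here is a minimal sketch of the kind of automated check a strong-password policy might enforce at account creation. The specific rules (a 12-character minimum and four required character classes) are illustrative assumptions, not a standard or any organization's actual policy:

```python
import re

def meets_policy(password: str, min_length: int = 12) -> bool:
    """Return True if the password satisfies an illustrative policy:
    minimum length plus lowercase, uppercase, digit, and symbol."""
    if len(password) < min_length:
        return False
    required_patterns = [
        r"[a-z]",    # at least one lowercase letter
        r"[A-Z]",    # at least one uppercase letter
        r"\d",       # at least one digit
        r"[^\w\s]",  # at least one symbol
    ]
    return all(re.search(p, password) for p in required_patterns)

print(meets_policy("Str0ng!Passw0rd"))  # True
print(meets_policy("password"))         # False: too short, no mix
```

Even a simple gate like this, applied consistently, closes the "open door" the Verizon report describes far more reliably than a policy that exists only on paper.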
It is easy to assume that hacking is something that happens to someone else: "We are too small to attract the attention of hackers." "We don't have any information worth stealing." But that is classic "head in the sand" thinking. Virtually any organization that has data is at risk, which means every organization is at risk, short of one on a small, isolated island that has found a way to stay off the grid.
Almost two years ago, following an even bigger cyberattack that hit computer networks worldwide, I penned a blog post titled "Does Your Organization's Cyber Culture Make You #Wannaaudit?" I wrote then: "It is unfathomable to me that such attacks continue to succeed." These days, the prospect of a cyberattack should be continuously on our radar, and internal audit's recommendations when vulnerabilities are found should be heard and given immediate attention.
As organizations continue to suffer the consequences of cyberattacks, the least we can do is take some lessons from their experience.
Organizations face enough risks for which they have no warning or defense mechanisms. Cybercrime need not be one of them.
As always, I look forward to your comments.