Cyber Guidance Overload<p>In addressing cyber risks, internal audit departments need to leverage industry frameworks to perform audits in line with current practices. However, the constant release of new cybersecurity frameworks and guidance makes it difficult for auditors to keep up with developments and ensure they are auditing against the latest frameworks. </p><p>Although cybersecurity has become a top risk for boards of directors and audit committees, organizations worldwide do not follow a common comprehensive framework. Instead, guidance organizations such as the Committee on Payments and Market Infrastructures (CPMI), International Organization for Standardization (ISO), U.S. Federal Financial Institutions Examination Council (FFIEC), and U.S. National Institute of Standards and Technology (NIST) have released separate cybersecurity frameworks. </p><p>These frameworks contain many of the same concepts. Some frameworks go beyond those basics to detail maturity levels that organizations can measure themselves against to see whether they are meeting the framework's target cybersecurity objectives. By evaluating each framework and selecting the one that best fits the organization's strategic vision, culture, and security posture, internal audit departments can assess the right risks and provide effective assurance on their organization's state of cybersecurity.</p><h2>Which Framework? </h2><p>One of the first steps during a cybersecurity audit is determining which framework to use and how deep into that framework internal audit is willing to go. For example, each framework has high-level domains that consist of several lower-level components, requirements, or assessment factors. The level of granularity internal audit chooses should depend on factors such as the organization's risk tolerance and regulatory expectations. 
</p><table cellspacing="0" width="100%" class="ms-rteTable-default"><tbody><tr><td class="ms-rteTable-default" style="width:100%;"><p><strong>Sample Cybersecurity Frameworks</strong><br></p><p> <a href="/2018/Documents/FFIEC-Cybersecurity-Assessment-Summary.pdf" target="_blank"><span class="ms-rteForeColor-8">Click here</span></a> to view how the FFIEC Cybersecurity Assessment can be used to measure cybersecurity maturity.</p><p> <a href="/2018/Documents/NIST-Cybersecurity-Categories.pdf" target="_blank"><span class="ms-rteForeColor-8">Click here</span></a> to view how the NIST Cybersecurity Framework can be used to measure cybersecurity maturity.</p></td></tr></tbody></table><p>Before selecting a framework, internal audit must determine whether to give management a checklist of compliance results or a report on the maturity of management's processes. Similar to a compliance audit, internal audit can use frameworks such as the one issued by the CPMI to determine whether the organization's cybersecurity measures meet the framework's requirements. On the other hand, frameworks issued by the FFIEC and NIST have maturity levels or benchmarks that need to be assessed more judgmentally (see "Sample Cybersecurity Frameworks," right). These frameworks describe a progression from informal to innovative responses, indicating how well risk-informed decisions are being managed. The decision to report on compliance or maturity will drive the overall cybersecurity audit plan. </p><p>In assessing the various frameworks, internal audit should use a risk-based approach to determine its audit scope. Not every requirement or assessment factor may be applicable for the organization. Current risk management practices, the threat landscape, legal and regulatory requirements, and organizational challenges should play a part in internal audit's assessment. 
However, when building its audit plan and scope, internal audit should ensure anything that is out of scope is documented so the department can justify its approach to senior management and other stakeholders. This practice will help confirm that audit coverage is complete and right for the organization. </p><h2>Applying the Framework</h2><p>The framework internal audit selects will provide the guidance necessary to ask management the appropriate questions. It also can lead to greater understanding of how IT security teams are managing technology risks, including risks from new technologies. </p><p>Conducting walkthroughs with the IT and security functions' management will help auditors understand the controls that mitigate the organization's risks. Mapping these controls to the cybersecurity framework can ensure internal audit coverage is complete and considers the various locations, tools, and centralized vs. decentralized processes. Once internal audit has identified the organization's cybersecurity controls, the mapping exercise will document that the audit scope is complete and thorough. It also can provide evidence that internal audit understands the organization's security environment. </p><p>The next step is establishing internal audit's testing strategies. An inherent risk within every audit is that tests will not identify the material issues that may exist in the control environment. To mitigate this risk, auditors should ensure the test objectives detailed in the industry framework are tied into their audit program. If internal audit is leveraging a framework that has specific requirements, it can develop testing strategies to ascertain whether the current controls are meeting these requirements. 
If the purpose of the audit is to assess the organization's level of cybersecurity maturity, testing strategies will need to incorporate the framework's various maturity components to determine the measurability and repeatability of the key controls. </p><p>The good news is the current cybersecurity frameworks have the necessary details to help drive these assessments. In certain instances, internal auditors will need to judge whether the correct ratings are being reported. In an organization with strict risk and control requirements, management may find it more meaningful for internal audit to assess the maturity level of the security organization and identify any potential security gaps. This can determine whether the organization is meeting its cybersecurity goals. </p><p>Organizations that have recently implemented a more formal security department can use a framework that has specific requirements to develop a benchmark for the new function. This benchmark can help the organization begin meeting the baseline maturities of the other frameworks before internal audit performs a detailed maturity assessment.</p><h2>Validating Cyber Controls</h2><p>Basing their internal audit work on a cybersecurity framework can enable internal auditors to understand their organization's security landscape and validate that appropriate controls are in place to protect the organization. Moreover, it can enable regulators to leverage internal audit's knowledge and workpapers in assessing whether the organization complies with cybersecurity regulations.</p><p>After reviewing different frameworks, internal auditors can identify new cybersecurity requirements and explain in detailed steps how the organization can reach a higher level of cybersecurity maturity. 
Additionally, by performing an extensive cybersecurity review, auditors can have more meaningful conversations with senior management in the audit, information security, and IT functions to address cybersecurity risks and controls.</p>Daniel Pokidaylo
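The control-mapping exercise described in the article above can be prototyped in a few lines. This sketch is illustrative only: the five function names follow the NIST Cybersecurity Framework core, but the control names and their mappings are hypothetical examples, not drawn from any real audit.

```python
# Sketch: map internal audit's documented controls to high-level framework
# functions and report any function left uncovered. The function names
# follow the NIST CSF core; the controls and mappings are invented examples.
FRAMEWORK_FUNCTIONS = ["Identify", "Protect", "Detect", "Respond", "Recover"]

control_mapping = {
    "Asset inventory review": ["Identify"],
    "Firewall rule audit": ["Protect"],
    "SIEM alert triage": ["Detect", "Respond"],
}

def coverage_gaps(mapping, functions):
    """Return framework functions with no mapped control."""
    covered = {f for funcs in mapping.values() for f in funcs}
    return [f for f in functions if f not in covered]

print(coverage_gaps(control_mapping, FRAMEWORK_FUNCTIONS))  # ['Recover']
```

A nonempty result tells the audit team which framework domain lacks documented controls before fieldwork begins, which is exactly the evidence of scope completeness the article recommends retaining.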
Internal Auditors: More Than Cybersecurity Police<p>New guidance announced by the U.S. Securities and Exchange Commission last week is raising the bar on how publicly traded companies report on their handling of one of the top challenges facing every organization — cybersecurity.</p><p>The new cyber-risk guidance, an evolution of guidance first released by the regulator in 2011, boosts reporting requirements in various ways, from disclosures about board involvement in cyber-risk oversight to enhancing internal reporting procedures that more effectively determine when cyber issues rise to the level of materiality and, therefore, should be reported publicly. The new guidelines inevitably will create new compliance challenges and, with that, additional need for internal audit to provide assurance on those compliance efforts.</p><p>The new U.S. rules, along with the upcoming deadline to meet strict European Union guidelines on data protection, are high-profile examples of where internal audit can provide important assurance on information technology (IT). </p><p>But it is important, indeed crucial, for organizations to understand that management of cyber risks and data protection are only part of the overall IT governance picture and that internal audit can and should play a larger role than simply acting as the cybersecurity police.</p><p>A recently published IIA <a href="">Global Technology Audit Guide (GTAG)</a> provides direction and insight on internal audit's approach to auditing IT governance. The GTAG's executive summary captures the benefits of strong IT governance and describes how proper IT governance can help organizations achieve their goals.</p><p>From the GTAG executive summary:</p><p><span class="ms-rteStyle-BQ">"Effective IT governance contributes to control efficiency and effectiveness, and allows the organization's investment in IT to realize both financial and nonfinancial benefits. 
Often when controls are poorly designed or deficient, a root cause is weak or ineffective IT governance." </span></p><p>The benefits of effective IT governance are significant. In addition to aligning IT strategies with organizational objectives, it helps identify and properly manage risks; optimizes IT investments to deliver value; defines, measures, and reports on IT performance using meaningful metrics; and helps manage IT resources.</p><p>Sound IT governance helps organizations address IT challenges, such as the growing complexity of IT environments, growing use of data to make business decisions, and, as previously discussed, the growing number of laws and regulations associated with the threat of cyberattacks.</p><p>As with all governance issues, internal audit is uniquely positioned to give management and the board a clear-eyed assessment of the effectiveness and efficiency of the processes and structures that make up IT governance.</p><p>The GTAG provides valuable insights on how responsibilities of multiple governance structures within the organization can overlap. For example, corporate governance oversees conformance processes and is involved in compliance, while business governance oversees performance processes.</p><p>The key is for internal audit to examine — and to help management and the board understand — the interplay among all three governance structures and not view IT governance as somehow separate and apart. A key message from the GTAG captures this well:</p><p><span class="ms-rteStyle-BQ">"Alignment of organizational objectives and IT is more about governance and less about technology. Governance assures alternatives are evaluated, execution is appropriately directed, and risk and performance are monitored."</span></p><p>The GTAG provides internal auditors the tools and techniques to build work programs and perform engagements involving IT governance. 
These include a step-by-step description of engagement planning, from understanding the context and purpose of the engagement to reporting results. Additionally, five appendices provide related IIA standards and guidance, a glossary of key terms, a sample internal controls questionnaire, a risk and controls matrix, and a list of additional resources.</p><p>It is important to emphasize that having a well-developed IT governance audit program in place will help integrate IT into the overall governance strategy and take the mystery out of IT, a mystery that often contributes to poor IT controls. It also will help position organizations to respond quickly and efficiently to changes in regulations or IT-related risks.</p><p>The current scramble to meet upcoming European Union rules on data protection suggests that not enough organizations are taking a comprehensive approach to IT governance. Indeed, those troubles were clearly reflected in an August survey by DocsCorp, reported in <a href="">The Current State of GDPR Readiness</a>. The survey found 43 percent of respondents from Europe and the United Kingdom identified financial penalties for noncompliance as their biggest concern with the new rules. In Canada and the United States, the survey found 73 percent of respondents had yet to start preparing for the new rules and 54 percent were unaware of the May 25 compliance deadline.</p><p>I encourage every chief audit executive to download and review the new GTAG and discuss IT governance with their management and boards. Providing an accurate and unbiased assessment of how IT operates within the organization is another example of where internal audit can add value and help organizations achieve their goals.</p><p>As always, I look forward to your comments.</p>Richard Chambers
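The GTAG appendices mentioned above include a sample risk and controls matrix. The sketch below shows one minimal way such a matrix might be represented and queried during engagement planning; the risks, controls, and ratings are invented examples, not the GTAG's actual content.

```python
# Sketch of a tiny risk-and-controls matrix (RCM). Rows and ratings are
# hypothetical; a real RCM would also track control owners, test
# procedures, and test results.
rcm = [
    {"risk": "IT strategy not aligned with business objectives",
     "control": "IT steering committee reviews the project portfolio quarterly",
     "rating": "high"},
    {"risk": "IT performance not monitored",
     "control": "IT KPIs reported to the board each quarter",
     "rating": "medium"},
]

def controls_for_rating(matrix, rating):
    """List the controls tied to risks at a given rating."""
    return [row["control"] for row in matrix if row["rating"] == rating]

print(controls_for_rating(rcm, "high"))
```

Filtering by rating lets the engagement plan put the highest-rated risks, and the controls that mitigate them, at the front of the testing queue.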
The Runaway Threat of Identity Fraud<p>Just a reminder: The European Union's General Data Protection Regulation (GDPR) takes effect on May 25. The new regulation enacts strict rules requiring organizations to protect consumer data, and it applies to any organization worldwide that gathers data on EU consumers. The aim is to protect the privacy of consumers and to combat identity theft and fraud.</p><p>Now here's another reminder: Identity fraud is getting worse. In the U.S., 16.7 million consumers were victims of identity fraud in 2017, up 8 percent from 2016, according to Javelin Strategy &amp; Research's <a href="" target="_blank">2018 Identity Fraud Study</a>. That's one out of every 15 U.S. consumers. Javelin surveyed 5,000 U.S. adults for the study.</p><p>What's the bottom line for internal auditors and their organizations? It's time to get serious about protecting consumer data. </p><p>"2017 was a runaway year for fraudsters, and with the amount of valid information they have on consumers, their attacks are just getting more complex," says Al Pascual, senior vice president and research director at San Francisco-based Javelin.</p><p>The Javelin report makes a distinction between identity theft and identity fraud. Identity theft is unauthorized access to personal information, such as through a data breach. Identity fraud happens when that personal information is used for financial gain.</p><h2>A New Target</h2><p>The nature of identity theft and fraud shifted in 2017, the report notes. For the first time, more Social Security numbers were stolen than credit card numbers. Last year's massive Equifax hack was the most glaring example. Those Social Security numbers make it easy for criminals to open accounts in a victim's name or to take over their existing accounts. </p><p>Javelin says account takeover was one of two drivers of identity fraud last year, along with existing noncard fraud. 
Account takeover incidents tripled, and associated losses rose 120 percent over 2016 to $5.1 billion. This type of fraud is particularly costly for consumers, who spend on average $290 and 16 hours to resolve incidents.</p><p>Small wonder then that consumers "shift the perceived responsibility for preventing fraud from themselves to other entities, such as their financial institution or the companies storing their data," as Javelin's press release notes. Respondents rate security breaches at companies as the top identity-related threat, with 63 percent saying they are "very" or "extremely" concerned about such incidents. Nearly two-thirds of victims say breach notifications don't protect them and are just a way for organizations to avoid legal trouble. </p><h2>Going Online</h2><p>Another trend is that identity fraud has moved online in response to the introduction of EMV chip cards in the U.S. Credit and bank cards with these chips make it harder for fraudsters to use stolen cards in person, but they still can be used online, where many people shop. Indeed, card-not-present fraud is 81 percent more likely than point-of-sale fraud, Javelin reports.</p><p>These frauds are becoming more sophisticated, too, according to Javelin. For example, fraudsters opened intermediary accounts in the names of 1.5 million victims of existing card frauds. 
Such accounts include email payment services such as PayPal or accounts with online merchants.</p><h2>Protecting Consumers</h2><p>Javelin's recommendations for preventing identity fraud focus more on what consumers can do to protect themselves, including:</p><ul><li>Using two-factor authentication.</li><li>Securing devices.</li><li>Putting a security freeze on credit reports to prevent accounts from being opened.</li><li>Signing up for account alerts.</li><li>Setting controls to prevent unauthorized online transactions.</li></ul><p>Such vigilance can help, but consumers expect financial institutions, retailers, and others they do business with to protect their information. Now they have a powerful ally in the GDPR, which puts responsibility squarely on businesses.</p><p>The GDPR requires organizations to provide a reasonable level of protection for personal data and mandates that they notify data protection authorities within 72 hours when consumer records have been breached. Compare that with some recent U.S. breaches in which several weeks passed between when the incident was discovered and the time when the organization disclosed it. </p><p>GDPR regulators can harshly punish organizations that don't comply. Fines can run up to 4 percent of an organization's annual turnover or €20 million ($24.6 million), whichever is greater. If protecting customers' personal data isn't a priority in itself, the potential financial penalties should raise the stakes for organizations.</p>Tim McCollum
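The GDPR figures cited in the article above lend themselves to a quick sanity check. This sketch (a rough illustration, not legal advice; the discovery time and turnover amount are made-up examples) captures the 72-hour notification window and the fine ceiling of the greater of 4 percent of annual turnover or €20 million:

```python
from datetime import datetime, timedelta

def notification_deadline(discovered_at):
    """Data protection authorities must be notified within 72 hours."""
    return discovered_at + timedelta(hours=72)

def max_fine_eur(annual_turnover_eur):
    """Fine ceiling: the greater of 4% of annual turnover or 20 million euros."""
    return max(0.04 * annual_turnover_eur, 20_000_000)

# A breach discovered the morning the regulation takes effect:
print(notification_deadline(datetime(2018, 5, 25, 9, 0)))  # 2018-05-28 09:00:00
print(max_fine_eur(1_000_000_000))  # 40000000.0 for a 1 billion euro turnover
```

Even this back-of-the-envelope arithmetic shows why the fine ceiling scales with the organization: for any firm with turnover above €500 million, the 4 percent prong exceeds the €20 million floor.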
The Rising Tide of Cyber Risks<p>Large-scale cyberattacks rank third in likelihood among global risks identified by the World Economic Forum's <a href="" target="_blank">Global Risks Report 2018</a>. Released this month ahead of the forum's annual gathering of world and business leaders in Davos, Switzerland, the survey report predicts a heightened global risk environment, with the tentacles of cyber threats factoring into business and geopolitical risks. Think cyberwarfare and attacks on major companies, banks, and markets.</p><p>"Geopolitical friction is contributing to a surge in the scale and sophistication of cyberattacks," says John Drzik, president of Global Risk and Digital with insurer Marsh, in a press release accompanying the report. That risk continues to grow for businesses, as well, even as they become more aware of cyber threats, Drzik points out. "While cyber risk management is improving, business and government need to invest far more in resilience efforts" to avoid protection gaps. </p><p>Dire warnings about cyber threats are pushing boards to reconsider their business plans. In EY's latest <a href="" target="_blank">Global Information Security Survey</a>, 56 percent of C-suite respondents say the increased impact of cyber threats and vulnerabilities has led their organization to change or plan to change business strategies. Only 4 percent say they have fully considered the security issues arising from their current strategy.</p><p>It's not the large-scale attacks envisioned by the World Economic Forum report that worry the nearly 1,200 respondents to the EY survey. It's the less sophisticated attackers that have targeted their organizations. "The most successful recent cyberattacks employed common methods that leveraged known vulnerabilities of organizations," says Paul van Kessel, cybersecurity leader for EY's Global Advisory. 
</p><p>Couple that with new technologies and increased connectivity, and organizations are facing more vulnerabilities than before, he notes. As they look to transform their businesses, organizations need to assess their digital environment "from every angle to protect their businesses today, tomorrow, and far into the future," he says.</p><h2>A Question of Money</h2><p>Executives clearly see a need for more resources to face cyber threats. Although 59 percent of respondents say their cybersecurity budgets increased in 2017, 87 percent say they need to allocate as much as 50 percent more. Twelve percent expect more than a 25 percent increase this year. </p><p>For many organizations, it might take a major breach for them to make significant cybersecurity investments, respondents report. Three-fourths of respondents say an incident that caused damage would result in a higher cybersecurity outlay. Conversely, nearly two-thirds say a less damaging attack would not lead to an increase.</p><h2>Three Levels of Attack</h2><p>Budgets aside, respondents acknowledge the vulnerabilities and threats are rising. Chief among the vulnerabilities are employees who aren't following good cybersecurity practices. Malware and phishing far outpace other threats. </p><p>In the face of increased threats, resilience may be the best way for organizations to fight back. "To get there, the organization needs to understand the relationship between cyber resilience and the objectives of the business, as well as the nature of the risks it is facing and the status of the current safeguards," the EY report says. 
"It must also assess how much risk it is prepared to take and define an acceptable loss."</p><p>To become more resilient, the EY report notes that organizations need to take steps to address three levels of attack: common, advanced, and emerging.</p><p> <strong>Common.</strong> Although the vast majority of attacks target known weaknesses, three-fourths of respondents say their organization's ability to identify vulnerabilities is immature or moderately mature. Twelve percent lack a breach detection program, and 35 percent say their data protection policies are ad hoc or don't exist. </p><p>To defend against common threats, EY proposes five components:</p><ul><li>Talent-centric, with everyone in the organization responsible for cybersecurity.</li><li>Strategic and innovative, with cybersecurity embedded into decision-making.</li><li>Risk-focused, with "well-governed risk alignment."</li><li>Intelligent and agile, to detect and respond to threats timely.</li><li>Resilient and scalable, to minimize disruptions and grow with the business.</li></ul><p> <strong><br></strong></p><p> <strong>Advanced.</strong> These sophisticated attacks target unknown or complex vulnerabilities, and are carried out by organized crime groups, cyber terrorists, and nation states. To respond to such attacks, the EY report recommends organizations centralize cybersecurity activities within a security operations center (SOC). This center should focus on protecting the organization's most valuable assets, defining normal operating conditions as a basis for identifying unusual activity, gathering threat intelligence, and carrying out "active defense" missions to identify hidden intruders.</p><p> <strong>Emerging.</strong> These unanticipated attacks are made possible by advancing technologies. Responding to them requires agility to imagine the attacks that could be possible and act quickly when they happen, the report notes. 
</p><h2>In Case of Emergency</h2><p>Beyond these measures, the EY report says organizations need a cyber breach response plan that automatically springs into action when an incident occurs. The cybersecurity function plays a part, but the plan also involves business continuity planning, compliance, insurance, legal, and public relations. This is an area where many respondents fall short. Nearly 70 percent have a formal incident response capacity, but problems arise when drilling down to specifics. </p><p>Communication is a glaring problem, with 43 percent saying their organization doesn't have a communication strategy to respond to attacks. Just 56 percent say they would notify the media within a month of an incident that compromised data. That could prove costly, with the European Union's General Data Protection Regulation set to take effect in May. Organizations that fail to respond promptly to data breaches could face tangible penalties beyond the damage caused by attacks. </p>Tim McCollum
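The SOC practice described above, defining normal operating conditions as a basis for identifying unusual activity, can be illustrated with a simple statistical baseline. The metric, the sample counts, and the three-sigma threshold below are illustrative assumptions; real SOC analytics are far richer.

```python
import statistics

# Sketch: flag activity that strays from a recorded baseline. The failed-
# login counts and the three-sigma threshold are illustrative assumptions.
def is_unusual(baseline, observed, threshold=3.0):
    """True if `observed` is more than `threshold` deviations from the mean."""
    mean = statistics.mean(baseline)
    stdev = statistics.pstdev(baseline)
    if stdev == 0:
        return observed != mean
    return abs(observed - mean) / stdev > threshold

daily_failed_logins = [12, 9, 15, 11, 13, 10, 14]  # baseline week
print(is_unusual(daily_failed_logins, 240))  # True: a sudden spike
print(is_unusual(daily_failed_logins, 13))   # False: within normal range
```

The point of the sketch is the workflow, not the statistics: without a recorded baseline there is no principled way to call any observation "unusual," which is why the report treats defining normal conditions as a prerequisite.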
Fundamentals of a Cybersecurity Program<p>Recent major data breaches at Equifax and Deloitte are reminders of the dangers of failing to practice cybersecurity fundamentals. At Equifax, more than 143 million records were exposed, including names, addresses, Social Security numbers, and credit information. The Deloitte breach compromised hundreds of global clients' information.</p><p>Cybersecurity risk is not just an IT issue — it's a business and audit issue. Collectively, the advice information security and internal audit professionals provide to business leaders has never been more important. To partner in addressing today's cybersecurity challenges, audit and security leaders must start with a little common sense.</p><p>Take, for example, a homeowner. There are valuables in the home, so it's important that only trusted people have a copy of the house key. To be prudent, the homeowner should take an inventory of the items in the home and estimate their value so he or she knows how much needs protecting and ensures items are stored securely. The homeowner also should make sure the smoke detectors are working and set up a security monitoring service with video surveillance so he or she can be alerted and react quickly to a potential fire or break-in. </p><p>Organizations need to exercise the same principles when assessing the digital risk to customer, employee, and other company information. Auditors and security professionals should prioritize three fundamentals to help make an information security program more impactful and effective. </p><h2>1. Improve Visibility</h2><p>How can organizations protect what they can't see? Identifying the valuables, or assets, within an organization is probably the most foundational aspect of a security program, and yet it continues to be a pain point. Technical solutions can help, with the right support and funding, but asset management is a process and a discipline, not just a tool. 
</p><p>Knowing the organization's assets and their value will inform what gets monitored and how. Security monitoring solutions are improving, with richer analytics and machine-learning capabilities as well as more expansive integration. Organizations should monitor their environments around the clock. For small and mid-size organizations that lack in-house resources for such monitoring, partnering with a trusted third party or managed security service provider is an option.</p><p>Another fundamental aspect of improving visibility and monitoring is to proactively look for existing weaknesses or vulnerabilities and patch them. Failure to patch systems with the Apache Struts vulnerability led to the Equifax data breach. The vulnerability allows command injection attacks to occur because of incorrect exception handling. As a result, an unauthorized user can gain privileged user access to a web server and execute remote commands against it. This vulnerability could have been addressed by standardizing and increasing the frequency of scanning and patch cycles.</p><p>Security and audit teams can work together to ensure the right risks are being mitigated and help their business partners think about risk rather than checking off a compliance requirement. They also can partner on implementing a repeatable risk assessment process. This is no longer just a best practice or standard. It is now a matter of compliance with regulations such as the European Union General Data Protection Regulation and the New York Department of Financial Services cybersecurity regulation (23 NYCRR 500).</p><h2>2. Improve Resiliency</h2><p>Is the organization prepared to handle the inevitable and how well can it recover? Improving visibility and being notified of threats and incidents is great, but an inappropriate or untimely response can incur a much greater cost. 
The organization's ability to quickly diagnose, contain, and recover from a potential or actual data breach or privacy incident directly impacts business operations and the cost to the organization. A well-planned and tested incident response plan can reduce the overall impact and cost of the incident. </p><p>Rapid response is a must with many global and U.S. state data breach notification laws having aggressive notification timelines. One of the ways in which internal audit and information security functions can increase the speed of their investigations and response times is maintaining a good asset-management process. </p><p>Maintaining a state of preparedness is more than having a document or periodically testing the plan. It's about having a good team of people from the right areas of the organization. Security and audit teams can partner to ensure that the incident response plan has all the necessary elements in place and ensure it is being followed. Responding to a crisis requires people to work together in ways they normally do not, which makes building and maintaining good relationships essential.</p><h2>3. Improve Sensitivity</h2><p>Do the organization's employees and associates understand what is at stake with cybersecurity? Increasing sensitivity to cyber risks needs to be tied to personal relevance, because people respond better when it impacts them directly. </p><p>Recall the homeowner analogy. For some people, it may be easy to get too comfortable within their neighborhood and become desensitized to potential risks of home thefts to the point of forgetting to lock doors and windows. Or they may become too liberal about who has a copy of their house key and what they do with it. There are lessons here for employees that should prompt their response.</p><p>Social engineering, including phishing simulations and physical security, must be a regular and primary aspect of cyber risk sensitivity training programs. 
Phishing attacks aimed at stealing user login credentials cause most reported data breaches. These types of attacks can be thwarted through a more expansive use of multi-factor authentication, which is a combination of something the person knows, such as a password or PIN, along with something the person has, such as a token or smartphone. Technical controls can be effective, but they also must be accompanied by user education. As a training method, phishing simulations confirm what internal auditors and security professionals already know: There is never going to be a 0 percent click rate. However, they provide an opportunity to reiterate training content.</p><h2>Practicing Security Basics</h2><p>Shortly after the 2014 Sony hack, former President Barack Obama compared cybersecurity to a basketball game, "in the sense that there's no clear line between offense and defense. Things are going back and forth all the time." There is some truth to that. </p><p>In basketball, teams often lose because they overreact to a new play and forget the fundamentals. Coaches usually react by having teams practice basics such as passing, layups, and free throws. Similarly, organizations all have various priorities, and many of them are competing. Sometimes when it appears organizations are getting beaten by cyber risks, they need to revisit the fundamentals such as visibility, resiliency, and sensitivity. Auditors can partner with chief information security officers in this effort to ensure that the program is taking a balanced, risk-based, and business-oriented approach. </p>Jon West
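The "something the person has" factor described in the article above is commonly implemented as a time-based one-time password (TOTP, RFC 6238), the mechanism behind most authenticator apps. The following is a minimal sketch using only the Python standard library; the Base32 secret is an example value, not a real credential, and production systems would add rate limiting and clock-drift windows.

```python
import base64
import hmac
import struct
import time

# Sketch of TOTP (RFC 6238): derive a short-lived code from a shared
# secret and the current 30-second time step. The secret is an example.
def totp(secret_b32, at_time=None, digits=6, step=30):
    """Return a `digits`-long one-time code for the given moment."""
    key = base64.b32decode(secret_b32)
    now = time.time() if at_time is None else at_time
    counter = struct.pack(">Q", int(now // step))       # big-endian step count
    digest = hmac.new(key, counter, "sha1").digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # six digits, changes every 30 seconds
```

Because the code depends on both the secret (possession) and the clock, a phished password alone is not enough to log in, which is exactly why the article recommends pairing MFA with user education rather than relying on either alone.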
Innovations on the Horizon<p>Organizations are beginning to look at emerging technologies more holistically, with an eye toward coordinating them in pursuit of objectives, according to Deloitte's <a href="" target="_blank">Tech Trends 2018</a> report. These organizations aren't thinking of big data, the cloud, and other disruptive technologies as separate domains. They are looking at how the technologies can complement each other, the report finds.</p><p>They also are pushing responsibility for technology up the corporate ladder, from chief information and technology officers all the way to the CEO and board. "We now see many forward-thinking organizations approach disruptive change more strategically," says Bill Briggs, chief technology officer at Deloitte Consulting LLP. "Increasingly, they are focusing on how multiple disruptive technologies can work together to drive meaningful and measurable impact across the enterprise." Tech Trends 2018 identifies eight trends that may drive organizations over the next two years.</p><h2>Reengineering Technology</h2><p>After many years of using IT to reengineer the organization, IT departments need to reengineer themselves, the report states. Bottom-up change should focus on modernizing the organization's underlying IT infrastructure through automation and on repaying "technical debt" accrued from software design, physical infrastructure and systems, and legacy system maintenance. Top-down reengineering should focus on building a new operating model for the IT function that breaks down silos and establishes multi-skill teams aimed at delivering specific outcomes. Rather than seeking funding for specific IT needs, the report recommends IT functions budget in a way that applies resources to support strategic goals. </p><h2>No-collar Workforce</h2><p>Forget white collar and blue collar. The workforce of the future will combine people and machines working together, the report predicts. 
"As automation, cognitive technologies, and artificial intelligence gain traction, companies may need to reinvent worker roles, assigning some to humans, others to machines, and still others to a hybrid model in which technology augments human performance," the report states. The good news is automation probably will not displace most workers. Instead, people and machines each will bring specialized abilities to the equation. Organizations will need to redesign jobs and reimagine how work gets done, the report notes. </p><h2>Enterprise Data Sovereignty</h2><p>Increasingly, organizations want to make information accessible across business units, departments, and locations, the report finds. Within the next two years, many organizations will modernize their data management approach in a way that balances the need for control and accessibility. Setting data "free" will take more "modern approaches to data architecture and data governance" when making decisions about data storage, usage rights, and relationships among data, the report notes. Moreover, organizations will need to address data issues in three areas: management and architecture, global regulatory compliance, and data ownership.</p><h2>The New Core</h2><p>Discussions of disruptive technologies often overlook how technology can "fundamentally change the way work gets done" in an organization's back-office operations and systems, such as finance and the supply chain, the report states. Organizations have much to gain from connecting front-office systems to back-office operations that support pricing, product availability, logistics, and financial information. Over the next two years, the report predicts organizations will build a new core that incorporates automation, analytics, and interconnections with systems and processes. 
Instead of seeking tools to address specific tasks, organizations will look for technologies that can support complex operating networks and new ways of working, the report says.</p><h2>Digital Reality</h2><p>The report notes that organizations implementing technologies such as augmented reality, virtual reality, and immersive technology are starting to move beyond experimentation to focus on building mission-critical applications for the workplace. It suggests three design breakthroughs that may accelerate digital reality: </p><ul><li>Transparent interfaces that allow users to interact with data, software applications, and their surroundings. </li><li>Wearable augmented reality/virtual reality gear that gives people "ubiquitous access" to the internet and organizational networks. </li><li>Contextual filters that enable users to adapt their level of engagement in virtual environments — like a virtual reality mute button.</li></ul><h2>Blockchains to Blockchains</h2><p>Although many organizations are testing the waters, the report urges them to start standardizing on the technology, people, and platforms needed to build blockchain initiatives. The report predicts organizations will go from initial use cases to fully deploying production solutions, with a focus on applications that can be commercialized. It also expects organizations to integrate multiple blockchains within their value chain.</p><h2>API Imperative</h2><p>Application programming interfaces (APIs) traditionally have been an IT concern, but the report notes they are becoming a business matter. While APIs enable systems to interact, many businesses want to use them to make technology assets available for reuse enterprisewide. The ability to build and reuse APIs "is key to achieving business agility, unlocking new value in existing assets, and accelerating the process of delivering new ideas to the market," the report says. 
To do so, organizations need to find ways to make APIs known throughout the organization and to manage and control them. </p><h2>Exponential Technology Watch List</h2><p>The previous trends focus on technologies that are moving into the mainstream, but the report's final trend looks forward to future innovations and their potential impact on organizations. These "exponentials" may emerge at different times, with some coming within the next five years and others likely to take longer to arrive. That doesn't mean organizations should wait to plan for new innovations. Indeed, without the capabilities, processes, and structures needed to innovate, organizations may risk missing out on opportunities that could bring "transformative outcomes," the report concludes.</p>Tim McCollum
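The API imperative described above is, at bottom, about wrapping an existing capability so other teams can reuse it over HTTP rather than reimplementing it. A toy sketch using only the Python standard library; the "customer rating" endpoint and its response shape are invented for illustration, not taken from the report:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class RatingHandler(BaseHTTPRequestHandler):
    """Exposes a hypothetical internal 'customer rating' asset as a JSON API."""
    def do_GET(self):
        customer = self.path.rstrip("/").split("/")[-1]
        body = json.dumps({"customer": customer, "rating": "A"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging for the demo
        pass

# Serve on an ephemeral port, then call the API as another team would.
server = ThreadingHTTPServer(("127.0.0.1", 0), RatingHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
with urllib.request.urlopen(f"http://127.0.0.1:{port}/ratings/acme") as resp:
    result = json.load(resp)
server.shutdown()
print(result)  # {'customer': 'acme', 'rating': 'A'}
```

The cataloging and control the report calls for would sit on top of endpoints like this: an API gateway or registry that makes the capability discoverable and enforces who may call it.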
The Robots Are Coming ... for My Family<p>My husband and I had lunch with our 19-year-old college sophomore last weekend. He's majoring in IT. I tried to persuade him to take a look at artificial intelligence (AI) as a career option. After all, it will likely be taking over his family's jobs — and we'll need him to support us. </p><p>You see, his dad is an accountant, one of "The Five Jobs Robots Will Take First," according to <em>AdAge</em> magazine. "Robo-accounting is in its infancy," the article explains, "but it's awesome at dealing with accounts payable and receivable, inventory control, auditing, and several other accounting functions that humans used to be needed to do."</p><p>Another of the top five jobs robots will take, according to <em>AdAge</em>? His mother's. After IBM and marketing company The Drum announced last year that Watson, IBM's AI tool, had edited an entire magazine on its own, my days in publishing may, indeed, be numbered. </p><p>And, finally, there's his sister. She plans to follow in the footsteps of a long line of teachers in our family — unfortunately, it may be the end of the line. IBM's Teacher Advisor With Watson "is loaded with the lesson plans and proven strategies [needed] to teach across a variety of elementary grade levels and student abilities," reports 3BL Media. "And because it's cognitive, Teacher Advisor will get smarter — and better — with training and use." </p><p>According to Harnessing Automation for a Future That Works, a McKinsey Global Institute report, "almost every occupation has partial automation potential." The report estimates that about half of all the activities employees are paid to do in the world's workforce could be automated by adapting current technologies. </p><p>The good news, according to McKinsey, is that less than 5 percent of occupations are candidates for <em>full</em> automation. Take internal auditing, for example. 
In this month's cover story, <a href="/2017/Pages/Audit-in-an-Age-of-Intelligent-Machines.aspx">"Audit in an Age of Intelligent Machines,"</a> David Schubmehl, research director for Cognitive/AI Systems at IDC, says "There's going to be tremendous growth in AI-based auditing, looking at risk and bias, looking at data."</p><p>So maybe there's hope after all. Maybe these technologies will just supplement and enhance our jobs. Maybe they will even make us more productive. Maybe my family and the pugs won't have to move in with my son.</p><p>While I'm still the editor, I'd like to welcome Kayla Flanders, senior audit manager at Pella Corp., who joins us as the new contributing editor of "Governance Perspectives." A big thank you to Mark Brinkley for his years serving in that position. And, finally, we will be saying goodbye to the "Marks on Governance" blog at the end of December. Norman Marks' contributions to the magazine have been invaluable. In addition to his blog, he has served as a contributing editor and written numerous articles throughout the years. Norman also was a member of The IIA's Publications Advisory Committee and continues to serve on the magazine's Editorial Advisory Board. We look forward to continued collaborations.</p>Anne Millage
Audit in an Age of Intelligent Machines<p>While monitoring transactions, an alert bank data analyst noticed unusual payments from a computer manufacturer to a casino. Because casinos are heavily computerized, one would expect payments to flow to the computer company, not from it. The analyst alerted an investigative agent, who rapidly scoured websites, proprietary data stores, and dark web sources to find detailed information about the two parties. The data revealed that the computer manufacturer was facing a criminal indictment and a civil lawsuit. Meanwhile, the casino had lost its gambling license due to money laundering and had set up shop in another country. Further investigation revealed the computer manufacturer was using the casino to launder money before the company’s legal issues drove it out of business.</p><p>The bank’s data analyst was a machine learning algorithm. The investigative agent was an artificial intelligence (AI) agent.</p><table class="ms-rteTable-default" width="100%" cellspacing="0"><tbody><tr><td class="ms-rteTable-default" style="width:100%;">To learn more about internal audit's role in AI, download The IIA's <a href="" target="_blank">Artificial Intelligence: Considerations for the Profession of Internal Auditing.</a><br></td></tr></tbody></table><p>AI is all around. It’s monitoring financial transactions. It’s diagnosing illnesses, often more accurately than doctors. It’s carrying out stock trades, screening job applicants, recommending products and services, and telling people what to watch on TV. It’s in their phones and soon it will be driving their cars. </p><p>And it’s coming to organizations, maybe sooner than people realize. Research firm International Data Corp. says worldwide spending on cognitive and AI systems will be $12 billion this year. 
It predicts spending will top $57 billion by 2021.</p><p>“If you think AI is not coming your way, it’s probably coming sooner than you think it is,” says Yulia Gurman, director of internal audit and corporate security for the Packaging Corporation of America in Lake Forest, Ill. Fresh from attending a chief audit executive roundtable about AI, Gurman says AI wouldn’t have been on the agenda a year ago. Like most of her peers present, she hasn’t had to address AI within her organization yet. Now it’s on her risk assessment radar. “Internal auditors should be alerting the board about what’s coming their way,” she says.</p><h2>The Learning Algorithm</h2><p>Intelligent technology has already found a place on everyday devices. That personal assistant on the kitchen counter or on the phone is an AI. Alexa, Cortana, and Siri can find all sorts of information for people, and they can talk to other machines such as alarm systems, climate control, and cleaning robots.</p><p>Yet, most people don’t realize they are interacting with AI. Nearly two-thirds of respondents to a recent survey by software company Pegasystems say they have not interacted with AI or aren’t sure they have. But questions about the technologies they use — such as personal assistants, email spam filters, predictive search terms, recommended news on Facebook, and online shopping recommendations — reveal that 84 percent are interacting with AI, according to the What Consumers Really Think About AI report. </p><p>What makes AI possible is today’s massive availability of data and computing power, as well as significant advances in the quality of machine learning algorithms, says Pedro Domingos, a professor of computer science at the University of Washington in Seattle and author of The Master Algorithm. When AI researchers like Domingos talk about the technology, they often are referring to machine learning. 
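The machine learning Domingos refers to can be reduced to one loop: adjust a model's parameters until its predictions fit the example data, then use the fitted model on new cases. A minimal sketch of that loop, using a logistic regression trained by gradient descent; the credit card features and every number below are synthetic, invented purely for illustration:

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def train(rows, labels, lr=0.1, epochs=2000):
    """Fit logistic-regression weights by stochastic gradient descent."""
    w, b = [0.0] * len(rows[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            err = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Estimated probability of default for one borrower record."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Synthetic records: (credit utilization, missed payments); label 1 = defaulted.
X = [(0.10, 0), (0.20, 0), (0.30, 1), (0.80, 4), (0.90, 3), (0.95, 2)]
y = [0, 0, 0, 1, 1, 1]
w, b = train(X, y)
print("high-risk:", round(predict(w, b, (0.92, 3)), 3),
      "low-risk:", round(predict(w, b, (0.15, 0)), 3))
```

Nobody wrote a rule saying "high utilization plus missed payments means risk"; the weights that encode it fall out of the data, which is the sense in which the algorithm programs itself.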
Unlike other computer applications that must be written step-by-step by people, machine learning algorithms are designed to program themselves. The algorithm does this by analyzing huge amounts of data, learning about that data, and building a predictive model based on what it’s learned. For example, the algorithm can build a model to predict the risk that a person will default on his or her credit card based on various factors about the individual, as well as historical factors that lead to default. </p><table class="ms-rteTable-4" width="100%" cellspacing="0"><tbody><tr class="ms-rteTableEvenRow-4"><td class="ms-rteTableEvenCol-4" style="width:100%;"> <p> <strong>Alexa, Are You Monitoring Me?</strong><br></p><p>Between them, the world’s e-commerce, social media, and technology companies are getting to know people very well. Amazon knows their shopping habits, Apple and Google know what they search for and what questions they ask, Facebook knows what engages them online, and Netflix knows what they watch on TV.</p><p>Artificial intelligence researcher Pedro Domingos says the companies using personalization algorithms are getting to the point where they could build a good model of each of their customers. But if the data they had on those people were consolidated in one place, it would enable an algorithm to build a comprehensive model of each person. Domingos calls this the personal data model.</p><p>Imagine an AI algorithm that worked on your behalf, he says — searching for your next car, applying for jobs, and even finding you a date. “The big technology companies are in a competition to see who can do this better,” he says. “This is something that we’re going to see pick up steam in the next several years.”</p><p>Whether that is a good thing or a bad thing may depend on who controls that data. That’s something that worries John C. Havens, executive director of the IEEE Global AI Ethics Initiative. 
He says the misunderstanding and misuse of personal data is AI’s biggest risk. Despite the benefits of personalization, “most people don’t understand the depth of how their data is used by second and third parties,” he notes. </p><p>Havens says there’s a need to reorient that approach now to put people at the center of their data. Such an approach would allow people to gather copies of their data in a personal data cloud tied to an identity source, and set terms and conditions for how their data can be used. “People can still be tracked and get all the benefits,” Havens explains. “But then they also get to say, ‘These are my values and ethics, and this is how I’m willing to share my data.’ It doesn’t mean the seller will always agree, but it puts the symmetry back into the relationship.”</p><p>Similarly, Domingos sees an opportunity for a new kind of business that could safeguard a personal data model in the same way that a bank protects someone’s money and uses it on the person’s behalf. “It would need to have an actual commitment to your privacy and to always work in your best interest,” he says. “And it has to have an absolute commitment to ensuring it is secure.”</p></td></tr></tbody></table><h2>Driven by Data</h2><p>Using AI to make predictions takes huge amounts of data. But data isn’t just the fuel for AI, it’s also the killer application. In recent years, organizations have been trying to harness the power of big data. The problem is there’s too much data for people and existing data mining tools to analyze quickly. </p><p>That is among the reasons why data-driven businesses are turning to AI. Five industries — banking, retail, discrete manufacturing, health care, and process manufacturing — will each spend more than $1 billion on AI this year and are forecast to account for nearly 55 percent of worldwide AI spending by 2021, according to IDC’s latest Worldwide Semiannual Cognitive Artificial Intelligence Systems Spending Guide. 
What these industries have in common is lots of good data, says David Schubmehl, research director, Cognitive/AI Systems, at IDC. “If you don’t have the data, you can’t build an AI application,” he explains. “Owning the right kind of data is what makes these uses possible.”</p><p>Retail and financial services are leading the way with AI. In retail, Amazon’s AI-based product recommendation solutions have pushed other traditional and online retailers like Macy’s and Wal-Mart Stores Inc. to follow suit. But it’s not just the retailers themselves that are driving product recommendations, Schubmehl says. Image recognition AI apps can enable people to take a picture of a product they saw on Facebook or Pinterest and search for that product — or something similar and less expensive. “It’s a huge opportunity in the marketplace,” he says.</p><p>Meanwhile, banks and financial service firms are using AI for customer care and recommendation systems for financial advice and products. Fraud investigation is a big focus. “The idea of using machine learning and deep learning to connect the dots is something that is very helpful to organizations that have traditionally relied on experienced investigators to have that ‘aha moment,’” Schubmehl says.</p><p>That’s what happened with the casino and the computer manufacturer. “The way AI works in that scenario is to say, ‘Something is different. Let’s bring it back to the central brain and analyze whether this is risky or not risky,’” says David McLaughlin, CEO and founder of AI software company QuantaVerse, based in Wayne, Pa. “The technology is never going to accuse somebody of a crime or a regulatory violation. What it’s going to do is allow the people who need to make that determination focus in the right areas.”</p><p>Currently, IDC says automated customer service agents and health-care diagnostic and treatment systems are the applications where organizations are investing the most. 
Some of the AI uses expected to rise the most over the next few years are intelligent processing automation, expert shopping advisors, and public safety and emergency response. </p><p>Regardless of the use, Schubmehl says it’s the business units that are pushing organizations to adopt AI to advance their business and deal with potential disrupters. Because of the computing power needed, most industries are turning to cloud vendors, some of whom may also be able to help build machine learning algorithms.</p><h2>Is AI Something to Fear?</h2><p>Despite its potential, there is much fear about the risks that AI poses to both businesses and society at large. Some worry that machines will become too smart or get out of control.</p><p>There have been some well-publicized problems. Microsoft developed an AI chatbot, Tay, that, after interacting with people, started using insulting and racist language and had to be shut down. More recently, Facebook shut down an experimental AI system after its chatbots started communicating with each other in their own language, in violation of their programming. In the financial sector, two recent stock market “flash crashes” were attributed to AI applications with unintended consequences.</p><p>Respondents to the World Economic Forum’s (WEF’s) 2017 Global Risks Perception Survey rated AI highest in potential negative consequences among 12 emerging technologies. Specifically, AI ranked highest among technologies in economic, geopolitical, and technological risk, and ranked third in societal risk, according to the WEF’s Global Risks Report 2017. </p><p> <strong>Employment</strong> One of the biggest concerns is whether AI might eliminate many jobs and what that might mean to people both economically and personally. Take truck driving, one of the world’s most common occupations. More than 3 million people in the U.S. earn their living driving trucks and vans. 
Consulting firm McKinsey predicts that one-third of commercial trucks will be replaced by self-driving vehicles by 2025.<br></p><table class="ms-rteTable-default" width="100%" cellspacing="0"><tbody><tr><td class="ms-rteTable-default" style="width:100%;">​<strong>The Jobs Question</strong><br>By now, internal auditors may be asking themselves, “Is AI going to take my job?” After all, an Oxford University study rated accountants and auditors among the professionals most vulnerable to automation. Of course, internal auditors aren’t accountants. But are their jobs safe?<br><br>Actually, AI may be an opportunity, says IDC’s David Schubmehl. He says many of the manual processes internal auditors review are going to be automated. Auditors will need to check how machine learning algorithms are derived and validate the data on which they are based. And, they’ll need to help senior executives understand AI-related risks. “There’s going to be tremendous growth in AI-based auditing, looking at risk and bias, looking at data,” Schubmehl explains. “Auditors will help identify and certify that machine learning and AI applications are being fair.”<br><br>Using AI to automate business processes will create new risks for auditors to address, says Deloitte & Touche LLP’s Will Bible. He likens it to when organizations began to deploy enterprise resource planning systems, which shifted some auditors’ focus from reviewing documents to auditing system controls. “I don’t foresee an end to the audit profession because of AI,” he says. “But as digital transformation occurs, I see the audit profession re-evaluating the risks that are relevant to the audit.”</td></tr></tbody></table><p>According to the Pew Research Center’s recent U.S.-based Automation in Everyday Life survey, 72 percent of respondents are worried about robots doing human jobs. But only 30 percent think their own job could be replaced (see “The Jobs Question” at right). That may be wishful thinking. 
“However long it takes, there’s not going to be any vertical industry where there’s not the opportunity to automate humans out of a job,” says John C. Havens, executive director of the IEEE Global AI Ethics Initiative. He says that will be the case as long as businesses are measured primarily by their ability to meet financial targets. “The bigger question is not AI. It’s economics.” </p><p> <strong>Ethics</strong> With organizations racing to develop AI, there is concern that human values will be lost along the way. Havens and the IEEE AI Ethics Initiative are advocating for putting applied ethics at the front end of AI development work. Consider the emotional factors of children or elderly persons who come to think of a companion robot in the same way they would a person or animal. And who would be accountable in an accident involving a self-driving car — the vehicle or the person riding in it?<br></p><p>“The phrase we use is ‘ethics is the new green,’” Havens explains, likening AI ethics to the corporate responsibility world. “When you address these very human aspects of emotion and agency early on — much earlier than they are addressed now — then you build systems that are more aligned to people’s values. You avoid negative unintended consequences and you identify more positive opportunities for innovation.”</p><p> <strong>Privacy and Security</strong> Using AI to gather data poses privacy risks for both individuals and businesses. All those personal assistant requests, product recommendations, and customer service interactions are gathering data on people — data that organizations eventually could use to build a comprehensive model about their customers. Organizations using personalization agents must walk a fine line. 
“You want to personalize something to the point where you can get the purchase offer,” Schubmehl says, “but you don’t want to personalize it so much that they say, ‘This is really creepy and knows stuff about me that I don’t want it to know.’”<br></p><p>All that data creates a compliance obligation for organizations, as well. And it is also valuable to cyber attackers.</p><p> <strong>Output</strong> Although AI has potential to help organizations make decisions more quickly, organizations need to determine whether they can trust the AI model’s recommendations and predictions. That all depends on the reliability of the data, Domingos says. If the data isn’t reliable or it’s biased, then the model won’t be reliable either. Moreover, machine learning algorithms can overinterpret data or interpret it incorrectly. “They can show patterns,” he points out. “But there are other patterns that would do equally well at explaining what you are seeing.”<br></p><p> <strong>Control</strong> If machine learning algorithms become too smart, can they be controlled? Domingos says there are ways to control machine learning algorithms, most notably by raising or lowering their ability to fit the data, such as by limiting the amount of computation, using statistical significance tests, and penalizing the complexity of the model. <br></p><p>He says one big misconception about AI is that algorithms are smarter than they actually are. “Machine learning systems are not very smart when they are making important decisions,” he says. Because they lack common sense, they can make mistakes that people wouldn’t make. And it’s difficult to know from looking at the model where the potential for error is. His solution is making algorithms more transparent and making them smarter. “The risk is not from malevolence. It’s from incompetence,” he says. “To reduce the risk from AI, what we need to do is make the computer smarter. 
The big risk is dumb computers doing dumb things.”</p><p> <strong>Knowledge</strong> Domingos says concerns about AI’s competence apply as well to the people who are charged with putting it to use in businesses. He sees a large knowledge gap between academic researchers working on developing AI and the business employees building machine learning algorithms, who may not understand what it is they are doing. And he says, “Part of the problem is their bosses don’t understand it either.”<br></p><p> <strong>Governance</strong> A related concern is governance — specifically, whether AI can be governed or regulated at all, a question the WEF’s Global Risks Report raises. Components of AI fall under various standards bodies: industrial robots by ISO standards, domestic robotics by product certification regulations, and in some cases the data used for machine learning by data governance and privacy regulations. On their own, those pieces may not be a big risk, but collectively they could be a problem. “It would be difficult to regulate such things before they happen,” the report notes, “and any unforeseeable consequences or control issues may be beyond governance once they occur.”<br></p><h2>AI in IA </h2><p>Questions of risk, governance, and control are where internal auditors come into the picture. There are similarities between deploying AI and implementing other software and technology, with similar risks, notes Will Bible, audit and assurance partner with Deloitte & Touche LLP in Parsippany, N.J. “The important thing to remember is that AI is still computer software, no matter what we call it,” he says. One area where internal auditors could be useful, Bible says, is assessing controls around the AI algorithms — specifically whether people are making sure the machine is operating correctly.</p><p>If internal auditors are just getting started with AI, their external audit peers at the Big 4 firms are already putting it to work as an audit tool. 
Bible and his Deloitte colleagues are using optical character recognition technology called Argus to digitize documents and convert them to a readable form for analysis. This enables auditors to use data extraction routines to locate audit-relevant data within a large population of documents. </p><p>For auditors, AI speeds the process of getting to a decision point and improves the quality of the work because it makes fewer mistakes in data extraction. “You can imagine a day when you push a button and you’re given the things you need to follow up on,” Bible says. “There’s still that interrogation and investigation, but you get to that faster, which makes it a better experience for audit clients.”</p><p>QuantaVerse’s McLaughlin says internal auditors could take AI even further by applying it to areas such as fraud investigation and compliance work. For example, rather than relying on auditors or compliance personnel to catch potential anti-bribery violations, internal audit could use AI to analyze an entire data set of expense reports to identify cases of anomalous behavior that require the most scrutiny. “Now internal audit has the five cases that really need a human to understand and investigate,” McLaughlin says. “That dramatically changes the effectiveness of an internal audit department to protect the organization.” </p><p>The key there is making sure a person is still in the loop, Bible says. “The nature of AI systems is you are throwing them into situations they probably have not seen yet,” he notes. A person involved in the process can evaluate the output and correct the machine when it is wrong. </p><h2>Building Intelligence</h2><p>Bible and McLaughlin both advise internal audit departments to start with a small project before expanding their use of AI tools. That goes for the organization, as well. Organizations first will need to take stock of their data assets and get them organized, a task where internal auditors can provide assistance. 
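McLaughlin's expense-report example, in which software scans an entire data set and surfaces only the few anomalous cases for human review, can be approximated even without machine learning. A sketch using a robust z-score based on the median absolute deviation; the amounts are synthetic, and a real screen would score many features, not just totals:

```python
import statistics

def flag_anomalies(amounts, threshold=3.0):
    """Return indices of amounts more than `threshold` robust z-scores
    from the median, using median-absolute-deviation scaling."""
    med = statistics.median(amounts)
    mad = statistics.median(abs(a - med) for a in amounts)
    if mad == 0:  # all values essentially identical; nothing to flag
        return []
    return [i for i, a in enumerate(amounts)
            if abs(a - med) / (1.4826 * mad) > threshold]

# Synthetic expense-report totals; one report is wildly out of pattern.
expenses = [42.0, 55.5, 38.2, 61.0, 47.3, 4999.0, 52.8]
print(flag_anomalies(expenses))  # [5] — the 4,999.00 report
```

The median-based score is used instead of a plain mean and standard deviation because a single extreme expense would drag the mean toward itself and mask its own outlier status. The human in the loop Bible insists on then investigates only the flagged cases.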
</p><p>For audit executives such as Gurman, the objective is to get up to speed as fast as possible on AI and all its related risks, so they can educate the audit committee and the board. “There is a lot of unknown,” she concedes. “What risks are we bringing into the organization by being more efficient and using robots instead of human beings? Use of new technologies brings new risks.” ​​</p><table class="ms-rteTable-4" width="100%" cellspacing="0"><tbody><tr class="ms-rteTableEvenRow-4"><td class="ms-rteTableEvenCol-4" style="width:100%;"> <br> <p> <strong>AI in the Real World </strong></p><p>Still think artificial intelligence is science fiction? Here are some examples of how companies are putting it to use.</p><ul><li> <strong>Agriculture</strong>. Produce grower NatureSweet uses AI to examine data that it can apply to better control pests and diseases that affect crop production. The company estimates AI could help it increase greenhouse output by 20 percent annually, CNN reports. Meanwhile, equipment maker John Deere recently spent $305 million to purchase robotics firm Blue River Technology, whose AI-based equipment scans fields, assesses crops, and sprays weeds only where they are present. And Coca-Cola uses AI algorithms to predict weather patterns and other conditions that might impact crop yields for its orange juice products.</li><li> <strong>Aviation.</strong> GE Digital uses AI to cull through data collected from sensors to assess the safety and life expectancy of jet engines, including their likelihood for failure. The company estimates that a single flight can generate as much data as a full day of Twitter posts.</li><li> <strong>Finance.</strong> Machine learning enables lawyers and loan officers at JPMorgan to identify patterns and relationships in commercial loan agreements, a task that once required 360,000 man hours, Bloomberg reports. 
Bank of America uses natural language technology to extract information from voice calls that might reveal things like sales practice or regulatory issues. On the stock market, an estimated 60 percent of Wall Street trades are executed by AI, according to Christopher Steiner's book <em>Automate This</em>. </li><li> <strong>Marketing.</strong> Kraft used an AI algorithm to analyze customer preference data that helped it make changes to its cream cheese brand.</li><li> <strong>Retail.</strong> Fashion retailer Burberry applies AI-based image recognition technology to determine whether products in photographs are genuine, spotting counterfeits with 98 percent accuracy, according to a <em>Forbes Online</em> report.<br><br></li></ul></td></tr></tbody></table><p></p>Tim McCollum
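The expense-report screening McLaughlin describes — letting software narrow an entire data set down to the few cases that need a human investigator — can be illustrated with a minimal statistical sketch. This is not QuantaVerse's method; it is a simple z-score outlier check under assumed record fields (`id`, `category`, `amount`), where real systems would use trained models:

```python
# Hypothetical sketch: flag expense reports whose amounts deviate sharply
# from the norm for their expense category, so only those few cases
# reach a human investigator. Field names are illustrative assumptions.
import statistics

def flag_anomalous_expenses(reports, threshold=3.0):
    """Return reports whose amount lies more than `threshold` sample
    standard deviations from the mean of their expense category."""
    by_category = {}
    for r in reports:
        by_category.setdefault(r["category"], []).append(r)

    flagged = []
    for category, items in by_category.items():
        amounts = [r["amount"] for r in items]
        if len(amounts) < 2:
            continue  # not enough history to establish a norm
        mean = statistics.fmean(amounts)
        stdev = statistics.stdev(amounts)
        if stdev == 0:
            continue  # identical amounts; nothing stands out
        for r in items:
            if abs(r["amount"] - mean) / stdev > threshold:
                flagged.append(r)
    return flagged

# Nine routine meal expenses and one outsized one: only the
# outlier is returned for human review.
reports = [{"id": i, "category": "meals", "amount": 100.0} for i in range(9)]
reports.append({"id": 9, "category": "meals", "amount": 5000.0})
print(flag_anomalous_expenses(reports, threshold=2.5))
```

Note the design trade-off the article's "person in the loop" point implies: the routine only ranks and filters; deciding whether a flagged report is fraud, error, or a legitimate one-off remains a human judgment.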
The IT Governance Gap<p>Executives see benefits from IT governance, but many aren't doing enough about it, according to ISACA's <a href="" target="_blank">Better Tech Governance Is Better for Business</a> report. Ninety-two percent of executives surveyed report IT governance has led to better outcomes, and 89 percent say it makes the business more agile. Yet, 69 percent say organizations need a stronger alignment of IT and business goals. ISACA surveyed 732 members in 87 countries who are board members, senior executives, managers, and professionals.</p><p>"The boardroom must become hyper-vigilant in ensuring a tight linkage between business goals and IT goals, fully leveraging business technology to improve business outcomes while diligently safeguarding the organization's digital assets," ISACA CEO Matt Loeb says.</p><p>The top IT governance challenges respondents foresee in the next 12 months are cybersecurity policies and defenses (44 percent), risk management priorities (36 percent), and alignment between IT objectives and overall business objectives (35 percent). It's not surprising that cybersecurity ranked so highly, the report notes. "Boardroom worries over increased internal and external threats are so great (61 percent) that almost half (48 percent) of leadership teams have prioritized investments in cyber defense improvements over other programs, including digital transformation and cloud," it states.</p><p>Despite their concern, only 55 percent of respondents say the board and senior executives are doing all they can to protect digital assets and data records. Just 29 percent report their organization continuously assesses IT risk.</p><p>The good news is leadership teams are increasing spending on cybersecurity through security consultants (27 percent), network perimeter defense upgrades (25 percent), and cyber insurance (17 percent). 
However, most organizations aren't planning to spend more on cybersecurity and privacy-related training for employees and board members in the next 12 months.</p><p>Although respondents say boards and executives are taking greater interest in IT governance, the content of their meetings doesn't reflect that interest. Twenty-one percent of respondents say their board and senior management discuss IT risk topics such as cybersecurity and disaster recovery at every meeting, while 39 percent say they discuss them at some meetings. About one-third discuss IT risk topics only as needed.</p><p>Going forward, respondents say senior leaders must demonstrate that their organization has effective IT governance by:</p><ul><li>Ensuring alignment between IT and stakeholder needs (58 percent).</li><li>Monitoring and measuring results toward goals (39 percent).</li><li>Providing strong chairman, CEO, or executive guidance (33 percent).</li><li>Having strong engagement by business units and employees (30 percent).</li></ul><p><br></p><p>"There is much work to do in information and technology governance," Loeb acknowledges. "Committing to a boardroom with technology savvy and experience strongly represented provides the needed foundation for organizations to effectively and securely innovate through technology." </p><p><br></p>Tim McCollum
Tech Vs. Fraud<p>Organizations are adding more technology capabilities to their fraud investigation teams, according to the <a href="" target="_blank">Association of Certified Fraud Examiners'</a> (ACFE's) In-house Fraud Investigation Teams: 2017 Benchmarking Report. Building forensics and cybersecurity expertise is a big focus among the nearly 1,500 anti-fraud professionals who responded to the global survey. Forty-three percent say their organization is seeking or expects to add expertise in digital forensics to its fraud investigation team. By comparison, 39 percent say their team currently has such skills.</p><p>Additionally, 36 percent of respondents say their fraud team has cybersecurity skills. A similar percentage (37 percent) is looking to add those skills. Most fraud investigation teams aren't investigating cyber fraud and hacking, possibly reflecting a lack of expertise. Only 16 percent investigate such incidents frequently, while 27 percent investigate them occasionally. </p><p>Frauds that teams investigate frequently are:</p><ul><li>Employee embezzlement (40 percent).</li><li>Frauds committed by customers (40 percent).</li><li>Frauds committed by vendors or contractors (32 percent).</li><li>Human resources issues (30 percent).</li></ul><p> <br> </p><p>Some fraud investigators have a lot on their plates, the survey notes. Although most (51 percent) work on fewer than five cases at a time, 30 percent work on 10 or more cases concurrently. </p><p>And fraud investigations are only part of their jobs. The average team spends 56 percent of its time on investigations. The rest of the time, investigators are working in areas such as internal audit, compliance, and information security.</p><p>The high case load and demands on investigators' time would seem to call for some automated assistance, but most respondents say their organization isn't using such technologies. Just 38 percent use case management software. 
Forty-seven percent use data analytics software, including spreadsheet software, in their work. That's not because they don't know how to use it — most respondents (62 percent) say they have analytics and data mining skills on their team.</p><p>And what about the results of these investigations? Most respondents say their team substantiates the majority of cases, with 34 percent reporting that they substantiate more than three-fourths of alleged frauds. Forty-six percent of respondents say most fraud investigations result in disciplinary action, but just 17 percent say their team refers most investigations for prosecution. Indeed, 70 percent of respondents report that they refer one-fourth of cases or fewer for prosecution.</p><p>Organizations are even less likely to recover fraud losses, the report finds. Just 24 percent say they recover more than half of fraud losses, while 59 percent recover one-fourth or fewer. </p><p> <br> </p>Tim McCollum
