People-centric Innovation<p>People seem to get lost in discussions about digital transformation, yet they are at the center of today's business innovations. Technology, customization, and sustainability are the three main drivers changing consumer behavior, according to a Euromonitor International report. Meanwhile, Gartner analysts say tomorrow's technology will be people-centric.</p><p>The ubiquity of smartphones has made technology indispensable in everyday life, Euromonitor's <a href="" target="_blank">2019 Megatrends: State of Play</a> notes. Customization builds on technologies such as artificial intelligence (AI) and 5G to reinvent how people shop. Sustainability reflects growing consumer activism about environmental impacts, the London-based research firm says. </p><p>"The change in consumer demands will contribute toward a rise in investments amongst emerging economies, driving businesses to develop innovative strategies to meet those demands," says Alison Angus, head of lifestyles at Euromonitor.</p><h2>More Than Meets the Eye</h2><p>Euromonitor predicts eight changes in consumer behavior will cause the greatest disruption across industries. By analyzing these megatrends, organizations can build long-term strategies to remain relevant, the report notes.</p><p> <strong>Connected Consumers</strong> Smartphones and other connected devices have given consumers multiple ways to interact with digital content and companies, but they also have created greater dependency on those devices. In this time-pressed environment, organizations will need to ensure that consumer interactions provide value, the report advises.</p><p> <strong>Healthy Living</strong> Consumers have a holistic interest in physical, mental, and spiritual wellness. 
The report points to growing interest this year in health-related technology services such as genetic testing and personalized nutrition analysis.</p><p> <strong>Ethical Living</strong> Almost three in 10 consumers say purchasing eco- or ethically conscious products makes them feel good, according to Euromonitor's 2019 Lifestyles Survey. Euromonitor predicts concern about the environment, sustainability, and labor practices will be one of the most significant market disruptors in the coming years.</p><p> <strong>Shopping Reinvented</strong> Connectivity gives consumers more information about products and services, so organizations need to be able to engage with them "anytime and anywhere," the report says. These consumers demand more personalization, budget-friendly experiences, and greater convenience.</p><p> <strong>Experience More</strong> Euromonitor points out that consumers are seeking experiences more than possessions. This is pushing businesses to emphasize experiences, experiment with marketing strategies, and become more consumer-centric.</p><p> <strong>Middle Class Retreat</strong> The middle class in developed nations is becoming more budget-conscious and selective about purchases, the report finds. Yet because the middle class remains vital to the consumer market, this megatrend may impact other megatrends.</p><p> <strong>Premiumization</strong> Consumers will spend more on the products and services that matter most to them, Euromonitor says. They are seeking more personalized and convenient service, and products that appeal to wellness and happiness.</p><p> <strong>Shifting Market Frontiers</strong> Euromonitor says economic power is shifting to fast-growing economies in Asia and Africa. Businesses investing in those regions will need strategies that are "sensitive to the environment and local communities."</p><h2>Building Smart Spaces</h2><p>Technology trends are revolving around people, too. 
At this month's Gartner IT Symposium in Orlando, <a href="" target="_blank">Gartner analysts identified 10 strategic technology trends</a> that may reach "a tipping point" within the next five years. All are based on the concept of creating "smart spaces" in which people and technology interact. </p><p>"Multiple elements — including people, processes, services, and things — come together in a smart space to create a more immersive, interactive, and automated experience," says David Cearley, a vice president at Gartner.</p><p> <strong>Hyperautomation</strong> Although this trend began with robotic process automation, it now involves combinations of machine learning and automation tools. In addition to tools, organizations need to understand all the steps of automation and how automation mechanisms can be coordinated, Gartner says.</p><p> <strong>Multiexperience</strong> Gartner predicts technologies such as conversational platforms, virtual reality, and augmented reality will shift the user experience by 2028. This will enable businesses to provide a multisensory experience for delivering "nuanced information."</p><p> <strong>Democratization of Expertise</strong> This trend is about giving people access to technical and business expertise without extensive training. For example, Gartner expects that data analytics, application development, and design tools will become more useful for people who don't have specialized knowledge. </p><p> <strong>Human Augmentation</strong> Once the stuff of science fiction, the use of technology to augment physical and cognitive abilities will grow over the next 10 years, Gartner predicts. An example would be a wearable device that provides access to information and computer applications.</p><p> <strong>Transparency and Traceability</strong> Privacy regulations and emphasis on protecting personal data have made transparency and traceability key elements of data ethics. 
In building these capabilities, Gartner says organizations should focus on AI, ownership and control of personal data, and ethically aligned design.</p><p> <strong>The Empowered Edge</strong> Edge computing puts information processing, content collection, and delivery closer to the sources and consumers of that information. Although currently focused on the Internet of Things, Gartner expects edge computing to expand to applications such as robots and operational systems. </p><p> <strong>Distributed Cloud </strong>This technology distributes public cloud services to other locations, in contrast to the current centralized cloud model. </p><p> <strong>Autonomous Things</strong> These physical devices use AI to automate functions once performed by people. The concept is to develop technologies that interact "more naturally with their surroundings and with people," Gartner says.</p><p> <strong>Practical Blockchain</strong> New blockchain applications such as asset tracking and identity management can increase transparency, lower costs, and settle transactions more quickly, Gartner says. </p><p> <strong>AI Security</strong> Increased use of AI for decision-making raises new security challenges such as increasing points of attack. Gartner advises focusing on protecting AI-powered systems, using AI to enhance defenses, and looking out for AI-based attacks.</p>Tim McCollum
Top Challenges of Automating Audit<p>Organizations are rapidly adopting technologies such as cloud computing, robotic process automation (RPA), machine learning, blockchain, and cognitive computing to create tomorrow’s business in today’s market. Internal audit needs to transform its processes to keep pace with these changes, and IT audit processes are an excellent place to start this transformation.</p><p>Organizations that still perform most internal audit tasks manually complicate IT governance. In this manual model, auditors have adopted many compliance laws, policies, procedures, guidelines, and standards, along with their related control objectives. Moreover, internal audit manages audit process elements such as training, standards, risk, planning, documentation, interviews, and findings separately. </p><p>An automated internal audit process can enable the audit function to link, consolidate, and integrate the planning, performance, and response steps of the audit process into a holistic approach. The process should present audit recommendations in a way that is dynamically sustainable within the organization’s integrated action plans. </p><p>Since 2012, many standards and frameworks have changed their models, procedures, and guidelines to elaborate on the role of the IT governance process. Accordingly, internal audit should redesign its processes to coincide with new, streamlined IT processes and related roles. Meanwhile, IT audit specialists should understand the interoperability among the conceptual models of IT management, governance, standards, events, audits, and planning.</p><p>Transforming audit processes comes with challenges, though. Each of these challenges can be encapsulated in a pattern of a problem and a solution, which internal audit can prioritize based on its stakeholders’ needs.</p><h2>1. 
Syncing the IT Audit Process With IT Project Planning </h2><p><strong>Problem:</strong> IT audit teams need a way to link, tailor, and update audit findings and recommendations for ongoing IT projects and action plans. This will be necessary for auditors to follow up on findings and identify who is responsible for carrying out audit recommendations. </p><p><strong>Solution:</strong> An automated IT audit system would break IT audit work into two levels — findings’ recommendations and their final conditions — encompassing all preventive, detective, and corrective controls. The recommended actions reported in audit findings should be linked, integrated, and synced with their related IT project’s nondisclosure agreements, service-level agreements, and contracts. Then, the automated IT audit system should confirm that management addressed the recommendation.</p><h2>2. Letting IT Governance Direct the IT Audit Process</h2><p><strong>Problem:</strong> The role of the IT audit team in corporate governance is important because the function can help bridge the gap between the business and IT in organizations. IT governance is a key part of corporate governance, which directs and monitors the finance, quality, operations, and IT functions. Three of these functions — finance, quality, and operations — are being transformed by innovative, technology-based processes. Thus, the problems are how the board and executives will design and implement a corporate governance system and how the IT governance process will be automated. </p><p><strong>Solution:</strong> Automating the IT governance process should be comprehensive and agile. In other words, the IT governance, risk, and control mapping and cascading of goals and indicators among all levels of the organization must be user-friendly. 
To have an agile audit function, though, these maps and cascades should be tailored based on the types of governance roles such as the board, executives, internal auditors, chief information officer, and IT manager. </p><p>The internal audit function also should map key performance indicators based on the control objectives of various regulations, standards, and frameworks into its goals. These governance requirements include frameworks from The Committee of Sponsoring Organizations of the Treadway Commission and the U.S. National Institute of Standards and Technology (NIST), industry requirements such as the Payment Card Industry Data Security Standard, and regulations such as the European Union’s General Data Protection Regulation and the U.S. Sarbanes-Oxley Act of 2002.</p><h2>3. Transforming IT Audit Processes to a DevOps Review</h2><p><strong>Problem:</strong> Nowadays, some nonfunctional requirements such as cybersecurity, machine learning, and blockchain are inherently becoming functional requirements. This change will have fundamental effects on the IT audit process. For example, IT auditors will need to assess cybersecurity or blockchain requirements during the organization’s system development operations (DevOps) process and change their audit program schedule to fit the DevOps schedule. This change can be a real challenge, especially for small and medium-sized audit teams that lack skills and experience with DevOps. </p><p><strong>Solution:</strong> Internal audit could solve this problem by moving to an “IT audit as an embedded DevOps review service” model. As a result, the review processes for IT governance, risk, and controls must be embedded into the DevOps life cycle. As part of this process, an automated system may provide access to metadata. For example, an auditor could set up a software robot to collect evidence about risks related to vendor lock-in, changes in vendors, and data conversion. 
Similarly, gathering cloud provider metadata through RPA can enable internal audit to respond to other cloud-based risks.</p><p>Generally, the business model must be clear, well-defined, mature, and well-documented when any kind of business, especially IT audit, wants to migrate to the cloud. The IT audit process also will become more streamlined and mature in the cloud as a system. Thus, the cloud and robotic process automation can bring an iterative business model in which the IT audit process is transformed into a cognitive computing system. This system could result in more affordable audit costs and enable IT auditors to perform more engagements each year based on automated best practices.</p><h2>4. Mitigating IT Standards’ Side Effects </h2><p><strong>Problem:</strong> Applying some IT standards is analogous to a drug interfering with other drugs and having adverse effects on a body. Without a unified medicine solution, a prescription may not provide the greatest benefits and the fewest negative effects. Likewise, internal audit should ensure the side effects of IT standards do not cause problems such as increasing compliance costs. Auditors must address two issues:</p><p>Deciding which sections of IT-related standards such as COBIT 5, ISO 27000, and NIST Special Publication 800-30 best conform with the organization’s risk management framework.</p><p>Addressing conflicts and duplications among the various standards that might result in duplicate control objectives.</p><p><strong>Solution:</strong> An automated IT audit system should use machine learning and recommendation systems to remove the similar or contradictory control objectives of IT standards. This way, the audit system can control the duplications among all of the standards’ segments and use artificial intelligence to recommend an efficient and customized set of controls. 
</p><h2>Transforming the Auditor</h2><p>For automation to overcome these challenges, internal auditors must transform themselves, as well. This is an area in which IT audit specialists can help organizations find, prioritize, and invest in the right innovations to automate IT, internal audit, and cybersecurity processes. Moreover, by identifying ways to automate IT governance, risk, and controls, internal audit can help the IT function align its operations with the organization’s governance and transformation processes.</p>Seyyed Mohsen Hashemi
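The deduplication idea in challenge 4 can be sketched in a few lines of Python. This is a minimal illustration using simple text similarity (difflib) as a stand-in for the machine-learning and recommendation systems the article envisions; the control IDs and texts below are hypothetical paraphrases, not quotations from COBIT, ISO, or NIST.

```python
from difflib import SequenceMatcher

# Hypothetical control objectives drawn from different standards;
# the texts are illustrative paraphrases, not quotations.
controls = {
    "COBIT-DSS05.04": "Manage user identities and logical access rights.",
    "ISO-A.9.2.1": "Manage user identity registration and access rights.",
    "NIST-AC-2": "Manage information system accounts and access authorizations.",
    "COBIT-BAI06.01": "Assess, prioritize, and authorize change requests.",
}

def similarity(a: str, b: str) -> float:
    """Return a 0-1 similarity ratio between two control texts."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def find_duplicates(objectives: dict, threshold: float = 0.6):
    """Flag pairs of control objectives similar enough to consolidate."""
    ids = sorted(objectives)
    pairs = []
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            score = similarity(objectives[a], objectives[b])
            if score >= threshold:
                pairs.append((a, b, round(score, 2)))
    return pairs

if __name__ == "__main__":
    for a, b, score in find_duplicates(controls):
        print(f"Possible overlap: {a} <-> {b} (similarity {score})")
```

In practice, a production system would replace the character-level ratio with semantic text analysis, but the workflow is the same: score every pair of control objectives across standards and queue high-similarity pairs for auditor review.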
Cybersecurity's Key Ally<h2>What is the relationship between IT and internal audit in cybersecurity and preparedness? </h2><p>IT is essentially the assets that cybersecurity is supposed to be protecting. Internal audit should ensure technical and nontechnical controls are in place and operating effectively. Internal audit personnel must become intimately familiar with cybersecurity and how to test the effectiveness of cyber controls. Too often, internal auditors are not technical enough to assess whether an organization’s cyber controls are adequate to protect the assets they were put in place to protect. <br></p><p>Internal audit’s most valuable role is ensuring cybersecurity functions have the resources necessary to protect the organization effectively. Whether those resources are funding, staffing, or data from the organization’s IT systems, internal audit should be a strong advocate for the cybersecurity function by raising awareness around the organization’s cybersecurity needs. Internal audit assessment results should be a major topic in C-suite briefings to ensure the cybersecurity function receives the support necessary to protect the organization.<br></p><h2>How can an organization ensure its employees do not become part of a social engineering attack?</h2><p>Employees should be trained to identify and avoid becoming a victim of social engineering as part of an effective cyber education and awareness program, where frequent simulation exercises are a core component. The results of these exercises should be communicated across the organization, and C-suite executives should be kept up to speed on how the various areas of the company score on these exercises. Some organizations have begun to factor the results into employee performance reviews. For example, if an employee continuously fails phishing tests, that employee may be subjected to extra training, or his or her yearly performance rating might be impacted. 
Regardless of the consequences, C-level support is critical to raising awareness of social engineering among employees. </p>Staff
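The exercise-scoring approach described above can be sketched in Python. This is a minimal, hypothetical example: the simulation results, department names, and the two-failure threshold for extra training are all invented for illustration.

```python
from collections import defaultdict

# Hypothetical phishing-simulation results: (employee, department, clicked_link)
results = [
    ("alice", "Finance", False),
    ("bob", "Finance", True),
    ("carol", "IT", False),
    ("dave", "Sales", True),
    ("dave", "Sales", True),
    ("erin", "Sales", False),
]

def department_failure_rates(results):
    """Compute the failure rate per department for C-suite reporting."""
    totals, fails = defaultdict(int), defaultdict(int)
    for _, dept, clicked in results:
        totals[dept] += 1
        fails[dept] += clicked  # bool counts as 0 or 1
    return {d: fails[d] / totals[d] for d in totals}

def repeat_offenders(results, threshold=2):
    """List employees who failed at least `threshold` simulations."""
    fails = defaultdict(int)
    for name, _, clicked in results:
        fails[name] += clicked
    return sorted(n for n, c in fails.items() if c >= threshold)
```

A report built from `department_failure_rates` gives executives the per-area scores the answer recommends, while `repeat_offenders` identifies candidates for additional training.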
The Business of Ethical AI<p>For artificial intelligence (AI) to reach its potential, people must be able to trust it, according to Angel Gurría, secretary general of the Organisation for Economic Co-operation and Development (OECD). Speaking this month at the <a href="" target="_blank">London Business School</a>, Gurría noted that human-centered AI could increase productivity and foster "inclusive human progress." In the wrong hands, though, it could be misused. </p><p>"Ethics is the starting point," he said, "that divides what you should do and what you should not do with this kind of knowledge and information and technology."</p><p>The OECD is among several organizations and government bodies that are raising questions and issuing proposals about how AI can be implemented ethically. For most of these organizations, the key word is <em>trust</em>. But calls for ethical AI may be falling on deaf ears among businesses.</p><h2>Businesses Aren't Worried</h2><p>In a survey of more than 5,300 employers and employees in six countries, nearly two-thirds of employers say their organization would be using AI by 2022. However, 54% of employers say they aren't concerned that the organization could use AI unethically, according to the study by Genesys, a customer-experience company in San Francisco. Similarly, 52% aren't worried that employees would misuse AI.</p><p>Just over one-fourth of these employers are concerned about future liability for "unforeseen use of AI," the company notes in a press release. Currently, only 23% of employers have a written policy for using AI and robots ethically. Among employers that lack a policy, 40% say their organization should have one.</p><p>"We advise companies to develop and document their policies on AI sooner rather than later," says Merijn te Booij, chief marketing officer at Genesys. 
Those organizations should include employees in the process, te Booij advises, "to quell any apprehension and promote an environment of trust and transparency."</p><p>That word again: trust.</p><p>"Trust is still foundational to business," writes Iain Brown, head of Data Science at SAS UK and Ireland, this month on <a href="" target="_blank"> <em>TechRadar</em></a>. Brown says one-fourth of consumers will act if they think an organization doesn't respect or protect their data. </p><p>Despite laws such as the European Union's (EU's) General Data Protection Regulation, consumers may expect greater transparency than current regulations stipulate — particularly where "data meets AI," Brown says. He advises asking three questions to determine whether the organization is using AI ethically:</p><ul><li>Do you know what the AI is doing?</li><li>Can you explain it to customers?</li><li>Would customers respond happily when you tell them?</li></ul><h2>Governments Propose Guidelines</h2><p>Building ethical, trustworthy AI is at the core of several plans, guidelines, and research initiatives sponsored by governments and nongovernmental organizations. In April, the European Commission issued <a href="" target="_blank">Ethics Guidelines for Trustworthy Artificial Intelligence</a> based on the idea that AI should be lawful, ethical, and robust. The OECD followed that in May by releasing <a href="" target="_blank">principles for responsible stewardship of trustworthy AI</a>.</p><p>The European Commission guidelines set out seven requirements for trustworthy AI: </p><ul><li>AI should empower people and have appropriate oversight. 
</li><li>AI systems should be resilient and secure.</li><li>AI should protect privacy and data and be under adequate data governance.</li><li>Data, system, and AI business models should be transparent.</li><li>AI should avoid unfair bias.</li><li>AI should be sustainable and environmentally friendly.</li><li>Mechanisms should be in place — including auditability — to ensure responsibility and accountability over AI.</li></ul><p>The European Commission recently launched a <a href="" target="_blank">pilot test of its guidelines</a>. It includes an online survey — open until Dec. 1 — and interviews with select public- and private-sector organizations. </p><p>Another aspect of the pilot phase is recommendations for EU and national policy-makers from the European Commission's High-Level Expert Group on Artificial Intelligence. AI that respects privacy, provides transparency, and prevents discrimination "can become a real competitive advantage for European businesses and society as a whole," says Mariya Gabriel, European Commissioner for Digital Economy and Society.</p><p>Additionally, France, Germany, and Japan have raised $8.2 million to fund research into human-centered AI, <a href=""> <em>Inside Higher Ed</em> reports</a>. The research would focus on the democratization of AI, the integrity of data, and AI ethics. </p><p>Meanwhile in the U.S., the National Institute of Standards and Technology (NIST) has released a plan aimed at developing AI-related technical standards and tools. Such standards are needed to promote innovation as well as public trust in AI technologies, NIST says. </p><p>To those ends, <a href="" target="_blank">U.S. Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools</a> (PDF) recommends bolstering AI standards-related knowledge and coordination among federal government agencies. 
It also calls for promoting research into how trustworthiness can be incorporated into AI standards and tools. Moreover, it advocates using public-private partnerships to develop standards and tools, and working with international parties to advance them.</p><h2>Trust With a Capital "T"</h2><p>The varying strands of AI ethics development are in the early stages, though. Meanwhile, the technology is advancing well ahead of any standards. In his speech in London, OECD's Gurría said AI can benefit society if people have the tools and the tools can be trusted. "Artificial intelligence can help us if we apply it well," he said. </p>Tim McCollum
Transforming Assurance<p>The IIA's Core Principles for the Professional Practice of Internal Auditing use the term <em>risk-based assurance</em> instead of <em>reasonable assurance</em>, which implies that there are different levels of assurance based on multiple risk factors. That creates an opportunity for internal audit to move its work to a higher level by delivering enhanced assurance to the board and management. </p><p>Enhanced assurance does not imply reductions in risk. Instead, it refers to asking better questions about the risks that matter as well as the risks that should be automated for greater efficiency. It's about developing assurance at scale to cover the breadth of operations and strategic initiatives efficiently and cost-effectively.</p><p>Computerized fraud detection is one example of delivering assurance at scale. In 2002, WorldCom internal auditor Gene Morse discovered a $500 million debit in a property, plant, and equipment account by searching a custom data warehouse he had developed. Morse's mining of the company's financial reporting system ultimately uncovered a $1.7 billion capitalized line cost entry made in 2001, according to the <em>Journal of Accountancy</em>. </p><p>This example illustrates how fraud or intentional errors can occur in limited transactions with catastrophic outcomes. Enhanced assurance techniques such as data mining can uncover these transactions, which traditional audit techniques such as discovery, stratification, and random sampling may miss. Today's technologies can enable internal audit functions to automate their operations and provide enhanced assurance, but to do so, they must reframe their strategy. </p><h2>Better Teams</h2><p>Data analytics and audit automation platforms provide internal auditors with the means to build assurance at scale whether a novice or expert. The technologies also create the opportunity to form better teams. 
</p><p>Small, focused teams are more productive than large, consensus-driven teams directed from the top down, author Jacob Morgan notes. Writing in <em>Forbes</em>, Morgan cites Amazon CEO Jeff Bezos' "two-pizza" rule: "If a team cannot be fed by two pizzas, then that team is too large." Morgan says having more people on the team increases the communication needed and bureaucracy, which can slow the team down.</p><p>Collaboration with automation can modernize the performance of small teams. Intelligent automation can integrate oversight into operations, reduce human error, improve internal controls, and create situational awareness where risks need to be managed. Automation-enabled collaboration can help reduce redundancies in demands on IT departments, as well. However, efficiency transformations often fail when projects underestimate the impact of change on people. </p><h2>The Human Element</h2><p>Many of the biggest assurance risks are related to people, but too often the weakest link is the auditing of human behavior. The 2018 IBM X-Force Threat Intelligence Index finds "a historic 424% jump in breaches related to misconfigured cloud infrastructure, largely due to human error." IBM's report assumes decisions, big or small, contribute to risks. However, the vulnerabilities at the intersection of human behavior and technology represent a growing body of risks to be addressed. </p><p>Separate studies from IBM, the International Risk Management Institute, and the U.S. Department of Defense find that human error is a key contributor to operational risk across industry type and represents friction in organizational performance. The good news is automation creates an opportunity to reduce human error and to improve insights into operational performance. Chief audit executives (CAEs) can collaborate with the compliance, finance, operations, and risk management functions to develop automation that supports each of these key assurance providers and stakeholders. 
</p><h2>The Role of Technology</h2><p>Technology enables enhanced assurance by leveraging analytics to ask and answer complex questions about risk. Analytics is the key to finding new insights hidden within troves of unexplored data in enterprise resource planning systems, confidential databases, and operations. </p><p>Ideally, the end goal is technology solutions that improve situational awareness in audit assurance. Situational awareness in auditing is not a one-size-fits-all approach. In some organizations, situational awareness involves improved data analysis; in others, it may include a range of continuous monitoring and reporting in near real-time. </p><p>Intelligent automation addresses issues with audit efficiency and quality. First, auditors spend, on average, half their time on routine processes that could be automated, improving data consistency and reducing error rates. Data governance allows other oversight groups to leverage internal audit's work, reducing redundancy of effort. </p><p>Second, smart automation leads to business intelligence. As more key processes are automated, they provide insights into changing conditions that may have been overlooked using periodic sampling techniques at points in time. </p><p>Most events are high frequency but low impact, yet auditors, IT staff, and risk and compliance professionals spend the bulk of their time chasing down these events. That leaves little time for them to focus on the real threats to the organization. Automation works best at solving high frequency events that are routine and add little value in terms of new information on known risks. Instead of focusing on the shape of risk, auditors will be able to drill down into the data to understand specific causes of risk.</p><h2>Steps to Enhanced Assurance</h2><p>Before buying automation, CAEs should answer three questions: How will automation improve audit assurance? How will automation make processes more efficient? 
How will auditors use it to improve audit judgment?</p><p>The CAE should consider automation an opportunity to raise awareness with the board and senior executives about enhanced assurance and better risk governance. To do so, internal audit must align enhanced assurance with the strategic objectives of senior executives. </p><p>To implement enhanced assurance in the internal audit function, CAEs should follow three steps:</p><ul><li>Identify the greatest opportunities to automate routine audit processes.</li><li>Prioritize automation projects during each budget cycle in coordination with the operations, risk management, IT, and compliance functions. </li><li>Consider the questions most important to senior executives: Which risks pose the greatest threat to the organization's goals? How well do we understand risk uncertainties across the organization? Do existing controls address the risks that really matter?</li></ul><h2>Assurance and Transformation</h2><p>The World Economic Forum calls today's digital transformation the fourth Industrial Revolution and forecasts that it could generate $100 trillion for business and society by 2025. Every business revolution has been disruptive, and this one will be no exception. The difference in outcomes will depend largely on how well organizations respond to change.</p><p>Forward-looking internal audit departments already are delivering enhanced assurance by strategically focusing on the roles people, technology, and automation play in creating higher confidence in assurance. Other audit functions are in the early stage of transformation. Although these audit functions will make mistakes along the way, now is the time for them to build new data analysis and data mining skills, and to learn the strengths and weaknesses of automation. As these tools become more powerful and easy to use, enhanced assurance will set a new high bar in risk governance. </p>James Bone
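The kind of data mining that surfaced the WorldCom entry can be sketched with a simple statistical outlier test. This is a minimal illustration, not the actual WorldCom method; the postings below are hypothetical figures, and a robust median-based spread is used so one huge entry cannot hide by inflating the average.

```python
import statistics

# Hypothetical monthly postings to a property, plant, and equipment
# account (in millions); the final entry is the kind of outlier that
# data mining can surface and random sampling may miss.
postings = [12.1, 9.8, 11.4, 10.9, 13.2, 10.1, 12.7, 11.0, 500.0]

def flag_outliers(amounts, z_threshold=2.0):
    """Flag amounts far from the median relative to the typical spread.

    Uses the median and the median absolute deviation (MAD) instead of
    the mean and standard deviation, so a single enormous entry does not
    mask itself by distorting the baseline.
    """
    med = statistics.median(amounts)
    mad = statistics.median(abs(x - med) for x in amounts)
    if mad == 0:  # no spread at all; nothing stands out
        return []
    # 1.4826 scales MAD to be comparable to a standard deviation
    return [x for x in amounts if abs(x - med) / (1.4826 * mad) > z_threshold]
```

Run against the sample data, only the $500 million posting is flagged. Applied continuously across an entire ledger, this is one building block of the "assurance at scale" the article describes.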
Stronger Assurance Through Machine Learning<p>By now, most internal audit functions have likely implemented rule-based analytics capabilities to evaluate controls or identify data irregularities. While these tools have served the profession well, providing useful insights and enhanced stakeholder assurance, emerging technologies can deliver even greater value and increase audit effectiveness. With the proliferation of digitization and wealth of data generated by modern business processes, now is an opportune time to extend beyond our well-worn approaches.</p><p>In particular, machine learning (ML) algorithms represent a natural evolution beyond rule-based analysis. Internal audit functions that incorporate ML beyond their existing toolkit can expect to develop new capabilities to predict potential outcomes, identify patterns within data, and generate insight difficult to achieve through rudimentary data analysis. Those looking to get started should first understand common ML concepts, how ML can be applied to audit work, and the challenges likely to arise along the way. </p><h2>What Is Machine Learning?</h2><p>ML is a branch of artificial intelligence (AI) featuring algorithms that learn from past patterns and examples to perform a specific task. How does an ML algorithm "learn," and how is this different from rule-based systems? Rule-based systems generate an outcome by evaluating specific conditions — for example, "If it is raining, carry an umbrella." These systems can be automated — such as through the use of robotic process automation — but they are still considered "dumb" and incapable of processing inputs unless provided explicit instructions.</p><p>By contrast, an ML model generates probable outcomes for "Should I carry an umbrella?" by taking into account inputs such as temperature, humidity, and wind and combining these with data on prior outcomes from when it rained and when it did not. 
Machine learning can even consider the user's schedule for the day to determine if he or she will likely be outdoors when rain is predicted. With ML models, the best predictor of future behavior is past behavior. Such systems can generate useful real-world insights and predictions by inferring from past examples. </p><p>As an analogy, most people who have built objects using a Lego set, such as a car, follow a series of rules — a step-by-step instruction manual included with the construction toys. After building the same Lego car many times, even without written instructions, an individual would acquire a reasonable sense of how to build a similar car given the Lego parts. Likewise, an ML algorithm with sufficient training — prior practice assembling the Lego car — can provide useful outcomes (build the same car) and identify patterns (relationships between the Lego parts) given an unknown set of inputs (previously unseen Lego parts) even without instructions. </p><h2>Common Concepts</h2><p>The outcomes and accuracy of ML algorithms are highly dependent on the inputs provided to them. A conceptual grasp of ML processes hinges on understanding these inputs and how they impact algorithm effectiveness.</p><p> <strong>Feature</strong> Put simply, a feature is an input to a model. In an Excel table populated with data, one data column represents a single feature. The number of features, also referred to as the dimensionality of the data, varies depending on the problem and can range up to the hundreds. If a model is developed to predict the weather, data such as temperature, pressure, humidity, types of clouds, and wind conditions comprise the model's features. ML algorithms are well-suited to such multidimensional analysis of data.</p><p> <strong>Feature Engineering</strong> In a rule-based system, an expert will create rules to determine the outcome. In an ML model, an expert selects the specific features from which the model will learn. 
This selection process is known as feature engineering, and it represents an important step toward increasing the algorithm's precision and efficiency. The expert also can refine the selection of inputs by comparing the outcomes of different input combinations. Effective feature engineering should reduce the number of features within the training data to just those that are important. This process will allow the model to generalize better, with fewer assumptions and reduced bias.</p><p> <strong>Label</strong> An ML model can be trained using past outcomes from historical data. These outcomes are identified as labels. For instance, in a weather prediction model, one of the labels for a historical input date might be "rained with high humidity." The ML model will then know that it rained in the past, based on the various temperature, pressure, humidity, cloud, and wind conditions on a particular day, and it will use this as a data point to help predict the future.</p><p> <strong>Ensemble Learning</strong> One common way to improve model accuracy is to incorporate the results of multiple algorithms. This "ensemble model" combines the predicted outcomes from the selected algorithms and calculates the final outcome using the relative weight assigned to each one.</p><p> <strong>Learning Categories</strong> The way in which an ML algorithm learns can generally be separated into two broad categories — supervised and unsupervised. Which type might work best depends on the problem at hand and the availability of labels. </p><ul><li>A <em>supervised learning</em> algorithm learns by analyzing defined features and labels in what is commonly called the training dataset. By analyzing the training dataset, the model learns the relationship between the defined features and past outcomes (labels). The resulting supervised learning model can then be applied to new datasets to obtain predicted results. 
To assess its precision, the algorithm will be used to predict the outcomes from a testing dataset that is distinct from the training dataset. Based on the results of this training and testing regime, the model can be fine-tuned through feature engineering until it achieves an acceptable level of accuracy. <br><br></li><li>Unlike supervised learning, <em>unsupervised learning</em> algorithms do not have past outcomes from which to learn. Instead, an unsupervised learning algorithm tries to group inputs according to the similarities, patterns, and differences in their features without the assistance of labels. Unsupervised learning can be useful when labeled data is expensive or unavailable; it is effective at identifying patterns and outliers in multidimensional data that, to a person, may not be obvious. </li></ul><h2>Stronger Assurance</h2><p> <img src="/2019/PublishingImages/Lee-overview-of-ML-payment-analytics.jpg" class="ms-rtePosition-2" alt="" style="margin:5px;width:600px;height:305px;" />An ML model's capacity to provide stronger assurance, compared to rule-based analysis, can be illustrated using an example of the technology's ability to identify anomalies in payment transactions. "Overview of ML Payment Analytics" (right) shows the phases of this process.</p><p>Developing an ML model to analyze payment transactions will first require access to diverse data sources, such as historical payment transactions for the last three years, details of external risk events (e.g., fraudulent payments), human resource (HR) data (e.g., terminations and staff movements), and details of payment counterparties. Before feature engineering work can start, the data needs to be combined and then reviewed to verify it is free of errors — commonly called the extract, transform, and load phase. 
During this phase, data is extracted from various source systems, converted (transformed) into a format that can be analyzed, and stored (loaded) in a data warehouse.</p><p>Next, the user performs feature engineering to shortlist the critical features — such as payment date, counterparty, and amount — the model will analyze. To refine the results, specific risk weights, ranging from 0 to 1, are assigned to each feature based on its relative importance. From experience, a real-world payment analytics model may use more than 150 features. The ability to perform such multidimensional analysis of features represents a key reason to use ML algorithms instead of simple rule-based systems.</p><p>To begin the analysis, internal auditors could apply an unsupervised learning algorithm that identifies payment patterns to specific counterparties, potentially fraudulent transactions, or payments with unusual attributes that warrant attention. The algorithm performs its analysis by identifying the combination of features that fit most payments and producing an anomaly score for each payment, depending on how its features differ from all others. It then derives a risk score for each payment from the risk weight and the anomaly score. This risk score indicates the probability of an irregular payment. </p><p>"Payment Outliers" (below right) illustrates a simple model using only three features, with two transactions identified as outliers. The unsupervised learning model generates a set of potential payment exceptions. These exceptions are followed up to determine if they are true or false. The results can then be used as labels to incorporate supervised learning into the ML model, enabling identification of improper payments with a significantly higher degree of precision. </p><p>Supervised learning models can also be used to predict the likelihood of specific outcomes. 
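The anomaly-and-risk-score idea in the preceding paragraphs can be illustrated with a toy sketch. All feature names, risk weights, and payment figures below are invented, and simple z-scores stand in for a real unsupervised algorithm; an actual model would use far more features and a proper ML technique.

```python
# Toy illustration of the scoring described above: each payment's features
# are compared against the population, per-feature deviations are combined
# into an anomaly score, and risk weights turn that into a risk score.
from statistics import mean, stdev

payments = {
    "PAY-001": {"amount": 1_050, "hour": 11},
    "PAY-002": {"amount": 980,   "hour": 10},
    "PAY-003": {"amount": 1_120, "hour": 14},
    "PAY-004": {"amount": 9_800, "hour": 3},   # unusually large, odd hour
    "PAY-005": {"amount": 1_010, "hour": 13},
}
risk_weights = {"amount": 0.9, "hour": 0.4}    # relative importance, 0 to 1

def risk_scores(payments, weights):
    # Population mean and standard deviation for each feature.
    stats = {}
    for feature in weights:
        values = [p[feature] for p in payments.values()]
        stats[feature] = (mean(values), stdev(values))
    scores = {}
    for pid, p in payments.items():
        # Anomaly per feature: absolute z-score (distance from the mean in
        # standard deviations), weighted by the feature's risk weight.
        scores[pid] = sum(
            w * abs(p[f] - stats[f][0]) / stats[f][1]
            for f, w in weights.items()
        )
    return scores

scores = risk_scores(payments, risk_weights)
flagged = max(scores, key=scores.get)
print(flagged)  # PAY-004 stands out on both amount and time of day
```

In this sketch the highest-scoring payment is the one that deviates most, across weighted features, from the rest of the population, which is the intuition behind the risk score described above.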
By training an algorithm using labels on historical payment errors, the model can help identify potential errors before they occur. For example, based on past events a model may learn that the frequency of erroneous payments is highly correlated with specific features, such as high frequency of payment, specific time of day, or staff attrition rates. A supervised learning model trained with these labels can be applied to future payments to provide an early warning for potential payment errors.</p><p>This anomaly detection approach works best on datasets that form clear groups and do not contain a large share of transactions that differ greatly from the rest of the data. The model can be extended to detect irregularities in almost any area, including expenses, procurement, and access granted to employees. </p><h2>Deeper Insights</h2><p> <img src="/2019/PublishingImages/Lee-payment-outliers.jpg" class="ms-rtePosition-2" alt="" style="margin:5px;width:540px;height:511px;" />Continuing with the payment example, an ML model developed to analyze payment transactions can be used to uncover hidden patterns or unknown insights. Examples include: </p><ul><li>Identify overpayment for services by comparing the mean and typical variance in payment amounts for each product type — such as air tickets or IT services — and highlighting all payments that deviate significantly from the mean.<br><br> </li><li>Identify previously unknown emerging needs — such as different departments paying for a new service at significantly different prices — or client types by highlighting payment outliers. This insight could allow executives to optimize the cost for acquired products and services. <br><br></li><li>Identify multiple consecutive payments to a single counterparty below a specific threshold. This analysis would help identify suspicious payments that have been split into smaller ones to potentially escape detection. 
<br><br></li><li>Identify potential favoritism shown to specific vendors by pinpointing significant groups of payments made to these vendors or related entities. </li></ul><h2>Key Challenges</h2><p>Internal auditors are likely to encounter numerous challenges when applying ML technology. Input quality, biases and poor performance, and lack of experience with the technology are among the most common.</p><p> <strong>Availability of Clean, Labeled Data</strong> For any ML algorithm to provide meaningful results, a significant amount of high-quality data must be available for analysis. For instance, developing an effective payment anomaly detection model requires at least a year of transactional, HR, and counterparty information. Data cleansing, which involves correcting and removing erroneous or inaccurate input data, is often required before the algorithm can be trained effectively. Experience shows that data exploration and data preparation often consume the greatest amount of time in ML projects. Biases in the training data that are not representative of the actual environment will adversely impact the model's output. Also, without good labels — such as labels on actual cyber intrusions — and feature engineering, a supervised learning model will be biased toward certain outcomes and may generate noisy, or meaningless, results.</p><p> <strong>Poor Model Performance and Biases</strong> Most internal audit functions that embark on ML projects will initially receive disappointing or inaccurate results from at least some of their models. Potential sources of failure may include trained models that do not generalize well, poor feature engineering, use of algorithms that are ill-suited to the underlying data, or scarcity of good quality data. </p><p>Overfitting is another potential cause of poor model performance — and one that data scientists encounter often. An ML model that overfits generates outcomes that are biased toward the training dataset. 
To reduce such biases, internal audit functions use testing data independent of the training dataset to validate the model's accuracy. </p><p>Auditors should also be cognizant of each algorithm's inherent limitations. For example, unsupervised learning algorithms may produce noisy results if the data elements are unrelated and have few or no common characteristics (i.e., no natural groups). Some algorithms work well with inputs that are relatively independent of one another but would be poor predictors otherwise.</p><p> <strong>Lack of Experience</strong> Organizations new to ML may not have examples of successful ML projects to learn from. Inexperienced practitioners can acquire confidence in their fledgling capabilities by first applying simple ML models to achieve better outcomes from existing solutions. After these initial successes, more sophisticated algorithms can be implemented in stages to improve the outcomes of these models. For instance, an ensemble learning approach can be used to improve on the first model. If successful, more advanced ML methods should then be considered. This progressive approach can also alleviate the initial skepticism often present in the adoption of new technology.</p><h2>The Future of Audit</h2><p>Machine learning technology holds great promise for internal audit practitioners. Its adoption enables audit functions to provide continuous assurance by enhancing their automated detection capabilities and achieving 100% coverage of risk areas — a potential game changer for the audit profession. The internal audit function of the future is likely to be a data-driven enterprise that augments its capabilities through automation and machine intelligence. <br></p>Ying-Choong Lee
New U.S. Security Agency's Statement of Intent<p>The U.S. federal government's new Cybersecurity and Infrastructure Security Agency (CISA) aims to be the nation's risk advisor, according to a <a href="" target="_blank">strategic intent document</a> (PDF) released this month. The CISA was established within the Department of Homeland Security in 2018 to address threats to U.S. technology and physical infrastructure.</p><p>The CISA's mission is to "lead the national effort to understand and manage cyber and physical risk to our critical infrastructure," the document notes. "The 21st century brings with it an array of challenges that are often difficult to grasp and even more difficult to address," CISA Director Christopher Krebs writes in the document. He cites risk factors such as the nation's reliance on networked technologies, nature-based threats, and technology failures. </p><p>To that end, the CISA's guiding principles are:</p><ul><li>Leadership and collaboration with infrastructure and security partners.</li><li>Risk prioritization to secure "national critical" functions underlying national security, economic security, public health and safety, and the continuity of government operations.</li><li>Results orientation to reduce risk, respond to partners' requirements, and work toward common outcomes. </li><li>Respect for national values such as civil liberties, free expression, commerce, and innovation.</li><li>Unified mission and agency to address risks in a coordinated, cross-agency manner.</li></ul><p><br></p><p>The document's subtitle, "defend today, secure tomorrow," lays out the agency's twin goals. By "defend today," the CISA means defending against urgent threats and hazards. 
The objectives are to prevent or mitigate the most significant threats to federal government networks and critical infrastructure, mitigate the impact of "all-hazards" events, ensure incident response communication, and mitigate significant supply chain and emerging threats.</p><p>The secure tomorrow goal is about strengthening critical infrastructure and addressing long-term risks. The aim is to identify and manage risks to critical infrastructure, as well as to provide technical assistance.</p><p>The CISA seeks to achieve these goals through risk analysis, risk management planning, information sharing, capacity building, and incident response. Resources for delivering these services include:</p><ul><li>Analysts, risk models, and technical alerts.</li><li>Collaborative planning teams and task forces.</li><li>Policy and governance actions.</li><li>Technical assistance teams and security advisors.</li><li>Deployed tools and sensors.</li><li>Grants and operational contracts.</li><li>Exercises and training.</li></ul><p><br></p><p>The strategic intent document lays out Krebs' priorities for the agency:</p><ul><li>China, supply chain, and 5G technologies.</li><li>Election security.</li><li>Security for soft targets such as crowded places.</li><li>Federal agency cybersecurity.</li><li>Industrial control systems such as transportation systems, telecommunication networks, industrial manufacturing plants, electric power generators, oil and natural gas pipelines, and the Internet of Things.</li></ul><p><br></p><p>Among the CISA's operations are the National Risk Management Center and the National Cybersecurity and Communications Integration Center, which provides incident response capabilities to all levels of government as well as the private sector. </p>Tim McCollum
Wrangling the Internet of Things<p>The Internet of Things (IoT) allows businesses to connect everything from the office printer to factory production lines via Wi-fi, making it an ideal tool for organizations to exploit, and for employees to use effectively. And there appears to be no limit to what IoT technology is capable of delivering. </p><p>The associated software and applications are simple to install and use on people’s smartphones and tablets, and technology heavyweights like Cisco Systems and IT analysts such as Juniper Research estimate that the number of connected IoT devices will reach 50 billion worldwide in 2020. According to research by Forrester, businesses will lead the surge in IoT adoption this year, with 85% of large companies implementing IoT or planning deployments. </p><p>But such connectivity comes at a price. As IoT usage increases, so too do the associated risks. Simple devices rely on simple security, and simple protocols can be simply ignored. </p><p>A common problem is employees simply adding devices to the network, without informing the IT department — or without the IT team noticing. For example, Raef Meeuwisse, a UK-based cybersecurity consultant and information systems auditor, says that one security technology company revealed that when installing network security detection in new customer networks, it found that up to 40% of devices logged on to the network were IoT. “That was a surprise to those organizations’ executives and their IT departments,” he says.</p><p>Such anecdotes mean internal audit has a real job at hand to ensure that IoT deployments go smoothly and that the associated benefits are delivered. 
And the task is fraught with danger: The technology is still evolving, new risks are emerging, and controls to mitigate these risks often seem to be a step behind what is actually happening in the workplace.</p><h2>Warning Signs<br></h2><p>Information experts and standards-setters such as ISACA point out that because IoT has no universally accepted definition, there aren’t any universally accepted standards for quality, safety, or durability, nor any universally accepted audit or assurance programs. Indeed, IoT comes with warning notices writ large. According to ISACA’s State of Cybersecurity 2019 report, only one-third of respondents are highly confident in their cybersecurity team’s ability to detect and respond to current cyberthreats, including IoT usage — a worrying statistic given the proliferation of IoT devices. Industry experts and hackers have demonstrated how easy it is to target IoT-enabled office security surveillance systems and turn them into spy cameras to access passwords and confidential and sensitive information on employees’ computer screens (see “Targeting the IoT Within” below for examples of other IoT vulnerabilities). </p><p>Distributed denial of service attacks (DDoS) on IoT devices — which analysts and IT experts deem the most likely type of threat — are the best example of IoT device security and governance flaws. In 2016, the Mirai cyberattack on servers at Dyn, a company that controls much of the internet’s domain-name infrastructure, temporarily stalled several high-profile websites and online services, including CNN, Netflix, Reddit, and Twitter. Unique in that case was that the outages were caused by a DDoS attack largely made up of multiple, small IoT devices such as TVs and home entertainment consoles, rather than via computers infected with malware. These devices shared a common vulnerability: They each had a built-in username and password that could be used to install the malware and re-task it for other purposes. 
The attack was the most powerful of its type and involved hundreds of thousands of hijacked devices. </p><p>“As is often the case with new innovations, the use of IoT technology has moved more quickly than the mechanisms available to safeguard devices and their users,” says Amit Sinha, executive vice president of engineering and cloud operations at cloud security firm Zscaler in San Jose, Calif. “Enterprises need to take steps to safeguard these devices from malware attacks and other outside threats.”</p><h2>Begin With Security</h2><p>Events like the Mirai attack make security a priority for internal auditors to review. Among the top IoT security concerns that experts identify are weak default and password credentials, failure to install readily available security patches, loss of devices, and failure to delete data before using a new or replacement device. The steps to rectify such problems are relatively simple, but they are “usually ignored or forgotten about,” says Colin Robbins, managing security consultant at Nottingham, U.K.-based cybersecurity specialist Nexor. </p><p>As a starter, he says, internal auditors should check that the business has a process to ensure that all IoT device passwords are unique and cannot be reset to any universal factory default value to minimize the risk of hacking. The organization should update software and vulnerability patches regularly, and devices that cannot be updated — because of age, model, or operating system — should be isolated once personal and work data has been removed from them.</p><p>“Organizations need to have conversations at the highest level of management about what IoT means to the business,” says Deral Heiland, IoT research lead at Boston-based cybersecurity firm Rapid7. Once they have done this, Heiland suggests they focus on detailed processes around security and ask key questions such as: What IoT has the organization currently deployed? Who owns it? 
How does the organization manage patches for these technologies, and how does it monitor for intrusions? What processes does the organization need for deploying new technologies?</p><h2>Technical Hygiene Standards</h2><p>Effective IoT security requires organizations to develop their own protocols and security specifications up front, Meeuwisse says. This ensures that “devices can either be integrated into particular security zones or quarantined and excluded from the possibility of getting close to anything of potential value,” he explains. </p><table class="ms-rteTable-default" width="100%" cellspacing="0"><tbody><tr><td class="ms-rteTable-default" style="width:100%;"><p><strong>Targeting the IoT Within</strong><br></p><p>In January 2017, the U.S. Food and Drug Administration issued a statement warning that certain kinds of implantable cardiac devices, such as pacemakers and defibrillators, could be accessed by malicious hackers. Designed to send patient information to physicians working remotely, the devices connect wirelessly to a hub in the patient’s home, which in turn connects to the internet over standard landline or wireless connections. Unfortunately, technicians found that certain transmitters in the hub device were open to intrusions and exploits. In a worst-case scenario, hackers could manipulate the virtual controls and trigger incorrect shocks and pulses, or even just deplete the device’s battery. Manufacturers quickly developed and deployed a software patch. 
</p><p>The case demonstrates the need for internal audit to check that Wi-fi networks are secure, that default factory settings on any connected devices are not used, and that the organization,  through the IT department, has patch management processes in place to check whether any devices have security updates that need to be installed.<br></p></td></tr></tbody></table><p>Meeuwisse adds that whether a business is manufacturing or simply installing IoT devices, having security architecture standards to ensure information security throughout the organization is aligned with business goals is a crucial first step. “Buying or designing technology before having a clear understanding of the security specification required is a dangerous path,” he says. “For any new type of IoT device, there should always be a risk assessment process in place to understand whether the device meets security requirements, needs more intensive scrutiny, or poses a significant potential risk.”</p><p>More widely, organizations need to examine “the basics” to ensure that they maintain their IT system’s “technical hygiene,” says Corbin Del Carlo, director, internal audit IT and infrastructure at financial services firm Discover Financial Services in Riverwoods, Ill. For example, Wi-fi access should be closed so only authorized and certified devices can use it, and there should be an inventory of devices that are connected to the network so the IT department knows who is using them. For additional security, IT should scan the network routinely — even daily — to check whether new devices have been added to the network and whether they have been approved. </p><p>Del Carlo also says internal auditors need to check that the organization’s IT architecture can support a potentially massive scale-up of devices wanting to access its systems and network quickly. “We’re talking about millions more devices all coming online within a year or two,” he says. 
“Can your IT system cope with that kind of increase in demand? What assurance do you have that the system won’t fail?”</p><p>Del Carlo recommends organizations draw up a shortlist of device manufacturers that are deemed secure enough and compatible with their IT architecture. “If you allow devices from any manufacturer to access the network, then you need the in-house capability to monitor the security of potentially hundreds of different makes and find security patches for them all, which can be very time-consuming,” he points out.<br></p><p>A list of approved manufacturers also can make it easier to audit whether the devices have the latest versions of security downloads. “Even if a particular manufacturer’s product proves to have vulnerabilities, it is much easier to fix the problem for all those devices than try to constantly monitor whether there are security updates for many different products made by dozens of manufacturers,” he says.</p><h2>Intrusive Monitoring</h2><p>It’s not only the organization’s security that internal auditors should consider. Auditors also should make management aware of potential privacy issues that some applications may present — especially those that feature GPS tracking, cameras, and voice recorders. “Tracking where employees are can be useful for delivery drivers, but is it necessary to track employees who are office-based?” Del Carlo asks. </p><p>An example is an IoT app that monitors how much time people spend at their desks and prompts them to take a break if they are there too long. Organizations could use that technology to monitor how frequently people are not at their desks, Del Carlo notes. “While this may catch out those who take extended lunch breaks, it may also highlight those who have to take frequent trips to the bathroom for medical conditions that they may wish to keep private,” he explains. 
“As a result, auditors should query such device usage.”</p><h2>Business Risks</h2><p>Yet while there is a vital need to make IoT security a priority, Robbins says organizations should not overlook whether management has appropriately scoped the business case for an IoT deployment, and how success or failure can be judged. “As with any other project, particularly around IT, managers can throw money at something they do not understand just because they think they need it, or because everyone else is using it,” he says. </p><p>Robbins cautions that poorly implemented IoT solutions create new vulnerabilities for businesses. “With IoT, it’s not data that is at risk, but business processes at the heart of a company,” he points out. “If these processes fail, it could lead to a direct impact on cost or revenue.”</p><p>According to Robbins, the success of IoT means a heavy — and “almost blind” — reliance on the rest of the “things” that support the technology working effectively within the supply chain. Take for example an IoT device that monitors bakery products made in an oven. That device may tell the operator that the oven temperature is 200 degrees and the baked goods have another 20 minutes of cooking time, he explains. </p><p>“But the problem is that you have no physical way of checking, or even being alerted, that the technology might be wrong or has been hacked, and that the settings and readings are incorrect,” Robbins says. “Everyone is relying on all the different parts of the supply chain — the app vendor, the cloud provider, and so on — maintaining security in a world where there are no agreed-upon standards or best practice. Talk about ‘blind faith.’” </p><p>IoT also increases the need for additional third-party and vendor risk monitoring, Del Carlo warns. This is because app developers may be collecting data from users not only to help inform design improvements but also to generate sales leads. 
</p><p>“Internal auditors need to think about the data that these vendors might be getting and how they may be using it,” Del Carlo explains. For example, developers may be exploiting user data to approach the organization’s competitors with products tailored to the competitor’s needs. “Internal auditors need to check what data developers may be collecting and why,” he advises.</p><h2>Early Best Practices</h2><p>Despite the absence of universally agreed-upon guidance for aligning IoT usage with business needs, some industry bodies have tried to promote what they consider to be either basic steps or best practice. For instance, in a series of blog posts, ISACA recommends that organizations perform pre-audit planning when considering investing in IoT solutions. It advises organizations to think about how the devices will be used from a business perspective, what business processes will be supported, and what business value is expected to be generated. ISACA also suggests that internal auditors question whether the organization has evaluated all risk scenarios and compared them to anticipated business value.</p><p>Eric Lovell, practice director for internal audit technology solutions at PwC in Charlotte, N.C., says internal audit should have a strong role in ensuring that IoT risks are understood and controlled, and that the technology is aligned to help achieve the organization’s business strategy. “Internal audit should ask a lot of questions about how the organization uses IoT, and whether it has a clear strategic vision about how it can use the technology and leverage the benefits from it,” he says.</p><p>As IoT is part of the business strategy, Lovell says internal auditors need to assess the business case for it. 
“Internal auditors need to ask management about the business benefits it sees from using IoT, such as improving worker safety, better managing assets, or generating customer insights, and how these benefits are going to be measured and assessed to ensure that they have been realized,” he advises.</p><p>Questions to ask include: What metrics does the organization have in place to gauge success or failure? Are these metrics in line with industry best practice? Are there stage gates in place that would allow the organization to check progress at various points and make changes to the scope or needs of the project? “Equally importantly, does the organization have the right people with the necessary skills, experience, and expertise to check that the technology is delivering its stated aims and is being used securely?” Lovell notes.</p><p>Lovell also says internal auditors need a seat at the table from the beginning when the organization embarks on an IoT strategy. “Like with any other project, internal audit will have less influence and input if the function joins the discussion after the project has already been planned, scoped, and started,” he explains. “Internal auditors need to make sure that they are part of those early discussions to gauge management’s strategic thinking and their level of awareness of the possible risks and necessary controls and procedures.”</p><h2>IoT’s Dynamic Risks</h2><p>Risks shift over time as technology innovations and the business and regulatory environment evolve. “It is pointless to think that the risks that you have identified with IoT technologies at the start of the implementation process will remain the same a couple of years down the line,” Lovell says. “Internal auditors need to constantly review how IoT is being used — and under what circumstances and by whom — and assess whether the technology is still fit for purpose to meet the needs of the business.” <br></p>Neil Hodge
A Change in Mindset<h3>How far have audit functions come in terms of data analytics usage?</h3><p><strong>Petersen</strong> Progressing audit analytics is a journey that doesn’t have an end, but I’m excited to hear organizations describe how they continue to progress year over year. These organizations know the direction they need to go, continue to raise the bar for themselves, and set new objectives to achieve. They face the same resource limitations many audit teams do, so they encourage all their auditors to progress, not just those assigned as the data analytics expert. </p><p><strong>Zitting</strong> Not far enough. Recently, my company’s State of the GRC Profession survey revealed 43% of professionals want to grow their data analysis skills, but those figures have been the same for years — if not decades. Leading audit teams that are willing to embrace change and take risks are indeed creating a new future by delivering and sharing successes in data analysis, advanced analytics, robotic process automation, and even machine learning/artificial intelligence; unfortunately, these leaders are the exception. They inspire us, yet other corporate functions like marketing, IT/digital transformation, security, and even risk management are leaving internal audit behind. <br></p><h3>What are examples, beyond typical usages, of analytics that auditors should be undertaking?</h3><p><strong><img src="/2019/PublishingImages/Dan-Zitting.jpg" class="ms-rtePosition-1" alt="" style="margin:5px;" />Zitting</strong> Let’s not write off the “typical usages” of data analytics, because the vast majority of audit teams aren’t even doing those. The key control areas that virtually every organization’s audit and internal control teams test are completely automatable, yet few seem to do it. 
Areas like user access, IT administrator activity (or other activity log testing), journal entry, payment, and payroll should never again be tested with anything but data analytics.</p><p>Beyond that, the universe of possibility for the data-savvy audit team is limitless. I’m seeing leading audit teams even turn analytics in on themselves — like doing textual analytics on the text of the past several years’ audit findings to indicate where risk is increasing or not being addressed. It’s incredibly impactful. I’ve also seen practitioners develop analytics that use machine learning to create “hot clusters” of employees who are at high risk of churn, or to see “hot clusters” of payments that could be bribes, money laundering, or sanction violations. </p><p><strong>Petersen</strong> How about running data analysis on the audit analytics program? Start by ascertaining how many audits contain some level of data analysis — sampling doesn’t count. Now compare that to how many should contain some analysis. I don’t know of any organizations that would find they should be doing analytics on 100% of their audits, but if they are honest, they’ll find a significant gap between those audits that could have some analytics performed and those that do.</p><p>Now that we have determined breadth of coverage, let’s determine depth of coverage. To do that, determine, for each audit that could include analytics, which analytics would ideally be performed. Internal audit should focus on the analytics it would be proud to report to the audit committee, given the risks and the audit objectives. Don’t be discouraged by the thought that internal audit can never achieve the coverage it has identified. 
Instead, plan to increase coverage each year.<br></p><h3>How can small audit functions that can’t afford a data scientist jump into data analytics?<br></h3><p><strong><img src="/2019/PublishingImages/Ken-Petersen.jpg" class="ms-rtePosition-1" alt="" style="margin:5px;" />Petersen</strong> Start with basic analytics functions. Audit leadership needs to lead the organization to continually progress the analytics being performed. Leverage those individuals in your organization who have an aptitude for analytics, and share successes, new ideas, and new ways of doing things within the team. Use familiar tools such as Excel, along with audit analytics tools that are easy to use and learn. Leverage existing audit techniques across different types of audits. For example, tests for duplicate payments, separation-of-duties violations, and several other routines apply across many types of audits. Once you’ve determined how to identify these in one audit, the same tests can be applied to other audits. Teams without a data scientist can still have a strong audit analytics program.</p><p><strong>Zitting</strong> Every audit function that can hire a single auditor can afford a person with data skills. The problem is that we accept the status quo of the short-term demands of internal audit’s stakeholders; thus, we elect to hire a “traditional” auditor over a person with technical data skills and the ability to think critically. Obviously, that is a necessity in real life, but it also illustrates that the “can’t afford” or “can’t find the skills” arguments are basically bad excuses that abdicate our responsibility as corporate leaders to evolve with the economic demands of the modern environment. Consider a complete shift in mindset. What if we were building a small data science team that had some audit skills instead of a small audit team with some data skills? 
Wouldn’t that change our perspective on staffing for a truly modern form of auditing?<br></p><h3>What skills should audit functions be looking for when hiring a data analytics expert?</h3><p><strong>Zitting</strong> Most importantly, audit functions should be looking for critical thinking skills. Technical skills in data analytics can be taught. What is difficult to teach is critical thinking, particularly as it relates to knowledge of audit process/risk assessment/internal control, knowledge of the business and its strategy/operations, and the ability to navigate corporate access challenges — access to data and executive time — by asking really smart questions. Next, look for an understanding of, and a desire to work with, an Agile mindset. Specific tools and approaches will always change, but if the candidate understands Agile methodology — minimum viable product, sprints and iteration, continuous improvement — he or she will be able to deliver business results in both the short and long term, regardless of tool preference. </p><p><strong>Petersen</strong> Communication and collaboration skills can exponentially increase the team’s analytics effectiveness. Without these skills, there is one expert off doing analytics by himself or herself. With these skills and easy-to-use analytics tools, however, the expert can guide the entire team through its analytics needs, greatly increasing the overall effectiveness of the team. When not providing this guidance, the expert can work on more complex analytical projects. This approach also increases the job satisfaction of both the expert and the other team members.<br></p><h3>What does a best-in-class audit function that is fully embedded in data analytics look like?</h3><p><strong>Petersen</strong> These teams apply quantitative analysis and measurement to their audit analytics. They do this by measuring the depth and breadth of their analytics coverage. 
They have strong leaders who promote the value of analytics and make it a part of the team’s culture. They also understand that there is no finish line; the analytics program will continually evolve and grow. Leaders of these teams incorporate all team members into the analytics process, understanding that some have a stronger aptitude for it than others but still expecting all to participate, and they set appropriate analytics goals for each. Not only are organizations like this best-in-class with respect to analytics but, as a surprise to some, they also have happier team members.</p><p><strong>Zitting</strong> The best audit organizations already are demonstrating that their core skill is data analysis. It’s the only way to get large-scale insight on risk, control, and assurance across globally dispersed organizations using constrained resources. Best-in-class audit functions don’t merely embed data analytics; they provide 90% of all the assurance they report through analytics and reserve “traditional” auditing for manual deep dives into areas of significant risk or deviation from policy, regulation, or other standards of control. For example, one of our clients moved its entire internal audit team into the core business operation and began rebuilding internal audit from scratch in the last two years. Because audit was providing so much value through its complete focus on data and analytics, the business demanded to absorb the function, and the audit committee agreed to rebuild. That’s one example of internal audit driving real value through a data-centric mindset and practice.<br></p>Staff
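<p>The duplicate-payments routine Petersen mentions is among the simplest analytics to automate. The sketch below is illustrative only and is not from the interview; the record fields and the matching key of vendor, amount, and date are assumptions, and a production routine would typically add fuzzy matching for near-duplicates.</p>

```python
# Illustrative sketch of a basic duplicate-payments test (not from the
# article). Field names and the matching key are assumptions; a real
# routine would also handle near-duplicates (e.g., slightly different dates).
from collections import defaultdict

def find_duplicate_payments(payments):
    """Group payments by (vendor, amount, date) and return any group
    containing more than one payment ID -- candidates for follow-up."""
    groups = defaultdict(list)
    for p in payments:
        groups[(p["vendor"], p["amount"], p["date"])].append(p["id"])
    return {key: ids for key, ids in groups.items() if len(ids) > 1}

payments = [
    {"id": "P-001", "vendor": "Acme", "amount": 1200.00, "date": "2019-10-01"},
    {"id": "P-002", "vendor": "Acme", "amount": 1200.00, "date": "2019-10-01"},
    {"id": "P-003", "vendor": "Beta", "amount": 450.00, "date": "2019-10-02"},
]

for key, ids in find_duplicate_payments(payments).items():
    print(key, ids)  # flags P-001 and P-002 as a candidate duplicate pair
```

<p>Once a routine like this exists for one audit, the same function can be pointed at the payment data of other audits, which is the kind of reuse Petersen describes.</p>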
Editor's Note: Fortress in the Cloud<p>Cloud computing has quickly soared to become a dominant business technology. Public cloud adoption, in fact, now stands at 91% among organizations, according to software company Flexera's State of the Cloud Survey. And it's only expected to grow from there. Analysts at Gartner say more than half of global enterprises already using the cloud will have gone all-in by 2021. </p><p>Collectively, that places a lot of responsibility for organizational data outside the enterprise. And while cloud migration can lead to significant efficiencies and cost savings, the potential risks of third-party data management cannot be ignored. Reuters, for example, recently reported that several large cloud providers were affected by a series of cyber intrusions suspected to originate in China. Victims, Reuters reports, include Computer Sciences Corp., Fujitsu, IBM, and Tata Consultancy Services. The news agency's chilling quote from Mike Rogers, former director of the U.S. National Security Agency, emphasizes the gravity of these breaches: "For those that thought the cloud was a panacea, I would say you haven't been paying attention." </p><p>As noted in this issue's cover story, <a href="/2019/Pages/Security-in-the-Cloud.aspx">"Security in the Cloud,"</a> growing use of cloud services creates new challenges for internal auditors. Writer Arthur Piper, for example, points to issues arising from the cloud's unique infrastructure and the "lack of visibility of fourth- and fifth-level suppliers." He also cites the cloud's opaque nature and rapid pace of development as potential areas of difficulty. 
Addressing these issues, he says, requires internal audit to work with a wide range of business stakeholders — especially those in IT — and to secure staff with the right type of expertise.</p><p>The need to focus on these areas is supported by a recent report from the Internal Audit Foundation, Internal Auditors' Response to Disruptive Innovation. Among practitioners surveyed for the research, a consistent theme emerged with regard to cloud computing — to be successful, internal audit should build relationships with IT before moving to the cloud. Multiple respondents also recommend bringing in personnel with specialized IT skills to facilitate the evaluation of cloud controls. Moreover, they noted the importance of evaluating not only standard internal controls in areas like data security and privacy, but soft controls, such as institutional knowledge, as well.</p><p>Of course, cloud computing is only the tip of the iceberg when it comes to challenges around disruptive technology. Among other IT innovations affecting practitioners, artificial intelligence and the Internet of Things are equally impactful. We examine each of these areas in <a href="/2019/Pages/Stronger-Assurance-Through-Machine-Learning.aspx">"Stronger Assurance Through Machine Learning"</a> and <a href="/2019/Pages/Wrangling-the-Internet-of-Things.aspx">"Wrangling the Internet of Things,"</a> respectively. And be sure to visit the <a href="/_layouts/15/FIXUPREDIRECT.ASPX?WebId=85b83afb-e83f-45b9-8ef5-505e3b5d1501&TermSetId=2a58f91d-9a68-446d-bcc3-92c79740a123&TermId=75ea3310-ffa9-45b9-9280-c69105326f09">Technology section</a> of our website for insights and perspectives on other IT-related developments affecting the profession.</p>David Salierno
