For artificial intelligence (AI) to reach its potential, people must be able to trust it, according to Angel Gurría, secretary general of the Organisation for Economic Co-operation and Development (OECD). Speaking this month at the
London Business School, Gurría noted that human-centered AI could increase productivity and foster "inclusive human progress." In the wrong hands, though, it could be misused.
"Ethics is the starting point," he said, "that divides what you should do and what you should not do with this kind of knowledge and information and technology."
The OECD is among several organizations and government bodies that are raising questions and issuing proposals about how AI can be implemented ethically. For most of these organizations, the key word is
trust. But calls for ethical AI may be falling on deaf ears among businesses.
Businesses Aren't Worried
In a survey of more than 5,300 employers and employees in six countries, nearly two-thirds of employers say their organization will be using AI by 2022. However, 54% of employers say they aren't concerned that their organization could use AI unethically, according to the study by Genesys, a customer-experience company in San Francisco. Similarly, 52% aren't worried that employees would misuse AI.
Just over one-fourth of these employers are concerned about future liability for "unforeseen use of AI," the company notes in a press release. Currently, only 23% of employers have a written policy for using AI and robots ethically. Among employers that lack a policy, 40% say their organization should have one.
"We advise companies to develop and document their policies on AI sooner rather than later," says Merijn te Booij, chief marketing officer at Genesys. Those organizations should include employees in the process, te Booij advises, "to quell any apprehension and promote an environment of trust and transparency."
That word again: trust.
"Trust is still foundational to business," writes Iain Brown, head of data science at SAS UK and Ireland, this month on
TechRadar. Brown says one-fourth of consumers will take action if they think an organization doesn't respect or protect their data.
Despite laws such as the European Union's (EU's) General Data Protection Regulation, consumers may expect greater transparency than current regulations stipulate — particularly where "data meets AI," Brown says. He advises asking three questions to determine whether the organization is using AI ethically:
- Do you know what the AI is doing?
- Can you explain it to customers?
- Would customers be happy if you told them?
Governments Propose Guidelines
Building ethical, trustworthy AI is at the core of several plans, guidelines, and research initiatives sponsored by governments and nongovernmental organizations. In April, the European Commission issued
Ethics Guidelines for Trustworthy Artificial Intelligence based on the idea that AI should be lawful, ethical, and robust. The OECD followed that in May by releasing principles for responsible stewardship of trustworthy AI.
The European Commission guidelines set out seven requirements for trustworthy AI:
- AI should empower people and have appropriate oversight.
- AI systems should be resilient and secure.
- AI should protect privacy and data and be under adequate data governance.
- Data, system, and AI business models should be transparent.
- AI should avoid unfair bias.
- AI should be sustainable and environmentally friendly.
- Mechanisms should be in place — including auditability — to ensure responsibility and accountability over AI.
The European Commission recently launched a
pilot test of its guidelines. It includes an online survey — open until Dec. 1 — and interviews with select public- and private-sector organizations.
Another aspect of the pilot phase is a set of recommendations for EU and national policy-makers from the European Commission's High-Level Expert Group on Artificial Intelligence. AI that respects privacy, provides transparency, and prevents discrimination "can become a real competitive advantage for European businesses and society as a whole," says Mariya Gabriel, European Commissioner for Digital Economy and Society.
Additionally, France, Germany, and Japan have raised $8.2 million to fund research into human-centered AI,
Inside Higher Ed reports. The research would focus on the democratization of AI, the integrity of data, and AI ethics.
Meanwhile in the U.S., the National Institute of Standards and Technology (NIST) has released a plan aimed at developing AI-related technical standards and tools. Such standards are needed to promote innovation as well as public trust in AI technologies, NIST says.
To those ends,
U.S. Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools (PDF) recommends bolstering AI standards-related knowledge and coordination among federal government agencies. It also calls for promoting research into how trustworthiness can be incorporated into AI standards and tools. Moreover, it advocates using public-private partnerships to develop standards and tools, and working with international parties to advance them.
Trust With a Capital "T"
These strands of AI ethics development are still in their early stages, though, and the technology is advancing well ahead of any standards. In his speech in London, the OECD's Gurría said AI can benefit society if people have the tools and the tools can be trusted. "Artificial intelligence can help us if we apply it well," he said.