OECD Releases Guidelines for Development of Trustworthy AI, Joining the Pack


Australia is among 42 countries that in late May signed up to a new set of policy guidelines for the development of artificial intelligence (AI) systems.

Yet Australia has its own draft guidelines for ethics in AI out for public consultation, and a number of other countries and industry bodies have developed their own AI guidelines, according to an account in Phys.org.

So why do we need so many guidelines, and are any of them enforceable?

The latest set of policy guidelines is the Recommendation on Artificial Intelligence from the Organisation for Economic Co-operation and Development (OECD). It promotes five principles for the responsible development of trustworthy AI. Given this comes from the OECD, it treads the line between promoting economic improvement and innovation and fostering fundamental values and trust in the development of AI.

The five AI principles encourage:

  1. inclusive growth, sustainable development and well-being
  2. human-centered values and fairness
  3. transparency and explainability
  4. robustness, security and safety
  5. accountability

These recommendations are broad and do not carry the force of laws or even rules. Instead they seek to encourage member countries to incorporate these values or ethics in the development of AI.

But what do we mean by AI?

AI is not one thing with a single application that poses singular risks or threats.

Instead, AI has become a blanket term to refer to a vast number of different systems. Each is typically designed to collect and process data using computing technology, adapt to change, and act rationally to achieve its objectives, ultimately without human intervention.

Narrow AI is good at a specific task, such as playing chess. General AI, the ultimate goal of some AI developers, aims to replace human intelligence in many tasks. It is this idea of general AI that drives many of the fears and misconceptions that surround AI.

Ethics Guidelines Are Many

Responding to these fears and a number of very real problems with narrow AI, the OECD recommendations are the latest in a series of projects and guidelines from governments and other bodies around the world that seek to instill an ethical approach to developing AI.

These include initiatives by the Institute of Electrical and Electronics Engineers, the French data protection authority, the Hong Kong Office of the Privacy Commissioner and the European Commission.

The Australian government funded CSIRO’s Data61 to develop an AI ethics framework, which is now open for public feedback, and the Australian Council of Learned Academies is yet to publish its report on the future of AI in Australia.

The Australian Human Rights Commission, together with the World Economic Forum, is also reviewing and reporting on the impact of AI on human rights.

The aim of these initiatives is to encourage or to nudge ethical development of AI. But this presupposes unethical behaviour. What is the mischief in AI?

Examples of Unethical AI

One study identified three broad potential malicious uses of AI. These target:

  • digital security (for example, through cyber-attacks)
  • physical security (for example, attacks using drones or hacking)
  • political security (for example, if AI is used for mass surveillance, persuasion and deception).

One area of concern is evolving in China, where several regions are developing a social credit system linked to mass surveillance using AI technologies.

The system can identify a person breaching social norms (such as jaywalking, consorting with criminals, or misusing social media) and debit social credit points from the individual.

When a credit score is reduced, that person’s freedoms (such as the freedom to travel or borrow money) are restricted. While this is not yet a nationwide system, reports indicate this could be the ultimate aim.

Added to these deliberate misuses of AI are several unintended side effects of poorly constructed or implemented narrow AI. These include bias and discrimination and the erosion of trust.

Read the source article in Phys.org.