Getting Started With AI Ethics


Artificial Intelligence and the role of AI Ethics

The world of Artificial Intelligence is constantly changing, and the rate of change is not linear: it is growing exponentially, which means it is accelerating faster and faster year on year. Some would argue that it is potentially out of control and we need to put the brakes on; others suggest that we just let it run and see where it goes.

Many people fear that not all AI developments are beneficial to humankind or in society's best interest. For example, AI has the potential to be weaponised, and covert AI systems could intrude on our lives and privacy by listening to and watching everything, identifying us without our consent. There is also the potential for control of our personal data to be taken away from us. So how should AI be managed and controlled?

Let’s consider the ethical case…

Firstly, what are ethics?

Ethics are loosely linked to morals; in fact, many people use the terms interchangeably. Both ethics and morals relate to “right” and “wrong” conduct, however…

Morals tend to refer to an individual's own personal principles regarding right and wrong.

Ethics generally refer to rules provided by an external source or community, e.g., codes of conduct in the workplace or an agreed code of ethics for a profession like medicine or law.

Ethics as a field of study is centuries old and centres on questions like:

what is a ‘good’ action,

what is ‘right’,

and in some instances, ‘what is the good life’.

The role of AI Ethics

We all want beneficial AI systems that we can trust, so the achievement of trustworthy AI draws heavily on the field of ethics.

AI Ethics can be considered a sub-field of applied ethics and technology, focusing on the ethical issues raised by the design, development, implementation and use of AI technologies.

The goal of AI ethics is therefore to identify how AI can advance, or raise concerns about, the good life of individuals, whether in terms of quality of life, mental autonomy or the freedom to live in a democratic society.

It concerns itself with issues of diversity and inclusion (with regards to training data and the ends to which AI serves) as well as issues of distributive justice (who will benefit from AI and who will not).

So what is already in place?

There is already a lot of legislation in place, such as the Human Rights Act and data protection legislation, which protects our basic human rights. But this legislation is often abused by the state and private organisations alike, so why should we trust the creators of AI systems and services to operate responsibly?

AI Ethics – so who is doing what…

There are several bodies currently working on codes of ethics around Artificial Intelligence. The two key bodies are the European Union and the Future of Life Institute.

The European Union - The Ethics Guidelines for Trustworthy AI (EU founded)

European Commission Directorate-General for Communication

The European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG) has developed guidelines for trustworthy Artificial Intelligence.

It states that trustworthy AI basically has two components:

(1) it should respect fundamental rights, applicable regulation and core principles and values, ensuring an “ethical purpose” and

(2) it should be technically robust and reliable since, even with good intentions, a lack of technological mastery can cause unintentional harm.

It also considers:

Rights - A collection of entitlements that a person may have, protected by government and the courts.

Values - Ethical ideals or beliefs for which a person has an enduring preference; they shape our state of mind and act as motivators.

Principles - Fundamental, well-settled rules of law or standards of good behaviour; collectively, they are our moral or ethical standards.

The latest revision of the EU ethical guidance on AI was produced in March 2019. Here is a link to the report; you will need to create your own login.

https://ec.europa.eu/futurium/en/system/files/ged/ai_hleg_draft_ethics_guidelines_18_december.pdf

The Future of Life Institute (US founded)

At its Beneficial AI conference in 2017, the institute developed the Asilomar AI Principles (https://futureoflife.org/ai-principles/). There are 23 principles, relating to:

Research - Goals, Funding, Policy, Culture, Race Avoidance (speed of progress)

Ethics and Values - Safety, Failure Transparency, Judicial Transparency, Responsibility, Value Alignment, Human Values, Personal Privacy, Liberty and Privacy, Shared Benefit, Shared Prosperity, Human Control, Non-subversion, AI Arms Race

Longer-term Issues - Capability Caution, Importance, Risks, Recursive Self-improvement, Common Good

Individual organisations

Many organisations are in the process of developing their own ‘Ethics for AI’.

Google is a good example of an organisation that created an in-house AI ethics board, only to hit problems within a week of its launch.

The newly formed board, Google's Advanced Technology External Advisory Council (ATEAC), was supposed to oversee the company's work on artificial intelligence, ensure it did not cross any ethical lines, and be dedicated to “the responsible development of AI”. It was dissolved, however, after more than 2,000 Google workers signed a petition criticising the company's selection of an anti-LGBT advocate for the board.

What should you be doing?

In the same way that we manage health and safety or compliance within our organisations, if we are working with AI or developing AI systems and services then we should adopt specific AI principles and values, for example:

  • The Principle of Beneficence: “Do Good”
  • The Principle of Non-maleficence: “Do no Harm”
  • The Principle of Autonomy: “Preserve Human Agency”
  • The Principle of Justice: “Be Fair”
  • The Principle of Explicability: “Operate transparently”

These principles and values need to be applied to our individual use cases: for example, are we developing autonomous vehicles or personal care systems, or using AI to decide insurance rates?

Achieving Trustworthy AI means that the general and abstract principles documented above need to be mapped into concrete requirements for AI systems and applications.

The ten requirements listed below are derived from the rights, principles and values detailed above. While they are all equally important, the specific context of each application domain and industry needs to be considered when applying them.

The requirements for Trustworthy AI are listed in alphabetical order, to stress their equal importance. Later we provide an assessment list to support the operationalisation of these requirements.

1. Accountability

2. Data Governance

3. Design for all

4. Governance of AI Autonomy (Human oversight)

5. Non-Discrimination

6. Respect for (& Enhancement of) Human Autonomy

7. Respect for Privacy

8. Robustness

9. Safety

10. Transparency
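To make the idea of operationalising these requirements concrete, here is a minimal sketch of an assessment checklist in Python. The structure and function names are illustrative assumptions, not the official EU assessment list; a real assessment would attach detailed questions and evidence to each requirement.

```python
# Illustrative sketch: the ten Trustworthy AI requirements as a simple
# assessment checklist. Names here are hypothetical examples, not the
# official EU assessment list.

REQUIREMENTS = [
    "Accountability",
    "Data Governance",
    "Design for all",
    "Governance of AI Autonomy (Human oversight)",
    "Non-Discrimination",
    "Respect for (& Enhancement of) Human Autonomy",
    "Respect for Privacy",
    "Robustness",
    "Safety",
    "Transparency",
]

def assessment_gaps(answers: dict) -> list:
    """Return the requirements not yet addressed for a given use case."""
    return [r for r in REQUIREMENTS if not answers.get(r, False)]

# Example: a partially completed assessment for a hypothetical use case.
answers = {r: True for r in REQUIREMENTS}
answers["Transparency"] = False

print(assessment_gaps(answers))  # → ['Transparency']
```

Because the assessment list will never be exhaustive, a structure like this is best treated as a starting point to be adapted to the specific use case, as the guidance itself recommends.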

We can use both technical and non-technical methods to achieve trustworthy AI systems. For example, we have:

Technical methods

  • Ethics & Rule of law by design (X-by-design)
  • Architectures for Trustworthy AI
  • Testing & Validating
  • Traceability & Auditability
  • Explanation (XAI research)
  • CE Mark

Non-Technical Methods

  • Regulation
  • Standardization
  • Accountability Governance
  • Codes of Conduct
  • Education and awareness to foster an ethical mind-set
  • Stakeholder and social dialogue
  • Diversity and inclusive design teams

And how do we check and confirm that we have met our AI ethics principles? We do it through:

  • Accountability
  • Data governance
  • Non-discrimination
  • Design for all
  • Governing AI autonomy
  • Respect for Privacy
  • Respect for (& Enhancement of) Human Autonomy
  • Robustness
  • Reliability & Reproducibility
  • Accuracy through data usage and control
  • Fall-back plan
  • Safety
  • Transparency
  • Purpose
  • Traceability: method of building the algorithmic system; method of testing the algorithmic system

So, in Summary…

Here is a very simple mental checklist:

  • Adopt an assessment list for Trustworthy AI
  • Adapt an assessment list to your specific use case
  • Remember the assessment list will never be exhaustive
  • Trustworthy AI is not about ticking boxes
  • Trustworthy AI is about improved outcomes through the entire life-cycle of the AI system.

Rather than give you all the answers, which isn't possible in a simple blog, we hope that we have given you some food for thought and started you on the road to developing your own code of AI ethics or AI Ethics Board.

We hope that you can join us on one of our AI training courses where we can discuss the role and necessity for AI Ethics in much greater depth. Why not take a look…

Our 1-day BCS AI Essentials training course

Our 3-day BCS AI Foundation training course