
Getting Started With AI Ethics


The world of Artificial Intelligence is constantly changing, and the rate of change is not linear… you may not have noticed, but it is growing at an exponential rate, which basically means it is accelerating faster and faster every year. Some would argue that it is potentially out of control and we need to put the brakes on; others suggest that we just let it run and see where it goes.

Many people fear that not all AI developments are beneficial to humankind or in society's best interest. For example, AI can be weaponised; it can intrude on our lives and privacy by listening to and watching everything we do; and covert AI systems can identify us without our consent. There is also the potential for control of our personal data to be taken away from us. So how should AI be managed and controlled?

Let’s consider the ethical case…

Firstly, What Are Ethics?

Ethics are loosely linked to morals; in fact, many people use them interchangeably. Basically, both ethics and morals relate to “right” and “wrong” conduct, however…

Morals tend to refer to an individual's own personal principles regarding right and wrong.

Ethics generally refer to rules provided by an external source or community, e.g., codes of conduct in the workplace or an agreed code of ethics for a profession like medicine or law.

Ethics as a field of study is centuries old and centers on questions like:

  • ‘What is a good action?’
  • ‘What is right?’
  • and in some instances, ‘What is the good life?’

The Role Of AI Ethics

We all want beneficial AI systems that we can trust, so the achievement of trustworthy AI draws heavily on the field of ethics.

AI ethics could be considered a sub-field of the applied ethics of technology; it focuses on the ethical issues raised by the design, development, implementation and use of AI technologies.

The goal of AI ethics is, therefore, to identify how AI can advance, or raise concerns about, the good life of individuals, whether in terms of quality of life, mental autonomy, or the freedom to live in a democratic society.

It concerns itself with issues of diversity and inclusion (with regards to training data and the ends to which AI serves) as well as issues of distributive justice (who will benefit from AI and who will not).

So, What Is Already In Place?

There is already a lot of legislation in place, such as the Human Rights Act and data protection legislation, which protects our basic human rights. But this is often abused by the state and private organisations alike, so why should we trust the creators of AI systems and services to operate responsibly?

AI Ethics – So Who Is Doing What?

There are several bodies currently working on codes of ethics for Artificial Intelligence. Two key bodies are the European Union and the Future of Life Institute.

The European Union - The Ethics Guidelines For Trustworthy AI (EU Founded)


The European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG) has been developing AI ethics guidelines for trustworthy Artificial Intelligence.

The guidance states that trustworthy AI basically has two components:

(1) it should respect fundamental rights, applicable regulation, and core principles and values, ensuring an “ethical purpose” and

(2) it should be technically robust and reliable since, even with good intentions, a lack of technical mastery can cause unintentional harm.

It also considers:

Rights - A collection of entitlements that a person may have, protected by the government and the courts.

Values - Ethical ideals or beliefs for which a person has an enduring preference; they shape our state of mind and act as motivators.

Principles - Fundamental, well-settled rules of law or standards of good behaviour; collectively, they are our moral or ethical standards.

The latest revision of the EU ethical guidance on AI was produced in March 2019. Here is a link to the report (you will need to create your own login):

https://ec.europa.eu/futurium/en/system/files/ged/ai_hleg_draft_ethics_guidelines_18_december.pdf

The Future Of Life Institute (US Founded)

At its Beneficial AI conference in 2017, the Future of Life Institute developed the Asilomar AI Principles (https://futureoflife.org/ai-principles/). There are 23 principles, relating to:

Research - Goals, Funding, Policy, Culture, Race Avoidance (regarding the speed of progress)

Ethics & Values - Safety, Failure Transparency, Judicial Transparency, Responsibility, Value Alignment, Human Values, Personal Privacy, Liberty and Privacy, Shared Benefit, Shared Prosperity, Human Control, Non-subversion, AI Arms Race

Longer-Term Issues - Capability Caution, Importance, Risks, Recursive Self-improvement, Common Good

Individual Organisations

Many organisations are in the process of developing their own ‘Ethics for AI’.

Google is a good example of an organisation that decided to create an in-house AI ethics board, only to hit problems within a week of its launch.

The newly formed board, Google's Advanced Technology External Advisory Council (ATEAC), was supposed to oversee the company's work on artificial intelligence, ensure it didn't cross any lines, and be dedicated to “the responsible development of AI”. It was, however, dissolved after more than 2,000 Google workers signed a petition criticising the company's selection of an anti-LGBT advocate.

What Should You Be Doing?

In the same way that we manage health and safety or compliance within our organisations, if we are working with AI or developing AI systems and services, then we should adopt specific AI principles and values, for example:

  • The Principle of Beneficence: “Do Good”
  • The Principle of Non-Maleficence: “Do No Harm”
  • The Principle of Autonomy: “Preserve Human Agency”
  • The Principle of Justice: “Be Fair”
  • The Principle of Explicability: “Operate Transparently”

These principles and values need to be applied to our individual use cases: for example, are we developing autonomous vehicles or personal care systems, or using AI to set insurance premiums?

Achieving Trustworthy AI means that the general and abstract principles documented above need to be mapped into concrete requirements for AI systems and applications.

The ten requirements documented below have been derived from the rights, principles, and values detailed previously. While they are all equally important, the specific context of each application domain and industry needs to be considered when applying them.

The requirements for trustworthy AI are listed in alphabetical order, to stress the equal importance of all of them. Later we provide an assessment list to support the operationalisation of these requirements; a minimal code sketch of such a list also follows the numbered requirements below.

1. Accountability

2. Data Governance

3. Design for all

4. Governance of AI Autonomy (Human oversight)

5. Non-Discrimination

6. Respect for (& Enhancement of) Human Autonomy

7. Respect for Privacy

8. Robustness

9. Safety

10. Transparency
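
To make this mapping concrete, here is a minimal sketch in Python (the class, field names, and example questions are our own illustration, not part of the EU guidance) of how the ten requirements might be turned into a machine-readable assessment list that a project team can fill in and track:

```python
from dataclasses import dataclass

@dataclass
class AssessmentItem:
    requirement: str          # one of the ten Trustworthy AI requirements
    question: str             # a concrete check derived from the requirement
    satisfied: bool = False   # filled in during the assessment
    evidence: str = ""        # link to the document or test that proves it

# A hypothetical starter list; a real one must be adapted to the use case
# and, as noted later, will never be exhaustive.
ASSESSMENT_LIST = [
    AssessmentItem("Accountability",
                   "Is there a named owner for harms caused by the system?"),
    AssessmentItem("Data Governance",
                   "Is the provenance of all training data recorded?"),
    AssessmentItem("Non-Discrimination",
                   "Have outcomes been compared across protected groups?"),
    AssessmentItem("Transparency",
                   "Can each decision be traced to a model version and its inputs?"),
]

def open_items(items):
    """Return the checks that still need to be satisfied."""
    return [item for item in items if not item.satisfied]

for item in open_items(ASSESSMENT_LIST):
    print(f"TODO [{item.requirement}]: {item.question}")
```

The value is not in the code itself but in the discipline it enforces: every abstract requirement becomes a concrete question with evidence attached.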

We can use both technical and non-technical methods to achieve trustworthy AI systems. For example, we have:

Technical Methods

  • Ethics & Rule of law by design (X-by-design)
  • Architectures for Trustworthy AI
  • Testing & Validating (see the sketch after this list)
  • Traceability & Auditability
  • Explanation (XAI research)
  • CE Mark
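
To illustrate the ‘Testing & Validating’ method, here is a minimal sketch in Python of one common non-discrimination check: comparing a model’s approval rates across groups and flagging a low ‘disparate impact’ ratio. The sample data, group labels, and the 0.8 ‘four-fifths rule’ threshold are illustrative conventions from fairness testing, not something the EU guidance prescribes:

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: (group, approved) pairs taken from a model under test."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(decisions):
    """Ratio of the lowest group selection rate to the highest.

    Values below roughly 0.8 (the 'four-fifths rule') are a common
    warning sign that the system needs a closer non-discrimination review.
    """
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical audit sample: (group label, was the application approved?)
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]

ratio = disparate_impact(sample)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Warning: review the model for possible discrimination.")
```

A check like this belongs in the same automated test suite as accuracy tests, so that a fairness regression fails the build just as a functional regression would.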

Non-Technical Methods

  • Regulation
  • Standardization
  • Accountability Governance
  • Codes of Conduct
  • Education and awareness to foster an ethical mind-set
  • Stakeholder and social dialogue
  • Diversity and inclusive design teams

And how do we check and confirm that we have hit our AI Ethics principles? We do it through:

  • Accountability
  • Data governance
  • Non-discrimination
  • Design for all
  • Governing AI autonomy
  • Respect for Privacy
  • Respect for (& Enhancement of) Human Autonomy
  • Robustness
  • Reliability & Reproducibility
  • Accuracy through data usage and control
  • Fall-back plan
  • Safety
  • Transparency
  • Purpose
  • Traceability: the method of building and the method of testing the algorithmic system (see the sketch below)
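
The traceability item in particular lends itself to automation. Here is a minimal sketch in Python (the file name and record fields are hypothetical) of logging how an algorithmic system was built and tested, so that an auditor can later reconstruct what was done:

```python
import json
from datetime import datetime, timezone

def record_build(model_name, training_data_ref, test_results,
                 path="audit_log.jsonl"):
    """Append one traceability record describing a build of the system."""
    entry = {
        "model": model_name,
        "built_at": datetime.now(timezone.utc).isoformat(),
        "training_data": training_data_ref,  # provenance of the data used
        "test_results": test_results,        # e.g. accuracy, fairness checks
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Hypothetical usage after a training run:
record_build(
    model_name="insurance-rater-v3",
    training_data_ref="claims-dataset-2023, version 4",
    test_results={"accuracy": 0.91, "disparate_impact": 0.84},
)
```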

In Summary…

Here is a very simple mental checklist:

  • Adopt an assessment list for Trustworthy AI
  • Adapt an assessment list to your specific use case
  • Remember the assessment list will never be exhaustive
  • Trustworthy AI is not about ticking boxes
  • Trustworthy AI is about improved outcomes through the entire life-cycle of the AI system.

Rather than give you all the answers, which isn't possible in a simple blog, we hope that we have given you some food for thought and started you on the road to developing your own code of AI ethics or your own AI ethics board.

We hope that you can join us on one of our AI training courses where we can discuss the role and necessity for AI Ethics in much greater depth. Why not take a look…

BCS Artificial Intelligence (AI) Essentials Certificate

BCS Artificial Intelligence (AI) Foundation Certificate

About The Author

Steve Lawless


I've worked in IT for over forty years and spent the last twenty in training and consultancy roles. Since starting Purple Griffon in 2002 I've taught over three thousand individuals in a variety of subjects. I hold qualifications in all four versions of ITIL®, ITAM, UX, BRM, SLM, SIAM, VeriSM, and AI, and co-authored the BCS AI Foundation book. Outside of work, I enjoy skiing (or rather falling over at high speed), reading, science and technology, and spending time with my loved ones.

Tel: +44 (0)1539 736 828
