AI Ethics and Governance in Practice

At a Glance

This module introduces fundamental concepts of artificial intelligence (AI), responsible research and innovation (RRI), and AI ethics and governance.

  • An AI lifecycle model that centers the sociotechnical aspect of design and use practices.
  • The CARE and Act Framework for building an RRI culture of good practice.
  • The Process-Based Governance (PBG) Framework to ensure end-to-end accountability across the AI workflow and provide a template for documenting necessary governance actions.

Workbook Summary

AI systems may have transformative and long-term effects on individuals and society. To manage these impacts responsibly and direct the development of AI systems toward optimal public benefit, considerations of AI ethics and governance must be a first priority.

In this workbook, we introduce and describe our PBG Framework, a multitiered governance model that enables project teams to integrate ethical values and practical principles into their innovation practices and to have clear mechanisms for demonstrating and documenting this.

Key Concepts

AI System

Any computational or software-based system (or a combination of such systems) that uses methods derived from statistics, other mathematical techniques, or rule-based approaches to carry out tasks that are commonly associated with, or would otherwise require, human intelligence.

Responsible Research and Innovation (RRI)

A model for reflecting on, anticipating, and deliberating about the ethical and social questions that arise in the development of AI systems. RRI provides methods for identifying and evaluating potential impacts of AI technologies and addressing challenges.

Machine Learning (ML)

A popular approach to AI that uses training data to build algorithmic models which find patterns in and draw inferences from that data. When training is completed, ML models can then ingest new or unseen data to predict outcomes for particular instances.
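
To make this pattern concrete, the brief sketch below illustrates the workflow described above: fit a model on labelled training data, then use it to predict outcomes for unseen instances. The library, dataset, and model choice (scikit-learn, the iris dataset, logistic regression) are illustrative assumptions and are not part of the workbook.

    # Minimal sketch of the ML pattern described above: train a model on
    # labelled data, then predict outcomes for unseen instances.
    # The dataset and model are illustrative assumptions only.
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)                      # labelled training data
    X_train, X_new, y_train, y_new = train_test_split(X, y, random_state=0)

    model = LogisticRegression(max_iter=1000)              # algorithmic model
    model.fit(X_train, y_train)                            # find patterns in the data
    predictions = model.predict(X_new)                     # infer outcomes for unseen data
    print(f"Accuracy on unseen data: {model.score(X_new, y_new):.2f}")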

AI Ethics

AI ethics tackles the social and moral implications of the production and use of AI technologies. It explores the values, principles, and governance mechanisms needed to ensure the responsible and trustworthy design, development, deployment, and maintenance of AI systems.

 

Sociotechnical Aspect of AI

AI projects are affected by the interconnected relation between AI technologies and the social environments in which their development and use is embedded. Both elements interact and influence each other. A sociotechnical approach treats AI systems as both social and technical constructs.

CARE and Act Framework

A tool for sense-checking and reflecting on the values, purposes, and interests that steer AI/ML projects, as well as projects’ real-world implications. This involves considering context, anticipating impacts, reflecting on purpose, engaging inclusively, and acting transparently and responsibly.

 

CARE and Act Framework

C

Consider Context

Think about the conditions and circumstances surrounding your AI project.

A

Anticipate Impacts

Describe and analyse the impacts, intended or not, that might arise from your project.

R

Reflect on Purposes, Positionality, and Power

Reflect on the goals of and motivations for the project, scrutinise perspectival limitations, and consider power imbalances.

E

Engage Inclusively

Open up such visions and questions to broader deliberation, dialogue, engagement, and debate in an inclusive way.

Act

Act Transparently and Responsibly

Use these processes to influence the direction and trajectory of the research and innovation process itself.

Adapted from EPSRC’s AREA framework

An illustration of the programme's three-level structure: the SUM Values form level 1 at the base, the SSAFE-D Principles form level 2, and the Process-Based Governance (PBG) Framework forms level 3 at the top.

The Process-Based Governance (PBG) Framework

This framework ensures end-to-end accountability and provides a template for documenting necessary governance actions.

The SSAFE-D Principles

These principles provide actionable goals that can be operationalised across the AI project lifecycle.

The SUM Values

These values support, underwrite, and motivate responsible AI projects and provide criteria for assessing their potential social and ethical impacts.

 

The Process-Based Governance (PBG) Framework

A visualisation of the PBG Framework, showing governance actions plotted around the AI/ML project lifecycle.

The purpose of the PBG Framework is to ensure that the SSAFE-D Principles are operationalised and documented in full across the AI project lifecycle. It is a template that provides a landscape view of where in the AI project workflow governance actions take place, so that each of the SSAFE-D Principles is integrated into project activities. It is accompanied by a PBG Log (a sketch of a possible log entry follows the list below), which provides documentation of:

  • Established governance actions across the project lifecycle.
  • Relevant team members and roles involved in each governance action.
  • Explicit timeframes for follow-up actions, reassessments, and continual monitoring.
  • Clear and well-defined protocols for logging activity and instituting mechanisms for end-to-end auditability.
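
As referenced above, the following is a hypothetical sketch of how a single PBG Log entry could be recorded, covering the four documentation elements listed. The field names, example values, and dataclass structure are illustrative assumptions rather than a prescribed schema.

    # Hypothetical sketch of one PBG Log entry covering the four
    # documentation elements listed above. Field names and values are
    # illustrative assumptions, not a prescribed schema.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class PBGLogEntry:
        governance_action: str        # established governance action
        lifecycle_stage: str          # where in the project lifecycle it takes place
        team_members: list[str]       # relevant team members and their roles
        follow_up_date: date          # timeframe for follow-up, reassessment, or monitoring
        logging_protocol: str         # protocol for logging activity and supporting audits

    entry = PBGLogEntry(
        governance_action="Bias self-assessment of the training dataset",
        lifecycle_stage="Data collection and preprocessing",
        team_members=["Data scientist (lead)", "Project manager (reviewer)"],
        follow_up_date=date(2026, 1, 15),
        logging_protocol="Record the outcome in the shared PBG Log and flag it for audit",
    )
    print(entry)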