This module introduces fundamental concepts of artificial intelligence (AI), responsible research and innovation (RRI), and AI ethics and governance.
AI systems can have transformative and long-term effects on individuals and society. To manage these impacts responsibly and to direct the development of AI systems toward optimal public benefit, considerations of AI ethics and governance must be a first priority.
In this workbook, we introduce and describe our PBG Framework, a multitiered governance model that enables project teams to integrate ethical values and practical principles into their innovation practices and to have clear mechanisms for demonstrating and documenting this.
Artificial Intelligence (AI)
Any computational or software-based system (or a combination of such systems) that uses methods derived from statistics, other mathematical techniques, or rule-based approaches to carry out tasks that are commonly associated with, or would otherwise require, human intelligence.
Responsible Research and Innovation (RRI)
A model for reflecting on, anticipating, and deliberating about the ethical and social questions that arise in the development of AI systems. RRI provides methods for identifying and evaluating the potential impacts of AI technologies and for addressing the challenges they raise.
Machine Learning (ML)
A popular approach to AI that uses training data to build algorithmic models which find patterns in and draw inferences from that data. Once trained, ML models can ingest new or unseen data to predict outcomes for particular instances.
AI Ethics
AI ethics tackles the social and moral implications of the production and use of AI technologies. It explores the values, principles, and governance mechanisms needed to ensure the responsible and trustworthy design, development, deployment, and maintenance of AI systems.
A Sociotechnical Approach
AI projects are shaped by the interconnected relation between AI technologies and the social environments in which their development and use are embedded. Both elements interact with and influence each other. A sociotechnical approach therefore treats AI systems as both social and technical constructs.
A tool for sense-checking and reflecting on the values, purposes, and interests that steer AI/ML projects, as well as projects’ real-world implications. This involves considering context, anticipating impacts, reflecting on purpose, engaging inclusively, and acting transparently and responsibly.
Think about the conditions and circumstances surrounding your AI project.
Describe and analyse the impacts, intended or not, that might arise from your project.
Reflect on the goals of and motivations for the project; scrutinise perspectival limitations; and reflect on power imbalances.
Open up these visions and questions to broader deliberation, dialogue, engagement, and debate in an inclusive way.
Use these processes to influence the direction and trajectory of the research and innovation process itself.
Adapted from EPSRC’s AREA framework
The Process-Based Governance (PBG) Framework
This framework ensures end-to-end accountability and provides a template for documenting necessary governance actions.
The SSAFE-D Principles
These principles provide actionable goals that can be operationalised across the AI project life cycle.
The SUM Values
These values support, underwrite, and motivate responsible AI projects and provide criteria for assessing their potential social and ethical impacts.
The purpose of the PBG Framework is to ensure that the SSAFE-D Principles are operationalised and documented in their entirety across the AI project life cycle. The framework is a template that offers a landscape view of the AI project workflow, showing where governance actions should take place in order to integrate each of the SSAFE-D Principles into project activities. It is accompanied by a PBG Log, which provides documentation of: