Module Contents

Introduction to this Module

The purpose of this workbook is to introduce participants to the principle of AI Explainability. Understanding how, why, and when explanations of AI-supported or AI-generated outcomes need to be provided, and what impacted people expect these explanations to include, is crucial to fostering responsible and ethical practices within your AI projects. To guide you through this process, we will address two essential questions: What do we need to explain? And who do we need to explain this to? This workbook offers practical insights and tools to support your exploration of AI Explainability. By providing actionable approaches, we aim to equip you and your team to identify when and how to employ different types of explanations effectively. The workbook is divided into two sections: Key Concepts and Activities.

KEY CONCEPTS

This section provides content for workshop participants and facilitators to engage with prior to attending each workshop. It first defines key terms, introduces the maxims of AI Explainability and considerations for building appropriately explainable AI systems, and gives an overview of the main types of explanations. The section then turns to practical tasks and tools for ensuring AI Explainability.

ACTIVITIES

This section contains instructions for group-based activities (each corresponding to a section in Key Concepts). These activities are intended to deepen understanding of the Key Concepts by putting them into practice.

Case studies within the AI Ethics and Governance in Practice workbook series are grounded in public sector use cases, but do not reference specific AI projects.

Print this module

Download a summary of the module for offline use and printing
Download the full module, including copies for both facilitators and participants