Project teams regularly undertake tasks relating to the technical safety and sustainability of their AI projects. In doing so, they need to ensure that the resulting models are reproducible, robust, interpretable, reliable, performant, and secure. AI safety is of paramount importance because system failures can produce harmful outcomes and undermine public trust. Building safe AI outputs is an ongoing process that requires reflexivity and foresight. To aid teams in this, the workbook introduces the core components of AI Safety (reliability, performance, robustness, and security) and helps teams develop the anticipatory and reflective skills needed to apply these responsibly in practice. The workbook is divided into two sections, Key Concepts and Activities.
This section provides content for workshop participants and facilitators to engage with prior to attending each workshop. It covers the four safety objectives and provides case studies designed to support a practical understanding of the technical safety of AI systems. The section also sets out best practices for putting considerations of accuracy and performance, reliability, security, and robustness into operation at every stage of the AI project lifecycle.
This section contains instructions for group-based activities, each corresponding to a section in the Key Concepts. These activities are intended to deepen understanding of the Key Concepts through their practical application.
Case studies within the AI Ethics and Governance in Practice workbook series are grounded in public sector use cases, but do not reference specific AI projects.