AI assurance guide (BETA)
Welcome to the Centre for Data Ethics and Innovation’s (CDEI) AI Assurance Guide!
This is an interactive site we have built to help organisations understand how assurance techniques can be applied to AI systems, how to deliver AI assurance engagements, and the benefits of a mature ecosystem for assuring AI. The guide is aimed primarily at assurance practitioners, AI developers, auditors and policymakers.
This guide is a companion to the CDEI's AI Assurance Roadmap, which aims to make a significant, early contribution to shaping and bringing coherence to the AI assurance ecosystem. The roadmap sets out the roles that different groups will need to play, and the steps they will need to take, to move towards a more mature ecosystem.
We want the guide to be a living document that we update regularly to reflect best practices and new developments in AI assurance. With this in mind, we welcome comments, suggestions and case studies for inclusion in the guide; please send them to ai.assurance@cdei.gov.uk.
Who is this guide for?
This guide focuses on understanding the assurance process and the delivery of AI assurance. It examines the assurance process as applied to AI systems and the range of assurance mechanisms available, and sets out how these can be applied effectively in practice. The contents of this guide will be valuable for the following groups:
- Assurance practitioners: the guide highlights the techniques and approaches required for AI assurance, the aspects of an AI system that need to be assured, and how the spectrum of assurance tools can be applied to this task.
- AI developers or adopters: the guide sets out how assurance can help you manage risk, build trust and ensure the systems you develop or adopt work effectively, and offers advice on how to demand effective and reliable assurance services.
- Regulators: the guide sets out the possible techniques and methods which could be used to build assurable requirements for AI.
- Policymakers: the guide sets out the key elements of AI assurance needed to build understanding and develop good policy in this area.
The structure of the guide
The guide is structured in four sections: (1) the background to AI assurance, (2) AI assurance engagements, (3) the AI assurance toolkit, and (4) delivering AI assurance.
- Background to AI Assurance. This section provides an overview of the purpose and value of AI assurance, and the different roles and responsibilities required in an AI assurance ecosystem. (This section will be most useful to those who are unfamiliar with AI assurance; if you are already familiar, you may want to proceed straight to the next section.)
- AI Assurance Engagements. This section looks at how assurance engagements build justified trust. We set out the key elements of an assurance engagement, demonstrate how these engagements build trust in other sectors through gathering communicable evidence of trustworthiness, and highlight the challenges of applying this model specifically to AI.
- The AI Assurance Toolkit. This section sets out the different mechanisms that can be used to assure AI systems and the organisations developing or deploying them. It examines why different assurance techniques are appropriate for different elements of AI risk, in particular the need to judge both objectively measurable performance and more uncertain, open-ended risks.
- Delivering AI Assurance. This section demonstrates how to deliver AI assurance projects. It matches assurance techniques to AI risks, sets out the role of independence in AI assurance, and proposes how responsibilities for assurance can be distributed across different parties within the assurance ecosystem. This section also includes a repository of AI assurance case studies, updated regularly to reflect the latest research.
All content is available under the Open Government Licence v3.0 except where otherwise stated.