Governing the use of data for AI

We want to involve patients and the public in deciding how and why access to health data should be granted for AI purposes, and are working closely with the AI Imaging team on these projects.

These web pages refer to an AI programme that has now been completed.

Articles, case studies and resources on this workspace may reference work undertaken with a specific supplier to deliver the described services. However, other suppliers may offer similar solutions. NHS England does not endorse any particular supplier, and organisations should adhere to their procurement policies and commercial processes when selecting suppliers to ensure compliance and value for money.


Honing approaches to data stewardship

We have partnered with Sciencewise (UKRI) to hold a public dialogue that will inform which model(s) of data stewardship the AI Ethics Initiative should invest in developing and refining through further research, with reference to national medical imaging assets.

Data stewardship describes practices relating to the collection, management and use of data. There is a growing debate about what a ‘responsible’ approach to data stewardship entails, with some advocating for a more participatory approach. The AI Ethics Initiative is seeking to ensure that the data stewardship model used for national (medical imaging) assets inspires confidence among patients, the public and key stakeholders. The central question we will seek to explore is how access to data for AI purposes should be granted.

The participants in the dialogue will inform the Terms of Reference for a research competition (a ‘Participatory Fund for Patient-Driven AI Ethics Research’) that we will hold to improve data stewardship approaches for national medical imaging assets established by the NHS AI Lab and more broadly across the NHS.

There is an Oversight Group in place to provide advice on the dialogue process and materials. We are grateful to the following individuals for their time and invaluable input as members of this group:

Oversight group members

Natalie Banner (Chair), Genomics England

Kira Allmann, Ada Lovelace Institute

Phil Booth, medConfidential

Sophie Brannan, British Medical Association

Margaret Charleroy, Centre for Improving Data Collaborations, NHS Transformation Directorate

Vicky Chico, Office of the National Data Guardian

Mark Halling-Brown, Royal Surrey County Hospital

Ruth Keeling, Data Policy, NHS Transformation Directorate

Jasmine Leonard, Freelance

Sinduja Manohar, HDR UK

Joseph Savirimuthu, University of Liverpool

Laurence Thorne, Data Policy, NHS Transformation Directorate

Susheel Varma, ICO

Joseph Watts, Data Analytics, NHS Transformation Directorate


Read the findings from the public dialogue in the final report from Ipsos, Imperial College Health Partners and the Open Data Institute. 


Improving how decisions about data access are made

We have partnered with the Ada Lovelace Institute to design a model for an Algorithmic Impact Assessment (AIA), which is a tool that enables users to assess the possible societal impacts of an algorithmic system before it is used.

The AIA is being trialled as part of the data access process for national medical imaging assets, such as the National Covid-19 Chest Imaging Database. It will entail researchers and developers engaging with patients and the public about the risks and benefits of their proposed AI solutions, prior to gaining access to medical imaging data for training or testing. The AIA thus helps address the question of why access to data for AI purposes should be granted.

Through the trial, we hope to demonstrate the value of involving patients and the public earlier in the development process, when there is greater flexibility to make adjustments and address possible concerns about AI systems.

Read the initial report, proposed user guide and AIA template. 

Last edited: 27 February 2026 11:43 am