Striving for health equity
We want to ensure that AI leads to improvements in health outcomes for minority populations.
These web pages refer to a past AI programme that has been completed.
Articles, case studies and resources on this workspace may reference work undertaken with a specific supplier to deliver the described services. However, other suppliers may offer similar solutions. NHS England does not endorse any particular supplier, and organisations should adhere to their procurement policies and commercial processes when selecting suppliers to ensure compliance and value for money.
We partnered with the Health Foundation to support research in response to concerns about algorithmic bias. A research competition, enabled by the National Institute for Health and Care Research (NIHR), was held to address the racialised impact of algorithms in health and care and to explore opportunities to improve health outcomes in minority ethnic groups.
While algorithmic bias does not only affect racialised communities, examples of deploying AI in the US indicate that there is a particular risk of algorithmic bias worsening outcomes for minority ethnic patients. At the same time, there has been limited exploration of whether and how AI can be applied to address racial and ethnic disparities in health and care.
The research competition had two categories.
Understanding and enabling opportunities to use AI to address health inequalities
This first category focused on encouraging approaches to innovation that are informed by the health needs of underserved minority ethnic communities and/or are bottom-up in nature.
Optimising datasets, and improving AI development, testing, and deployment
This second category focused on creating the conditions to facilitate the adoption of AI that serves the health needs of minority ethnic communities. For example, this may include mitigating the risks of perpetuating and entrenching racial health inequalities through data collection and selection, and during the development, testing and deployment stages.
The following 4 projects were awarded 2-year funding in October 2021.
Assessing the acceptability, utilisation and disclosure of health information to an automated chatbot for advice about sexually transmitted infections in minoritised ethnic populations
Dr Tom Nadarzynski at the University of Westminster
This project aims to raise the uptake of screening for STIs/HIV among minority ethnic communities through an automated AI-driven chatbot which provides advice about sexually transmitted infections. The research will also inform the development and implementation of chatbots designed for minority ethnic populations within the NHS and more widely in public health.
I-SIRch - Using artificial intelligence to improve the investigation of factors contributing to adverse maternity incidents involving Black mothers and families
Dr Patrick Waterson and Dr Georgina Cosma at Loughborough University
This project uses AI to investigate factors contributing to adverse maternity incidents amongst mothers from different ethnic groups. This research will provide a way of understanding how a range of causal factors combine, interact and lead to maternal harm. The aim is to inform the design of interventions that are targeted and more effective for these groups.
Ethnic differences in performance and perceptions of AI retinal image analysis systems (ARIAS) for the detection of diabetic retinopathy in the NHS Diabetic Screening Programme
Professor Alicja Rudnicka (St. George's Hospital) and Professor Adnan Tufail (Moorfields Eye Hospital and Institute of Ophthalmology, UCL). Co-investigators: The Homerton University Hospital, Kingston University, and University of Washington, USA.
This project aims to ensure that AI technologies that detect diabetic retinopathy work for all, by validating the performance of AI retinal image analysis systems that will be used in the NHS Diabetic Eye Screening Programme (DESP) in different subgroups of the population. This study will provide evidence of effectiveness and safety prior to potential commissioning and deployment within the NHS.
STANDING together (STANdards for Data INclusivity and Generalisability)
Dr Xiaoxuan Liu and Professor Alastair Denniston at University Hospitals Birmingham NHS Foundation Trust
University Hospitals Birmingham NHS Foundation Trust and partners will lead STANDING Together, an international consensus process to produce standards for datasets underpinning AI systems, to ensure they are diverse, inclusive and can support the development of AI systems which work across all demographic groups. The resulting standards will help inform regulators, commissioners, policy-makers and health data institutions on whether AI systems are underpinned by datasets which represent everyone and don’t risk leaving underrepresented and minority groups behind.
The project held a public consultation on its draft recommendations for ensuring datasets are diverse and inclusive. The consultation closed on 26 May; feedback was gathered via the STANDING Together website.
Last edited: 27 February 2026 11:44 am