Artificial intelligence guidance for health and care professionals
Artificial intelligence (AI) is the use of digital technology to create systems capable of performing tasks commonly thought to require human intelligence. This guidance focuses on the information governance implications of using AI in health and care settings, and should support the lawful and safe use of data for AI innovations.
Data can lawfully be used to support AI developments. Your information governance (IG) lead, Data Protection Officer (DPO) and Caldicott Guardian should be involved in any decision to implement or share data to develop AI technology.
If you are using AI-based technology and you have any concerns or questions about the results, for example false outputs or inconsistent results, you should raise these within your organisation, usually via your clinical management route. This is important not only from a clinical perspective, but also to ensure that data is being used fairly and appropriately. For example, irregular results may indicate bias or inaccuracy in the data which has been used to train the system.
Although AI-based technology is a useful tool to support you in your role, such as aiding clinical decision making, the final decision about the care that people receive should be made in consultation with the patient or service user, using your professional judgement.
People may have questions about how their information is used by AI products or processes. You should discuss any concerns with them or refer them to your IG lead, DPO or Caldicott Guardian. Your organisation’s privacy notice should also provide details about how information is being used and shared and the choices people have.
Last edited: 11 May 2026 2:14 pm