Policy
Find our policy outputs below
Contribution to POSTnote “AI and Mental Healthcare – ethical and regulatory considerations”
In recent years the number of AI tools available for mental healthcare and wellbeing purposes has grown rapidly. This builds on a burgeoning digital health sector in which more than 20,000 wellbeing apps are reportedly available on app stores. These apps are distinct both from AI tools purpose-built for NHS use and from general companion chatbot apps, which were never intended for mental health purposes. All the cases of severe harm identified through this research arose from unintended uses of general companion chatbot apps, but ethical considerations surround the use of all AI tools in mental healthcare.
Public sector responses are underway to improve data availability and to support improvements in evidence generation and deployment. Multiple government agencies in the UK and globally are also collaborating to address the ethical challenges, building on considerable existing regulation and guidance (examples are outlined in the POSTnote).
Contribution to POSTnote “AI and Mental Healthcare – opportunities and delivery considerations”
Given increasing demand for mental healthcare and capacity challenges within services, many purpose-built AI solutions are being trialled by NHS Trusts and beyond; examples are listed within the POSTnote. To date, much of this deployment has supplemented the delivery of therapy, for example by alleviating administrative burdens. However, there is some debate over whether more autonomous solutions could work for some service users.
Research suggests that purpose-built AI solutions can be effective in reducing specific symptoms of some mental health conditions such as anxiety or depression, tracking relapse risks (for example, for psychosis), and prompting preventative behaviour change. However, contributors and systematic reviews emphasised that longer-term and larger-scale studies are needed to better identify what works for whom. The need for more evaluation of cost and efficiency-saving claims was also highlighted.
An area of particular interest to many stakeholders is precision psychiatry. These techniques harness multiple data sources, such as brain imaging, DNA or blood samples, and passive data collection from mobile phones (among many others), with the aim of making diagnosis, treatment and risk prediction more precise; large-scale trials are underway. Supporting implementation will require investment, strategy, upskilling, co-design and public trust-building.
Building Responsible AI for Mental Health: Insights from the First RAI4MH Workshop (White Paper)
Rafael Mestre, Annika Marie Schoene, Stuart E. Middleton & Agata Lapedriza.
Abstract: The Responsible AI for Mental Health (RAI4MH) workshop, held in London, gathered over 65 experts from diverse sectors to address the ethical and practical challenges of incorporating AI into mental healthcare. As mental health demands rise globally, AI is increasingly recognised for its potential to improve access, diagnostics, and support. However, ensuring responsible AI use requires robust ethical frameworks and transparent governance. The workshop included discussions on AI’s potential to enhance service accessibility and early intervention, while addressing concerns such as privacy, data security, and AI biases. Small-group sessions generated preliminary policy recommendations, emphasising infrastructure support, data security, workforce upskilling, and ethical standards for AI integration. Key recommendations include strengthening healthcare infrastructure, regularly monitoring AI’s long-term effects, and fostering public understanding through interdisciplinary collaboration and evidence sharing. These measures aim to balance innovation with patient protection, ensuring AI’s responsible and effective use in mental healthcare.