Publications
Find our publications below
Lexicography Saves Lives (LSL): Automatically Translating Suicide-Related Language
Annika Marie Schoene, John E. Ortega, Rodolfo Joel Zevallos & Laura Haaber Ihle.
Abstract: Recent years have seen a marked increase in research that aims to identify or predict risk, intention or ideation of suicide. The majority of new tasks, datasets, language models and other resources focus on English and on suicide in the context of Western culture. However, suicide is a global issue, and reducing the suicide rate by 2030 is one of the key goals of the UN’s Sustainable Development Goals. Previous work has translated English dictionaries related to suicide into different target languages due to a lack of other available resources. Naturally, this leads to a variety of ethical tensions (e.g., linguistic misrepresentation) where the discourse around suicide is not present in a particular culture or country. In this work, we introduce the ‘Lexicography Saves Lives Project’ to address this issue and make three distinct contributions. First, we outline ethical considerations and provide overview guidelines to mitigate harm in developing suicide-related resources. Next, we translate an existing dictionary related to suicidal ideation into 200 different languages and conduct human evaluations on a subset of the translated dictionaries. Finally, we introduce a public website to make our resources available and enable community participation.
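The paper does not publish its translation pipeline, but a minimal sketch of the dictionary-translation step might look like the following, assuming an open multilingual MT model such as NLLB-200 (which covers roughly 200 languages) accessed through the Hugging Face transformers library; the dictionary entries and target-language codes below are illustrative placeholders, not the project's actual resource.

```python
# Minimal sketch: translating a suicide-related dictionary into many target
# languages with an NLLB-style model. The paper does not specify its MT
# system; the model choice and language codes here are assumptions.
from transformers import pipeline

# Hypothetical dictionary entries (placeholders, not the actual resource).
entries = ["hopelessness", "self-harm", "crisis helpline"]

# FLORES-200 codes for a few example target languages.
target_langs = ["spa_Latn", "deu_Latn", "hin_Deva"]

translations = {}
for tgt in target_langs:
    translator = pipeline(
        "translation",
        model="facebook/nllb-200-distilled-600M",
        src_lang="eng_Latn",
        tgt_lang=tgt,
    )
    translations[tgt] = [t["translation_text"] for t in translator(entries)]

print(translations)
```

In practice, each translated dictionary would then go to human evaluators, as the abstract describes for a subset of languages.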
Building Responsible AI for Mental Health: Insights from the First RAI4MH Workshop (White Paper)
Rafael Mestre, Annika Marie Schoene, Stuart E. Middleton & Agata Lapedriza.
Abstract: The Responsible AI for Mental Health (RAI4MH) workshop, held in London, gathered over 65 experts from diverse sectors to address the ethical and practical challenges of incorporating AI into mental health care. As mental health demands rise globally, AI is increasingly recognized for its potential to improve access, diagnostics, and support. However, ensuring responsible AI use requires robust ethical frameworks and transparent governance. The workshop included discussions on AI’s potential to enhance service accessibility and early intervention, while addressing concerns such as privacy, data security, and AI biases. Small group sessions generated preliminary policy recommendations, emphasizing infrastructure support, data security, workforce upskilling, and ethical standards for AI integration. Key recommendations include strengthening healthcare infrastructure, regular monitoring of AI’s long-term effects, and fostering public understanding through interdisciplinary collaboration and evidence sharing. These measures aim to balance innovation with patient protection, ensuring AI’s responsible and effective use in mental health care.
All Models are Wrong, But Some are Deadly: Inconsistencies in Emotion Detection in Suicide-related Tweets
Annika Marie Schoene, Resmi Ramachandranpillai, Tomo Lazovich & Ricardo A. Baeza-Yates.
Abstract: Recent work in psychology has shown that people who experience mental health challenges are more likely to express their thoughts, emotions, and feelings on social media than to share them with a clinical professional. Distinguishing suicide-related content, such as suicide mentioned in a humorous context, from genuine expressions of suicidal ideation is essential to better understanding context and risk. In this paper, we present a first analysis of the differences between emotion labels annotated by humans and labels predicted by three fine-tuned language models (LMs) for suicide-related content. We find that (i) there is little agreement between LMs and humans on emotion labels for suicide-related Tweets and (ii) individual LMs predict similar emotion labels across all suicide-related categories. Our findings lead us to question the credibility and usefulness of such methods in high-risk scenarios such as suicide ideation detection.
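A minimal sketch of the kind of agreement analysis the abstract describes is shown below, comparing human emotion labels against a model's predictions with Cohen's kappa from scikit-learn; the labels are made-up placeholders, not data from the paper.

```python
# Sketch: chance-corrected agreement between human emotion labels and a
# fine-tuned LM's predictions. Labels are illustrative placeholders.
from collections import Counter

from sklearn.metrics import cohen_kappa_score

human_labels = ["sadness", "joy", "anger", "sadness", "fear", "sadness"]
model_labels = ["sadness", "sadness", "anger", "neutral", "fear", "neutral"]

# Cohen's kappa corrects raw agreement for agreement expected by chance.
kappa = cohen_kappa_score(human_labels, model_labels)
print(f"Cohen's kappa (human vs. model): {kappa:.2f}")

# Counting (human, model) label pairs shows where the model diverges.
disagreements = Counter(
    (h, m) for h, m in zip(human_labels, model_labels) if h != m
)
print(disagreements.most_common())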
Automatically extracting social determinants of health for suicide: a narrative literature review
Annika Marie Schoene, Suzanne Garverich, Iman Ibrahim, Sia Shah, Benjamin Irving & Clifford C. Dacso.
Abstract: Suicide is a complex phenomenon that is often not preceded by a diagnosed mental health condition, which makes it difficult to study and mitigate. Artificial Intelligence has increasingly been used to better understand the Social Determinants of Health (SDoH) factors that influence suicide outcomes. In this review, we find that many studies use limited SDoH information and that minority groups are often underrepresented, thereby omitting important factors that could influence the risk of suicide.
AI for Defence: Readiness, Resilience and Mental Health
Stuart E. Middleton, Daniel Leightley, Patrick Hinton, Sarah Ashbridge, Daniel A. Adler, Alec Banks, Maria Liakata, Brant Chee & Ana Basiri.
Abstract: AI is a cross-cutting technology that is having a major impact on behavioural analysis in both the defence and mental health domains. Employing AI well may boost the readiness and resilience of military personnel. Stuart Middleton and his co-authors explore how AI is being used today in research and practice for mental health in the defence domain. They identify key current challenges, and signpost the important trends that may help to build bridges between these domains for the ultimate benefit of both.
Extracting and Summarizing Evidence of Suicidal Ideation in Social Media Contents Using Large Language Models
Loitongbam Gyanendro Singh, Junyu Mao, Rudra Mutalik & Stuart E. Middleton.
Abstract: This paper explores the use of Large Language Models (LLMs) in analyzing social media content for mental health monitoring, specifically focusing on detecting and summarizing evidence of suicidal ideation. We utilized the LLMs Mixtral-8x7B and Tulu-2-DPO-70B, applying diverse prompting strategies for effective content extraction and summarization. Our methodology included detailed analysis through Few-shot and Zero-shot learning, evaluating the effectiveness of Chain-of-Thought and Direct prompting strategies. The study achieved notable success in the CLPsych 2024 shared task (ranked first for the evidence extraction task and second for the summarization task), demonstrating the potential of LLMs in mental health interventions and setting a precedent for future research in digital mental health monitoring.
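To illustrate the contrast between Direct and Chain-of-Thought prompting that the abstract mentions, a sketch of two zero-shot prompt templates is given below; these templates are illustrative assumptions, not the exact prompts used in the shared-task submission.

```python
# Illustrative Direct vs. Chain-of-Thought (CoT) zero-shot templates for
# evidence extraction. Not the paper's actual prompts.
DIRECT_TEMPLATE = (
    "Post: {post}\n"
    "Extract the spans of text that provide evidence of suicidal ideation. "
    "Return the spans verbatim, one per line."
)

COT_TEMPLATE = (
    "Post: {post}\n"
    "Think step by step: first describe the emotional state expressed, then "
    "identify which phrases signal suicidal ideation, and finally return "
    "those phrases verbatim, one per line."
)

def build_prompt(post: str, strategy: str = "direct") -> str:
    """Fill the chosen template with a social media post."""
    template = DIRECT_TEMPLATE if strategy == "direct" else COT_TEMPLATE
    return template.format(post=post)

print(build_prompt("Example post text.", strategy="cot"))
```

A Few-shot variant would prepend worked examples of posts with their extracted evidence spans to either template.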
ConversationMoC: Encoding Conversational Dynamics using Multiplex Network for Identifying Moment of Change in Mood and Mental Health Classification
Loitongbam Gyanendro Singh, Stuart E. Middleton, Tayyaba Azim, Elena Nichele, Pinyi Lyu & Santiago De Ossorno Garcia.
Abstract: Understanding mental health conversation dynamics is crucial, yet prior studies have often overlooked the intricate interplay of social interactions. This paper introduces a unique conversation-level dataset and investigates the impact of conversational context in detecting Moments of Change (MoC) in individual emotions and classifying Mental Health (MH) topics in discourse. In this study, we differentiate between analyzing individual posts and studying entire conversations, using sequential and graph-based models to encode the complex conversation dynamics. Further, we incorporate emotion and sentiment dynamics with social interactions using a graph multiplex model driven by Graph Convolution Networks (GCN). Comparative evaluations consistently highlight the enhanced performance of the multiplex network, especially when combining reply, emotion, and sentiment network layers. This underscores the importance of understanding the intricate interplay between social interactions, emotional expressions, and sentiment patterns in conversations, especially within online mental health discussions. We are sharing our new dataset (ConversationMoC) and code with the broader research community to facilitate further research.
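A minimal sketch of a multiplex GCN in the spirit the abstract describes is given below, assuming PyTorch Geometric: one GCN per network layer (reply, emotion, sentiment) over shared node features, fused by simple averaging. The dimensions, fusion scheme, and layer names are illustrative assumptions, not the paper's exact architecture.

```python
# Sketch: multiplex GCN with one GCNConv per network layer (reply, emotion,
# sentiment), fused by averaging. Illustrative, not the paper's model.
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv

class MultiplexGCN(nn.Module):
    def __init__(self, in_dim: int, hidden_dim: int, num_classes: int):
        super().__init__()
        # One GCN per multiplex layer; all operate on the same node features.
        self.reply_gcn = GCNConv(in_dim, hidden_dim)
        self.emotion_gcn = GCNConv(in_dim, hidden_dim)
        self.sentiment_gcn = GCNConv(in_dim, hidden_dim)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, x, reply_edges, emotion_edges, sentiment_edges):
        # Encode each layer of the multiplex network separately.
        h_reply = torch.relu(self.reply_gcn(x, reply_edges))
        h_emotion = torch.relu(self.emotion_gcn(x, emotion_edges))
        h_sentiment = torch.relu(self.sentiment_gcn(x, sentiment_edges))
        # Fuse the layer-specific node embeddings (simple average here).
        h = (h_reply + h_emotion + h_sentiment) / 3
        return self.classifier(h)
```

Averaging is the simplest fusion choice; concatenation or attention over the layer-specific embeddings would be natural alternatives when one network layer is more informative than the others.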