
An Explainable Artificial Intelligence Text Classifier for Suicidality Prediction in Youth Crisis Text Line Users: Development and Validation Study

JMIR Public Health and Surveillance. Vol. 11. Canada. 2025, e63809

Year of publication: 2025

ISSN: 2369-2960

Publication type: Journal article

Language: English

DOI/URN: 10.2196/63809

Full text via DOI/URN

Verified: Library

Abstract


BACKGROUND: Suicide represents a critical public health concern, and machine learning (ML) models offer the potential for identifying at-risk individuals. Recent studies using benchmark datasets and real-world social media data have demonstrated the capability of pretrained large language models in predicting suicidal ideation and behaviors (SIB) in speech and text.

OBJECTIVE: This study aimed to (1) develop and implement ML methods for predicting SIBs in a real-world crisis helpline dataset, using transformer-based pretrained models as a foundation; (2) evaluate, cross-validate, and benchmark the model against traditional text classification approaches; and (3) train an explainable model to highlight relevant risk-associated features.

METHODS: We analyzed chat protocols from adolescents and young adults (aged 14-25 years) seeking assistance from a German crisis helpline. An ML model was developed using a transformer-based language model architecture with pretrained weights and long short-term memory layers. The model predicted suicidal ideation (SI) and advanced suicidal engagement (ASE), as indicated by composite Columbia-Suicide Severity Rating Scale scores. We compared model performance against a classical word-vector-based ML model. We subsequently computed discrimination, calibration, clinical utility, and explainability information using a Shapley Additive Explanations value-based post hoc estimation model.
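The methods section mentions a Shapley Additive Explanations (SHAP) post hoc estimation model. As background, the core idea can be illustrated with an exact Shapley value computation for a single instance of a tiny model; this is a generic sketch of the attribution principle, not the authors' implementation (real SHAP libraries use efficient approximations, and the `predict`, `x`, and `baseline` names here are illustrative assumptions):

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley values for one instance of a model.

    Features absent from a coalition are replaced by their
    baseline value before calling the model. Exponential cost,
    so only feasible for a handful of features.
    """
    n = len(x)

    def value(coalition):
        # Build the input: coalition features from x, the rest from baseline.
        z = [x[i] if i in coalition else baseline[i] for i in range(n)]
        return predict(z)

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for k in range(n):  # coalition sizes 0 .. n-1 over the other features
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value(set(subset) | {i}) - value(set(subset)))
        phi.append(total)
    return phi
```

By the efficiency property, the attributions sum to the difference between the model's prediction at `x` and at the baseline; for a linear model each feature's Shapley value is simply its coefficient times its deviation from baseline.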
RESULTS: The dataset comprised 1348 help-seeking encounters (1011 for training and 337 for testing). The transformer-based classifier achieved a macroaveraged area under the curve (AUC) receiver operating characteristic (ROC) of 0.89 (95% CI 0.81-0.91) and an overall accuracy of 0.79 (95% CI 0.73-0.99). This performance surpassed the word-vector-based baseline model (AUC-ROC=0.77, 95% CI 0.64-0.90; accuracy=0.61, 95% CI 0.61-0.80). The transformer model demonstrated excellent prediction for nonsuicidal sessions (AUC-ROC=0.96, 95% CI 0.96-0.99) and good prediction for SI and ASE, with AUC-ROCs of 0.85 (95% CI 0.97-0.86) and 0.87 (95% CI 0.81-0.88), respectively. The Brier Skill Score indicated a 44% improvement in classification performance over the baseline model. The Shapley Additive Explanations model identified language features predictive of SIBs, including self-reference, negation, expressions of low self-esteem, and absolutist language.

CONCLUSIONS: Neural networks using large language model-based transfer learning can accurately identify SI and ASE. The post hoc explainer model revealed language features associated with SI and ASE. Such models may potentially support clinical decision-making in suicide prevention services. Future research should explore multimodal input features and temporal aspects of suicide risk.
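The results cite a Brier Skill Score (BSS) improvement over the baseline model. The BSS compares a model's Brier score (mean squared error of predicted probabilities) against a reference model's; a minimal sketch of the standard definition, with hypothetical example values unrelated to the study's data:

```python
def brier_score(probs, outcomes):
    """Mean squared error between predicted probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

def brier_skill_score(model_probs, reference_probs, outcomes):
    """BSS = 1 - BS_model / BS_reference.

    Positive values mean the model is better calibrated than the
    reference; 0 means no improvement; 1 would be a perfect model.
    """
    bs_model = brier_score(model_probs, outcomes)
    bs_reference = brier_score(reference_probs, outcomes)
    return 1.0 - bs_model / bs_reference
```

For example, against an uninformative reference that always predicts 0.5, a model predicting [0.9, 0.1, 0.8, 0.2] for outcomes [1, 0, 1, 0] yields a BSS of 0.9, i.e., a 90% reduction in Brier score relative to the reference.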

  • German
  • Shapley
  • adolescent
  • adolescents
  • chat protocols
  • crisis helpline
  • decision-making
  • deep learning
  • explainable artificial intelligence (XAI)
  • health informatics
  • help-seeking behaviors
  • language model
  • language models
  • large language model (LLM)
  • machine learning
  • mental health
  • mobile phone
  • neural network
  • prevention
  • public health
  • risk monitoring
  • self-harm
  • self-murder
  • suicidal ideation
  • suicidality
  • suicide
  • transformer model
  • youth

Authors


Thomas, Julia (Author)
Lucht, Antonia (Author)
Segler, Jacob (Author)
Wundrack, Richard (Author)
Miché, Marcel (Author)
Lieb, Roselind (Author)
Kuchinke, Lars (Author)

Related persons


Affiliated institutions