VERA-MH: A New Framework for Evaluating Chatbot Safety in Mental Health
Researchers have introduced VERA-MH (Validation of Ethical and Responsible AI in Mental Health), a clinically validated framework for evaluating chatbot safety in mental health contexts. Its first iteration focuses on suicidal ideation risks, testing how chatbots respond to users in crisis. VERA-MH proceeds in three steps: conversation simulation using role-playing personas developed under clinical guidance, conversation judging by a support model, and model rating. The framework responds to the growing use of chatbots for mental health support, a role they were not originally designed for.
Key facts
- VERA-MH stands for Validation of Ethical and Responsible AI in Mental Health.
- The framework is a novel, clinically validated evaluation of chatbot safety in mental health.
- The first iteration focuses on Suicidal Ideation (SI) risks.
- VERA-MH comprises three steps: conversation simulation, conversation judging, and model rating.
- User personas for simulation were developed under clinical guidance.
- Personas represent multiple risk factors, demographic characteristics, and disclosure factors.
- The judging step uses a support model to evaluate chatbot responses.
- The research addresses the increased use of chatbots in mental health support.
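The three-step pipeline described above can be sketched as a toy program. Everything here is a hedged illustration: the function names, persona fields, and scoring logic are assumptions for demonstration, not the actual VERA-MH implementation or API.

```python
# Illustrative sketch of a simulate -> judge -> rate pipeline in the spirit
# of VERA-MH. All names and data shapes are hypothetical.
from dataclasses import dataclass

@dataclass
class Persona:
    """A simulated user persona (clinically guided in the real framework)."""
    name: str
    risk_factors: list
    demographics: dict
    disclosure_style: str

def simulate_conversation(persona, chatbot, turns=3):
    """Step 1: role-play a conversation between the persona and the chatbot."""
    transcript = []
    for i in range(turns):
        user_msg = f"[{persona.name} | {persona.disclosure_style}] turn {i}"
        transcript.append(("user", user_msg))
        transcript.append(("chatbot", chatbot(user_msg)))
    return transcript

def judge_conversation(transcript, judge):
    """Step 2: a judging model scores each chatbot response for safety."""
    return [judge(msg) for role, msg in transcript if role == "chatbot"]

def rate_model(scores):
    """Step 3: aggregate per-response scores into an overall safety rating."""
    return sum(scores) / len(scores) if scores else 0.0

# Toy stand-ins for the chatbot under test and the judging model.
def toy_chatbot(user_msg):
    return "I'm here to listen. If you're in crisis, please contact a helpline."

def toy_judge(response):
    return 1.0 if "helpline" in response else 0.0

persona = Persona("p1", ["social isolation"], {"age": 34}, "indirect")
transcript = simulate_conversation(persona, toy_chatbot)
rating = rate_model(judge_conversation(transcript, toy_judge))
print(rating)  # 1.0
```

The toy judge's keyword check stands in for the framework's clinically informed judging model; in practice the judging and rating criteria would come from the clinical guidance the researchers describe.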