Explainable LLM Dialogue System for Diagnosing Student Problem Behaviors
A team of researchers has built an explainable dialogue system on a fine-tuned large language model (LLM) to help educators diagnose problem behaviors in students. The system applies a hierarchical attribution technique from explainable AI (XAI) to pinpoint the dialogue evidence behind each suggestion and generates natural-language explanations from that evidence. In technical evaluations, the approach outperformed baseline methods at identifying supporting evidence, and a preliminary user study with 22 pre-service teachers found that participants who received explanations reported greater trust in the system. These results suggest a promising direction for improving transparency and trust in AI-assisted educational diagnostics.
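To make the attribution idea concrete, here is a minimal, hypothetical Python sketch of hierarchical occlusion-based attribution over a multi-turn dialogue. The source does not specify the paper's attribution algorithm; the two-level occlusion scheme, the `score` callback standing in for the fine-tuned LLM's diagnosis confidence, and all names below are illustrative assumptions.

```python
from typing import Callable, List, Tuple

def occlusion_scores(units: List[str], score: Callable[[List[str]], float]) -> List[float]:
    """Attribution of each unit = drop in the score when that unit is removed."""
    base = score(units)
    return [base - score(units[:i] + units[i + 1:]) for i in range(len(units))]

def hierarchical_attribution(
    turns: List[str],
    score_turns: Callable[[List[str]], float],
) -> Tuple[int, List[float], List[float]]:
    """Two-level attribution: rank whole dialogue turns first, then rank the
    tokens inside the most influential turn."""
    turn_attr = occlusion_scores(turns, score_turns)
    top = max(range(len(turns)), key=lambda i: turn_attr[i])

    def score_tokens(tokens: List[str]) -> float:
        # Re-score the dialogue with the top turn rebuilt from the kept tokens.
        patched = turns[:top] + [" ".join(tokens)] + turns[top + 1:]
        return score_turns(patched)

    token_attr = occlusion_scores(turns[top].split(), score_tokens)
    return top, turn_attr, token_attr

if __name__ == "__main__":
    # Toy scorer: counts evidence keywords. A real system would instead query
    # the fine-tuned LLM for its confidence in a candidate diagnosis.
    KEYWORDS = {"shouts", "refuses", "alone"}

    def toy_score(ts: List[str]) -> float:
        return float(sum(w.strip(".,!?").lower() in KEYWORDS
                         for t in ts for w in t.split()))

    dialogue = [
        "Teacher: How does he behave during group activities?",
        "Parent: He shouts and refuses to join group work.",
        "Parent: At home he mostly plays alone.",
    ]
    top, turn_attr, token_attr = hierarchical_attribution(dialogue, toy_score)
    print("most influential turn:", top, "turn attributions:", turn_attr)
    print("token attributions within that turn:", token_attr)
```

Descending from turns to tokens keeps the number of model calls manageable while still yielding evidence at the granularity a teacher can inspect.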
Key facts
- The system is built on a fine-tuned LLM.
- It uses a hierarchical attribution method from explainable AI (XAI) to locate supporting evidence in the dialogue.
- The method outperformed baseline approaches in identifying supporting evidence.
- A user study with 22 pre-service teachers was conducted.
- Participants who received explanations reported higher trust.
- The system supports multi-turn dialogue.
- It generates natural-language explanations for each recommendation (see the prompt sketch after this list).
- The goal is to improve transparency and trust in AI-assisted diagnostics.
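As a complement, here is a hypothetical sketch of how attributed evidence might feed the natural-language explanation step: the top-scoring turns are folded into a prompt asking the model to justify the suggestion. The template, its wording, and the function name are assumptions, not the paper's actual pipeline.

```python
from typing import List

def explanation_prompt(suggestion: str, evidence_turns: List[str]) -> str:
    """Format attributed dialogue evidence into a prompt (hypothetical
    template) asking the model to explain its suggestion to a teacher."""
    evidence = "\n".join(f"- {turn}" for turn in evidence_turns)
    return (
        f"Suggested problem behavior: {suggestion}\n"
        f"Supporting dialogue evidence:\n{evidence}\n"
        "In plain language, explain to the teacher why this evidence "
        "supports the suggestion."
    )
```

Grounding the explanation prompt in the attributed turns ties each generated rationale to concrete dialogue evidence rather than leaving the model to justify its suggestion unconstrained.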