LLMs for Coding Healthcare Simulation Dialogue
A new arXiv preprint (2604.23255) explores using large language models to automate dialogue coding in healthcare simulation, balancing coding performance against processing time and environmental impact. The study builds on research showing that dialogue is central to team learning and that LLMs can approximate human coding via few-shot prompting. Whereas prior work optimized coding accuracy for research purposes, this study addresses the educational need for fast, accurate labeling of team dialogue. The paper proposes prompt designs that enable real-time feedback in simulation-based training, reducing reliance on labor-intensive manual qualitative coding.
Key facts
- arXiv preprint 2604.23255
- Focus on healthcare simulation dialogue
- Uses LLMs for automated coding
- Balances performance, time, and environmental impact
- Dialogue is central to team learning
- Few-shot prompting approximates human coding
- Aims for real-time feedback in training
- Reduces labor-intensive qualitative coding
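The few-shot prompting approach named above can be sketched as prompt construction: pair a handful of pre-coded example utterances with the utterance to be labeled, then send the assembled prompt to an LLM. The code scheme, example utterances, and function names below are illustrative assumptions, not taken from the paper; the actual model call is omitted.

```python
# Hypothetical few-shot prompt builder for coding team dialogue utterances.
# The codes and examples are invented for illustration only.

FEW_SHOT_EXAMPLES = [
    ("Blood pressure is dropping to 80 over 50.", "information_sharing"),
    ("Can you push one milligram of epinephrine?", "task_delegation"),
    ("Epinephrine one milligram given.", "closed_loop_confirmation"),
]

CODES = sorted({code for _, code in FEW_SHOT_EXAMPLES})

def build_coding_prompt(utterance: str) -> str:
    """Assemble a few-shot prompt asking an LLM to assign one code to an utterance."""
    lines = [
        "Assign exactly one code to the final utterance.",
        "Allowed codes: " + ", ".join(CODES),
        "",
    ]
    for text, code in FEW_SHOT_EXAMPLES:
        lines.append(f'Utterance: "{text}"\nCode: {code}')
    # The model is expected to complete the line after the final "Code:".
    lines.append(f'Utterance: "{utterance}"\nCode:')
    return "\n".join(lines)

prompt = build_coding_prompt("Starting chest compressions now.")
print(prompt)
```

In practice the returned string would be sent to an LLM API and the completion parsed back into one of the allowed codes; batching utterances and caching the shared example prefix are common ways to reduce the processing time and energy cost the study weighs.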
Entities
Institutions
- arXiv