ARTFEED — Contemporary Art Intelligence

AcuityBench: New Benchmark Tests Language Models on Medical Urgency

other · 2026-05-13

Researchers have introduced AcuityBench, a benchmark that assesses how effectively language models can determine the urgency of care from user medical presentations. Unlike prior health benchmarks that concentrate on narrow triage tasks or medical question answering, AcuityBench offers a broader evaluation by integrating five public datasets spanning user conversations, clinical vignettes, online forum discussions, and patient-portal messages. It harmonizes them under a standardized four-level acuity framework ranging from home monitoring to immediate emergency care. The benchmark includes 914 cases: 697 for standard accuracy assessment and 217 physician-confirmed ambiguous cases for uncertainty-aware evaluation. AcuityBench supports two task formats: explicit four-way classification in a QA setting and free-form conversational replies.
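The explicit classification format amounts to mapping each case to one of four acuity levels and scoring against a gold label. As a rough illustration, here is a minimal sketch of such scoring; the level names and case structure are hypothetical, not AcuityBench's actual schema:

```python
# Hypothetical sketch of scoring a four-way acuity classification task.
# Level names and the case dict layout are illustrative assumptions,
# not AcuityBench's published format.

ACUITY_LEVELS = ["home_monitoring", "routine_care", "urgent_care", "emergency"]

def score_classification(cases):
    """Return accuracy of predicted acuity levels against gold labels."""
    correct = sum(1 for c in cases if c["predicted"] == c["gold"])
    return correct / len(cases)

# Toy example: three cases, two predicted correctly.
cases = [
    {"gold": "emergency", "predicted": "emergency"},
    {"gold": "home_monitoring", "predicted": "routine_care"},
    {"gold": "urgent_care", "predicted": "urgent_care"},
]
print(round(score_classification(cases), 2))  # 0.67
```

The free-form conversational format would instead require judging whether a generated reply conveys the appropriate urgency, which is not reducible to a simple label match.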

Key facts

  • AcuityBench is a benchmark for evaluating language models on medical urgency identification.
  • It harmonizes five public datasets under a four-level acuity framework.
  • The benchmark contains 914 cases total: 697 consensus and 217 ambiguous.
  • It supports explicit classification and free-form conversational response formats.
  • The four acuity levels range from home monitoring to immediate emergency care.
  • Ambiguous cases are physician-confirmed for uncertainty-aware evaluation.
  • Existing benchmarks do not offer unified acuity evaluation across settings.
  • AcuityBench addresses gaps in medical AI evaluation.

Entities

Institutions

  • arXiv
