AI Outperforms Doctors in Diagnosing Real ER Patients
A study published April 30 in the journal Science found that OpenAI's o1 AI model outperformed two physicians in diagnosing real-world emergency room patients. The AI produced exact or near-exact diagnoses 67% of the time, compared with 50% and 55% for the two doctors. Researchers from Beth Israel Deaconess Medical Center and Harvard Medical School tested the model on 76 patients' records at three stages of care. Despite the results, experts caution that AI will not replace clinicians, citing its lack of moral reasoning and potential limitations with complex cases. Clinical trials to integrate AI into patient care are planned.
Key facts
- OpenAI's o1 AI model outperformed two doctors in diagnosing real ER patients.
- Study published April 30 in the journal Science.
- AI achieved 67% accuracy vs 50% and 55% for physicians.
- Tested on 76 Beth Israel patients' records at three care stages.
- Researchers from Beth Israel Deaconess Medical Center and Harvard Medical School.
- AI model is a preview version of OpenAI's o1.
- Experts caution that AI lacks moral reasoning and may not perform as well on larger, more complex caseloads.
- Clinical trials are planned to integrate AI into patient care.
Entities
Institutions
- OpenAI
- Beth Israel Deaconess Medical Center
- Harvard Medical School
- New England Journal of Medicine
- Science
- NPR
- The Guardian
- Science News
- CBC
- Oak Valley Health
Locations
- Boston
- United States
- Canada