Humans Can Detect AI Text with 87.6% Accuracy, Study Finds
A recent study disputes earlier findings that humans cannot differentiate between AI-generated and human-written text. Across 16 datasets spanning 9 languages and 9 domains, 19 annotators achieved an average detection accuracy of 87.6%, far above random chance. The study identifies marked differences between human and machine-generated text in concreteness, cultural nuance, and diversity, and shows that explicitly prompting on these differences bridges the gaps in over 50% of cases. Notably, humans do not consistently prefer human-written content, particularly when its origin is not disclosed. The researchers have publicly released their dataset, human labels, and annotator details.
Key facts
- Average human detection accuracy for AI text is 87.6% across 16 datasets.
- Study covers 9 languages and 9 domains.
- 19 annotators participated in the study.
- Key gaps between human and machine text: concreteness, cultural nuances, diversity.
- Explicit prompting on these differences bridges the gaps in over 50% of cases.
- Humans do not always prefer human-written text.
- The dataset, human labels, and annotator details are publicly released.
- Study challenges previous conclusions that human detection is no better than random.