Counterarguments in Writing Boost Critical Thinking, AI Judges Agree
A study posted to arXiv (2605.05353) tested whether writing counterarguments improves critical thinking in the age of generative AI. Thirty-six students in a university course each chose one of four thesis statements drawn from popular debates. Each write-up received three human assessments (two peer reviews and one from an experienced teacher) on a 5-point Likert scale against six established rubrics: focus, logic, content, style, correctness, and reference. After one submission was disqualified for irregularity, 35 were analyzed; six frontier LLMs also judged the same submissions using identical rubrics. The mixed-method design combined qualitative feedback with quantitative analysis. The results indicate that counterargument writing effectively fosters critical thinking even when assessed by AI, addressing the risks of cheating and cognitive offloading that come with GenAI.
Key facts
- Study published on arXiv with ID 2605.05353
- 36 students participated in the intervention
- 4 thesis statements from popular debates were used
- 6 established rubrics were applied: focus, logic, content, style, correctness, reference
- 3 human assessments per writeup: two peer reviews and one teacher
- 35 submissions were analyzed after disqualifying one for irregularity
- 6 frontier LLMs served as AI judges
- Mixed-method design included qualitative and quantitative analysis
- Counterarguments were found to enhance critical thinking
- Study addresses risks of cheating and cognitive offloading with GenAI
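The assessment setup above (multiple assessors scoring each write-up on a 5-point Likert scale across six rubrics) can be sketched as a small aggregation routine. This is an illustrative sketch, not the study's actual analysis code; the function name, data layout, and example scores are all assumptions.

```python
# Hypothetical sketch (not from the study): aggregating 5-point Likert
# ratings across the six rubrics, for one submission scored by several
# assessors (e.g. two peers and one teacher, or six LLM judges).
from statistics import mean

RUBRICS = ["focus", "logic", "content", "style", "correctness", "reference"]

def aggregate_scores(ratings):
    """ratings: one dict per assessor, mapping rubric -> 1..5 score.
    Returns per-rubric means and the overall mean across rubrics."""
    per_rubric = {r: mean(a[r] for a in ratings) for r in RUBRICS}
    overall = mean(per_rubric.values())
    return per_rubric, overall

# Example: three human assessors for one write-up (illustrative values)
human_ratings = [
    {"focus": 4, "logic": 5, "content": 4, "style": 3, "correctness": 4, "reference": 5},
    {"focus": 3, "logic": 4, "content": 4, "style": 4, "correctness": 3, "reference": 4},
    {"focus": 5, "logic": 4, "content": 5, "style": 4, "correctness": 4, "reference": 4},
]
per_rubric, overall = aggregate_scores(human_ratings)
```

In the study's design the same routine could be run separately over the human and the LLM ratings, letting the two assessor groups be compared rubric by rubric.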
Entities
Institutions
- arXiv