LLM-as-Reviewer Boosts Code Quality in Student Projects
A study of an LLM-based code reviewer integrated into GitHub pull requests for capstone software engineering projects found improved code quality and stronger self-regulated learning. Across two cohorts (2023–2024, 100+ students), adoption rose from 50% of teams in 2023 to 93% in 2024, and iterative activity more than doubled (1,176 vs. 581 PRs). Failed AI review attempts dropped from 227 to zero after tool refinements. Responsiveness (commits following AI reviews) remained stable at roughly 32–33%. The mixed-methods analysis drew on GitHub data, reflective reports, and surveys.
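The summary does not say how the reviewer was wired into pull requests. One plausible mechanism (an assumption, not the paper's documented setup) is posting the LLM's feedback as a PR review through GitHub's REST API endpoint `POST /repos/{owner}/{repo}/pulls/{pull_number}/reviews`. A minimal sketch of building that request, with all repository names hypothetical:

```python
import json

GITHUB_API = "https://api.github.com"

def review_request(owner: str, repo: str, pr_number: int, feedback: str):
    """Build the URL and JSON payload for posting LLM-generated feedback
    as a non-blocking PR review comment via GitHub's REST API.
    This is an illustrative assumption about the integration, not the
    study's actual tooling."""
    url = f"{GITHUB_API}/repos/{owner}/{repo}/pulls/{pr_number}/reviews"
    # event "COMMENT" leaves feedback without approving or requesting changes,
    # keeping the human reviewer in the loop.
    payload = {"body": feedback, "event": "COMMENT"}
    return url, json.dumps(payload)
```

The request would then be sent with any HTTP client using a repository-scoped token; keeping the review event as `COMMENT` matches the study's human-in-the-loop framing, since the AI never approves or blocks a merge on its own.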
Key facts
- LLM-as-reviewer integrated into GitHub pull requests
- Two cohorts: 2023 and 2024, over 100 students
- 2024 cohort produced 1,176 PRs vs. 581 in 2023
- Failed AI review attempts dropped from 227 to zero after refinements
- Adoption: 93% of teams in 2024 vs. 50% in 2023
- Responsiveness: 32% (2023) and 33% (2024)
- Mixed-methods design: GitHub data, reflective reports, survey
- Human-in-the-loop approach used
Source: arXiv