ARTFEED — Contemporary Art Intelligence

RbtAct: Using Rebuttals to Train AI for Actionable Peer Review

ai-technology · 2026-04-30

Researchers have introduced RbtAct, a method that uses peer-review rebuttals as implicit supervision to train large language models (LLMs) to generate actionable review feedback. The technique addresses a common failing of AI-generated reviews: they tend to be shallow and offer little specific guidance. RbtAct frames a new task, perspective-conditioned segment-level review feedback generation, in which the model produces a focused comment given the full paper and a specified perspective. Because rebuttals reveal which reviewer remarks actually prompted changes, the system can optimize its feedback generator directly for actionability. The paper is available on arXiv (2603.09723) and aims to raise the quality of automated peer review.

Key facts

  • RbtAct uses rebuttals as supervision for actionable review generation.
  • LLMs are increasingly used to draft peer-review reports.
  • Many AI-generated reviews are superficial and not actionable.
  • The method targets perspective-conditioned segment-level review feedback generation.
  • Rebuttals show which comments led to concrete revisions or plans.
  • The approach directly optimizes a feedback generator for actionability.
  • The paper is available on arXiv with ID 2603.09723.
  • The work addresses a gap in automated peer review quality.
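The supervision signal described above can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the paper's actual pipeline: the `REVISION_MARKERS` phrases, the `ReviewComment` structure, and the weak-labeling rule are all assumptions made for illustration. The idea is simply that a rebuttal reporting a concrete revision marks its comment as actionable, and only those comments become training targets.

```python
from dataclasses import dataclass

# Hypothetical markers of a concrete revision in an author rebuttal;
# RbtAct's real signal comes from analyzing actual rebuttal text.
REVISION_MARKERS = ("we have revised", "we added", "we will update", "we have updated")

@dataclass
class ReviewComment:
    perspective: str   # e.g. "methodology" or "clarity" (assumed perspective labels)
    text: str          # the reviewer's comment
    rebuttal: str      # the author response to this comment

def is_actionable(comment: ReviewComment) -> bool:
    """Weak label: the rebuttal promises or reports a concrete change."""
    reply = comment.rebuttal.lower()
    return any(marker in reply for marker in REVISION_MARKERS)

def build_training_pairs(paper_text: str, comments: list[ReviewComment]) -> list[dict]:
    """Pair (paper, perspective) inputs with actionable comments as targets,
    discarding comments whose rebuttals show no resulting revision."""
    return [
        {"input": (paper_text, c.perspective), "target": c.text}
        for c in comments
        if is_actionable(c)
    ]
```

Under this sketch, a comment answered with "we have revised Section 3 accordingly" survives filtering, while one answered with "we respectfully disagree" does not, so the generator is trained only on feedback that demonstrably led to changes.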

Entities

Institutions

  • arXiv

Sources