ARTFEED — Contemporary Art Intelligence

Auto-Rubric as Reward: Explicit Multimodal Generative Criteria

ai-technology · 2026-05-12

A new framework called Auto-Rubric as Reward (ARR) aligns multimodal generative models with human preferences. Unlike conventional RLHF methods, which reduce preferences to scalar or pairwise labels, ARR externalizes a vision-language model's (VLM's) internalized preference knowledge into explicit, prompt-specific rubrics before any pairwise comparison. This converts implicit preferences into independently verifiable quality dimensions, mitigating reward hacking and the opacity of parametric reward proxies. The approach reframes reward modeling from implicit weight optimization to explicit criteria-based decomposition, with the goal of generating reliable, scalable, and data-efficient rubrics. The paper is published on arXiv under ID 2605.08354.

Key facts

  • Auto-Rubric as Reward (ARR) is a new framework for aligning multimodal generative models with human preferences.
  • ARR reframes reward modeling from implicit weight optimization to explicit criteria-based decomposition.
  • It externalizes a VLM's internalized preference knowledge into prompt-specific rubrics before pairwise comparison.
  • The approach converts implicit preferences into independently verifiable quality dimensions.
  • It addresses vulnerabilities to reward hacking and opaque parametric proxies in RLHF.
  • The paper is available on arXiv with ID 2605.08354.
  • ARR aims to generate rubrics that are reliable, scalable, and data-efficient.
  • The framework contrasts with prior Rubrics-as-Reward (RaR) methods, which struggle to generate such rubrics.
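The criteria-based decomposition described above can be illustrated with a minimal sketch. This is not the paper's implementation: the criterion names, weights, and scoring interface are hypothetical, and in ARR the rubric would be generated per prompt by a VLM rather than hard-coded. The sketch only shows the general shape of the idea, that each candidate is scored on explicit, independently checkable dimensions and a preference is derived from the aggregated rubric score instead of an opaque scalar reward.

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    """One explicit, verifiable quality dimension in a prompt-specific rubric."""
    name: str
    weight: float  # relative importance within the rubric

def rubric_reward(scores: dict[str, float], rubric: list[Criterion]) -> float:
    """Aggregate per-criterion scores (each in [0, 1]) into a single reward."""
    return sum(c.weight * scores[c.name] for c in rubric)

def prefer(scores_a: dict[str, float], scores_b: dict[str, float],
           rubric: list[Criterion]) -> str:
    """Pairwise comparison driven by rubric scores, not an implicit reward model."""
    return "A" if rubric_reward(scores_a, rubric) >= rubric_reward(scores_b, rubric) else "B"

# Hypothetical prompt-specific rubric and per-criterion scores (illustrative only;
# in ARR these would come from the VLM, not be specified by hand).
rubric = [
    Criterion("prompt_fidelity", 0.5),
    Criterion("visual_coherence", 0.3),
    Criterion("detail", 0.2),
]
candidate_a = {"prompt_fidelity": 0.9, "visual_coherence": 0.7, "detail": 0.6}
candidate_b = {"prompt_fidelity": 0.6, "visual_coherence": 0.9, "detail": 0.8}

print(prefer(candidate_a, candidate_b, rubric))  # → A
```

Because each criterion is scored separately, a judge can audit exactly which dimension drove the preference, which is what makes the rubric "independently verifiable" in contrast to a single learned scalar.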

Entities

Institutions

  • arXiv

Sources