ARTFEED — Contemporary Art Intelligence

Conditional Attribute Transformers for Sequence-Level Control

ai-technology · 2026-05-16

arXiv preprint 2605.14004 introduces Conditional Attribute Transformers, a method that extends autoregressive generative models to estimate sequence-level attributes without modifying the input or requiring expensive sampling. The model jointly predicts the next token and the attribute value conditional on each candidate token, so a single forward pass yields per-token credit assignment (identifying which tokens drive an attribute's value) and counterfactual analysis of alternative token choices. This addresses a limitation of standard next-token prediction, which tends to overfit local patterns and underfit global structure. The work is relevant for applications that need attribute estimation or control over generated sequences.
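The abstract does not specify the architecture, but the core idea of predicting an attribute conditional on each candidate next token can be illustrated with a toy sketch. Everything here (the vocabulary, the weight matrices `W_tok` and `W_attr`, and the linear heads themselves) is an illustrative assumption, not the paper's actual method:

```python
import math

# Toy vocabulary; purely illustrative.
VOCAB = ["cat", "sat", "mat"]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

def joint_head(hidden, W_tok, W_attr):
    """One forward pass over a hidden state: returns the next-token
    distribution AND, for each candidate token v, an attribute
    estimate conditional on choosing v (hypothetical linear heads)."""
    logits = [sum(h * w for h, w in zip(hidden, col)) for col in W_tok]
    p_next = softmax(logits)
    # Attribute value conditional on each possible token choice.
    attr_given_tok = [sum(h * w for h, w in zip(hidden, col)) for col in W_attr]
    # Marginal attribute estimate: expectation over token choices.
    attr_marginal = sum(p * a for p, a in zip(p_next, attr_given_tok))
    return p_next, attr_given_tok, attr_marginal

# Made-up hidden state and weights for demonstration only.
hidden = [0.5, -0.2, 0.1]
W_tok = [[1.0, 0.0, 0.5], [0.2, 1.0, -0.3], [-0.5, 0.4, 1.0]]
W_attr = [[0.3, 0.1, 0.0], [0.0, 0.5, 0.2], [0.7, -0.1, 0.4]]

p, a_cond, a_marg = joint_head(hidden, W_tok, W_attr)

# Counterfactual query in the same pass: how would the attribute
# estimate change if token 2 were chosen instead of token 0?
delta = a_cond[2] - a_cond[0]
```

Because the attribute estimates are conditioned on each token choice, comparing entries of `a_cond` gives the counterfactual effect of swapping one token for another without any extra sampling, which is the property the abstract highlights.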

Key facts

  • arXiv:2605.14004
  • Method: Conditional Attribute Transformers
  • Jointly estimates next-token probability and attribute value
  • Enables per-token credit assignment
  • Enables counterfactual analysis
  • Single forward pass, no input modification
  • Addresses next-token prediction's tendency to overfit local patterns and underfit global structure
  • Published on arXiv

Entities

Institutions

  • arXiv

Sources