ARTFEED — Contemporary Art Intelligence

HLV as Selbstzweck: Preserving Human Pluralism in NLP Post-Training

publication · 2026-04-24

A recent position paper argues that Human Label Variation (HLV)—legitimate disagreement among annotators that reflects the diversity of human perspectives—should be treated as a Selbstzweck (an end in itself) in natural language processing (NLP). The argument is especially relevant in the era of large language models (LLMs) and post-training techniques such as alignment from human feedback. Long dismissed as mere noise, HLV is increasingly recognized as a valuable signal for model robustness. Yet current preference-learning datasets typically collapse multiple annotations into a single label, erasing diverse viewpoints in favor of a manufactured consensus. The authors contend that preserving HLV is essential both for pluralistic alignment and for evaluating sociotechnical safety, i.e., assessing model behavior in the context of human interaction and societal impact. The paper is available on arXiv under identifier 2510.12817.
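The contrast between collapsing annotations and preserving them can be made concrete. The sketch below (illustrative only; the function names and the three-annotator example are not from the paper) compares majority-vote aggregation, which discards minority views, with keeping the empirical label distribution as a soft label:

```python
from collections import Counter

def majority_vote(annotations):
    """Collapse multiple annotations into a single label -- the common
    practice the paper criticizes. Minority views are simply discarded."""
    return Counter(annotations).most_common(1)[0][0]

def label_distribution(annotations):
    """Preserve HLV as a soft label: the empirical distribution over
    labels, keeping every annotator's judgment in the training signal."""
    counts = Counter(annotations)
    total = sum(counts.values())
    return {label: count / total for label, count in counts.items()}

# Hypothetical example: three annotators rate the same model response.
annotations = ["helpful", "helpful", "harmful"]
print(majority_vote(annotations))       # the dissenting "harmful" view is lost
print(label_distribution(annotations))  # both views survive as probabilities
```

A soft-label distribution like this can feed directly into a cross-entropy objective, so downstream training need not pretend a consensus existed.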

Key facts

  • HLV refers to legitimate disagreement in annotation reflecting human perspective diversity.
  • HLV was long treated as noise in NLP but is now seen as a signal for model robustness.
  • The paper focuses on the era of LLMs and post-training methods like human feedback-based alignment.
  • Current preference-learning datasets collapse multiple annotations into a single label.
  • Preserving HLV is argued as necessary for pluralistic alignment and sociotechnical safety evaluation.
  • The paper calls for treating HLV as a Selbstzweck (an end in itself).
  • The paper is a position paper published on arXiv.
  • arXiv identifier: 2510.12817.

Entities

Institutions

  • arXiv

Sources