ARTFEED — Contemporary Art Intelligence

QuickLAP: Bayesian Framework Fuses Physical and Language Feedback for Robot Learning

ai-technology · 2026-05-14

Researchers have introduced QuickLAP (Quick Language-Action Preference learning), a Bayesian framework that merges physical corrections with natural-language feedback for real-time reward-function inference in semi-autonomous systems. QuickLAP treats language as a probabilistic observation of the user's latent preferences, which helps identify which reward features matter and disambiguates physical corrections. It uses Large Language Models (LLMs) to extract reward-feature attention masks and preference shifts from free-form utterances, then combines these signals with physical feedback through a closed-form update rule, enabling fast, real-time reward learning that remains robust to ambiguous feedback. The framework was evaluated in a semi-autonomous driving simulator. It addresses the complementary limitations of each modality: physical corrections are grounded in the world but ambiguous in intent, while language expresses high-level goals but lacks physical grounding.
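The article does not give the paper's actual update rule, but the described fusion, a Gaussian belief over reward weights updated in closed form from a physical correction gated by a language-derived attention mask and preference shift, can be sketched as a standard Kalman-style measurement update. All variable names and the observation model below are illustrative assumptions, not the authors' equations.

```python
import numpy as np

def fuse_feedback(mu, Sigma, correction, mask, shift, obs_noise=0.1):
    """Hypothetical closed-form Bayesian update over reward weights.

    mu, Sigma : Gaussian belief over per-feature reward weights w.
    correction: feature-space displacement implied by a physical correction.
    mask      : language-derived attention mask (1 = feature the utterance
                referred to, 0 = ignore).
    shift     : language-derived preference shift per feature.
    """
    n = len(mu)
    # Gate the physical correction by the attention mask so an ambiguous
    # nudge only updates the features the language singled out.
    obs = mask * (correction + shift)
    H = np.diag(mask)                    # observation model: masked identity
    R = obs_noise * np.eye(n)            # observation noise covariance
    S = H @ Sigma @ H.T + R              # innovation covariance
    K = Sigma @ H.T @ np.linalg.inv(S)   # Kalman gain
    mu_new = mu + K @ (obs - H @ mu)     # posterior mean
    Sigma_new = (np.eye(n) - K @ H) @ Sigma  # posterior covariance
    return mu_new, Sigma_new
```

Under this sketch, features outside the mask keep their prior mean and variance, while masked features move toward the observed correction and their uncertainty shrinks, one plausible way a single linear-Gaussian step yields the "closed-form update rule" the summary mentions.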

Key facts

  • QuickLAP fuses physical and language feedback to infer reward functions in real time.
  • Language is treated as a probabilistic observation over user latent preferences.
  • LLMs extract reward feature attention masks and preference shifts from free-form utterances.
  • Physical feedback is integrated via a closed-form update rule.
  • Tested in a semi-autonomous driving simulator.
  • Handles ambiguous feedback robustly.
  • Physical corrections are grounded but ambiguous in intent.
  • Language expresses high-level goals but lacks physical grounding.
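The LLM extraction step above maps a free-form utterance to a per-feature attention mask and preference shift. As a minimal stand-in for that step (the feature names, keyword rules, and output format are illustrative assumptions, not the paper's prompt or schema), a toy parser might look like:

```python
# Toy stand-in for the LLM extraction step: map a free-form utterance to a
# per-feature attention mask and preference shift. Feature names and keyword
# rules are hypothetical; the real system queries an LLM instead.
FEATURES = ["speed", "lane_center", "follow_distance"]

RULES = {
    "speed": [("slow down", -1.0), ("speed up", 1.0)],
    "lane_center": [("stay in the lane", 1.0)],
    "follow_distance": [("too close", 1.0), ("back off", 1.0)],
}

def parse_utterance(utterance):
    """Return (mask, shift): mask[i]=1 if feature i was referenced,
    shift[i] is the signed preference change the utterance implies."""
    text = utterance.lower()
    mask = [0.0] * len(FEATURES)
    shift = [0.0] * len(FEATURES)
    for i, feature in enumerate(FEATURES):
        for phrase, direction in RULES[feature]:
            if phrase in text:
                mask[i] = 1.0
                shift[i] = direction
    return mask, shift
```

For example, "Slow down, you're too close!" would touch only the speed and follow-distance features, leaving lane-keeping untouched, which is exactly the gating that lets an ambiguous physical correction be attributed to the right reward features.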

Entities

Institutions

  • arXiv

Sources