ARTFEED — Contemporary Art Intelligence

Real-Time Evaluation of Autonomous Systems under Adversarial Attacks

other · 2026-05-07

This study presents a framework for evaluating autonomous driving policies under adversarial attacks using real-world intersection data, addressing the limitations of purely simulated evaluation. It trains and compares three trajectory-learning approaches: Behavior Cloning (BC) with an MLP, BC with an object-tokenized Transformer, and inverse reinforcement learning (IRL) in a Generative Adversarial Imitation Learning (GAIL) setup. Models are evaluated with Average Displacement Error (ADE) and Final Displacement Error (FDE). The study finds that real-world data exposes structural inconsistencies, supervision constraints, and state-representation effects that simulation fails to capture. Data collection follows a controlled data contract, and the evaluation focuses on inference-time robustness.
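ADE and FDE are standard trajectory-prediction metrics: ADE averages the pointwise Euclidean error over the whole prediction horizon, while FDE measures the error at the final timestep only. A minimal sketch (the array shapes and the toy trajectories here are illustrative assumptions, not from the paper):

```python
import numpy as np

def ade_fde(pred, gt):
    """Average and Final Displacement Error between a predicted and a
    ground-truth trajectory, each of shape (T, 2) in metres."""
    # Per-timestep Euclidean distance between predicted and true positions.
    dists = np.linalg.norm(pred - gt, axis=-1)
    ade = dists.mean()   # mean error over the whole horizon
    fde = dists[-1]      # error at the final timestep
    return ade, fde

# Toy example: straight-line ground truth vs. a laterally offset prediction.
gt = np.stack([np.arange(5, dtype=float), np.zeros(5)], axis=1)
pred = gt + np.array([0.0, 0.3])   # constant 0.3 m lateral offset
ade, fde = ade_fde(pred, gt)       # both equal 0.3 for a constant offset
```

Because the offset is constant here, ADE and FDE coincide; in practice FDE is usually the larger of the two, since errors compound toward the end of the horizon.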

Key facts

  • Framework uses real-world intersection driving data for adversarial robustness evaluation.
  • Compares MLP-based BC, Transformer-based BC, and GAIL-based IRL.
  • Evaluation metrics: ADE and FDE.
  • Real-world data captures structural inconsistencies, supervision constraints, and state-representation effects.
  • Controlled data contract used for data collection.
  • Focus on inference-time robustness.
  • Simulation fails to capture real-world complexities.
  • Paper available on arXiv with ID 2605.03491.
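Of the three compared approaches, behavior cloning is the simplest: it reduces imitation to supervised regression from observed states to expert actions. A minimal sketch of that reduction, assuming a hypothetical linear expert and a least-squares fit standing in for gradient-descent training of an MLP (none of these specifics are from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical expert demonstrations: state = (x, y, heading, speed),
# action = (steer, accel), generated by an unknown linear expert policy.
W_true = rng.normal(size=(4, 2))
states = rng.normal(size=(500, 4))
actions = states @ W_true + 0.01 * rng.normal(size=(500, 2))

# Behavior cloning: fit a policy to reproduce expert actions.
W_hat, *_ = np.linalg.lstsq(states, actions, rcond=None)

# On fresh states, the cloned policy should track the expert closely.
test_states = rng.normal(size=(10, 4))
err = np.abs(test_states @ W_hat - test_states @ W_true).max()
```

GAIL-based IRL differs in that it never regresses on actions directly; instead a discriminator scores state-action pairs as expert-like or not, and the policy is trained to fool it, which is what makes its inference-time behavior under adversarial perturbation interesting to compare against BC.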

Entities

Institutions

  • arXiv
