ARTFEED — Contemporary Art Intelligence

AFFormer: Transformer-Based V2X Cooperative Perception Under Channel Impairments

other · 2026-05-06

Researchers have introduced AFFormer, an Adaptive Feature Fusion Transformer designed to improve V2X cooperative perception and make 3D object detection robust to channel impairments such as noise, fading, and interference. The framework comprises three components: Multi-Agent and Temporal Aggregation, which integrates context across agents and time; Dual Spatial Attention, which models spatial dependencies; and Uncertainty-Guided Fusion, which handles corrupted features. The goal is to make intelligent transportation systems reliable under real-world communication conditions.
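The article does not detail how Uncertainty-Guided Fusion is implemented; one common approach, sketched below as an illustration only, is to down-weight each agent's features according to a per-agent uncertainty estimate (here via a softmax over negated uncertainties, an assumption, not the paper's stated mechanism):

```python
import numpy as np

def uncertainty_guided_fusion(features, uncertainties):
    """Fuse per-agent feature vectors, down-weighting agents whose
    features are likely corrupted (high predicted uncertainty).

    features:      (n_agents, d) array of agent feature vectors
    uncertainties: (n_agents,) nonnegative uncertainty estimates

    Note: the weighting scheme here is an illustrative assumption,
    not the mechanism described in the AFFormer paper.
    """
    # Softmax over negated uncertainty: low uncertainty -> high weight.
    logits = -np.asarray(uncertainties, dtype=float)
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()
    # Weighted sum collapses the agent axis into one fused vector.
    return weights @ np.asarray(features, dtype=float)

# Ego agent with a clean link vs. a remote agent whose features
# crossed a noisy channel: fusion leans toward the ego agent.
feats = [[1.0, 0.0], [0.0, 1.0]]
fused = uncertainty_guided_fusion(feats, [0.1, 3.0])
```

With these inputs the high-uncertainty remote agent contributes only a small share of the fused feature, which is the qualitative behavior the article attributes to the module.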

Key facts

  • AFFormer is an Adaptive Feature Fusion Transformer for V2X cooperative perception.
  • It addresses channel impairments such as noise, fading, and interference.
  • Three key modules: Multi-Agent and Temporal Aggregation, Dual Spatial Attention, Uncertainty-Guided Fusion.
  • The framework improves robustness of 3D object detection for autonomous vehicles.
  • Published on arXiv with ID 2605.01888.

Entities

Institutions

  • arXiv

Sources