New Research Demonstrates Remote Rowhammer Attacks via Federated Learning Clients
A recently disclosed security vulnerability in Federated Learning (FL) systems allows remote Rowhammer attacks on the central server without any backdoor access, as detailed in arXiv preprint 2505.06335v2. The work shows that attackers can manipulate compromised FL clients to induce fast, repetitive memory updates on the server. Using reinforcement learning (RL), an attacker can learn which client behaviors trigger the required memory-access patterns. This challenges the assumption that a large client population effectively shields the server from attack.

Most current FL security efforts focus on protecting data privacy at client sites or during communication, often overlooking server-side security. Federated Learning enables collaboration among globally distributed participants, which is crucial for training advanced AI such as Large Language Models (LLMs) on diverse datasets. The attack exploits this very design: clients exert limited but repeated remote influence over server memory through their gradient updates. This discovery marks the first demonstration of a remote Rowhammer attack launched without direct server access, highlighting a serious security gap in distributed AI training systems.
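For context, Rowhammer works by activating DRAM rows adjacent to a victim row so rapidly that charge leaks from the victim's cells before the periodic refresh restores it. The toy model below sketches only that mechanism; all constants and names are invented for illustration and are not real hardware parameters or anything from the paper.

```python
# Toy DRAM disturbance model. Repeated activations of the rows adjacent
# to a victim row accumulate "disturbance"; if it crosses FLIP_THRESHOLD
# before a periodic refresh resets it, a bit in the victim row flips.
# All constants are illustrative, not real DRAM parameters.
VICTIM_ROW = 10
FLIP_THRESHOLD = 50_000     # aggressor activations needed for a flip
REFRESH_INTERVAL = 64_000   # accesses between refresh operations

def bit_flips(access_trace):
    """Return True if the victim bit flips under this access trace."""
    disturbance = 0
    for t, row in enumerate(access_trace, start=1):
        if row in (VICTIM_ROW - 1, VICTIM_ROW + 1):  # aggressor rows
            disturbance += 1
        if t % REFRESH_INTERVAL == 0:                # refresh restores charge
            disturbance = 0
        if disturbance >= FLIP_THRESHOLD:
            return True
    return False

# Hammering the two aggressor rows back-to-back flips the bit,
hammered = bit_flips([9, 11] * 30_000)
# while the same number of accesses spread across many rows does not.
scattered = bit_flips(list(range(60_000)))
print(hammered, scattered)   # True False
```

This is why the attack hinges on *high-frequency, repetitive* updates: scattered or slow accesses are neutralized by refresh, while concentrated hammering of adjacent rows is not.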
Key facts
- Research demonstrates remote Rowhammer attacks via Federated Learning clients
- Attackers use reinforcement learning to exploit clients for server memory attacks
- Method requires no backdoor access to the central server
- Vulnerability documented in arXiv preprint 2505.06335v2
- Federated Learning enables global AI training across diverse data sources
- FL security traditionally focuses on client privacy and communication channels
- Attack exploits high-frequency repetitive memory updates on servers
- This represents the first remote Rowhammer attack initiated without direct server access
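The server-side write pattern the facts above describe can be sketched with a toy federated-averaging loop. This is a hypothetical simplification: the function names, the in-place buffer update, and the update values are assumptions for illustration, not the paper's code, and no real bit flips occur.

```python
import numpy as np

# Toy FL server: one fixed parameter buffer that every aggregation round
# rewrites in place. A client submitting updates at high frequency forces
# repeated writes to the same server memory region -- the kind of access
# pattern Rowhammer requires. (Illustrative only; no real bit flips.)
server_params = np.zeros(8)
writes_to_buffer = 0

def aggregation_round(updates):
    """Average client updates and apply them to the shared buffer."""
    global writes_to_buffer
    server_params[:] = server_params + np.mean(updates, axis=0)
    writes_to_buffer += 1

malicious_update = 1e-6 * np.ones(8)   # tiny, benign-looking update...
for _ in range(5_000):                 # ...submitted very frequently
    aggregation_round([malicious_update])

print(writes_to_buffer)   # 5000 rewrites of the same server buffer
```

The point of the sketch is that a client never touches server memory directly; it merely sends updates the protocol obliges the server to apply, and the cadence of those updates is what the attacker controls.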