LLM Unlearning Defense Against AR-LLM Social Engineering Attacks
A recent study published on arXiv (2604.23141) introduces UNSEEN, a cross-stack LLM unlearning defense aimed at countering social engineering attacks that exploit AR-LLMs. In these attacks, exemplified by SEAR, augmented reality glasses capture a target's image and voice; LLMs then identify the target, build a social profile, and suggest conversational strategies for establishing trust and executing phishing attempts. The authors contend that existing defenses, such as role-based access control and data-flow monitoring, are inadequate for the integrated AR-LLM landscape because of embedded AR devices and the opaque nature of LLM inference. They advocate shifting from human-centric measures, such as legislation and user education, to enforceable vendor policies and platform-level protections.
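Machine unlearning here means making a trained model behave as if specific data (e.g., a captured person's identity) had never been in its training set. The paper's UNSEEN mechanism is not reproduced below; as a minimal illustration of the general idea, this toy sketch trains a 1-D logistic-regression classifier and then applies gradient ascent on a designated forget set, a common baseline unlearning technique. All data, hyperparameters, and names are invented for illustration.

```python
import numpy as np

# Toy gradient-ascent unlearning sketch (a generic baseline,
# NOT the paper's UNSEEN mechanism). All data is synthetic.
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(-2, 1, 100), rng.normal(2, 1, 100)])
y = np.concatenate([np.zeros(100), np.ones(100)])

def sigmoid(z):
    # Clip to avoid overflow as parameters grow large during unlearning.
    return 1.0 / (1.0 + np.exp(-np.clip(z, -60, 60)))

def grads(w, b, X, y):
    # Gradient of mean cross-entropy loss for 1-D logistic regression.
    err = sigmoid(w * X + b) - y
    return np.mean(err * X), np.mean(err)

# 1) Train on everything (gradient descent).
w, b = 0.0, 0.0
for _ in range(500):
    gw, gb = grads(w, b, X, y)
    w, b = w - 0.1 * gw, b - 0.1 * gb

forget_X, forget_y = X[100:], y[100:]   # "forget set": all class-1 points
predict = lambda xs: sigmoid(w * xs + b) > 0.5
acc_before = np.mean(predict(forget_X) == forget_y)

# 2) Unlearn: ASCEND the loss on the forget set only, degrading the
#    model's knowledge of those examples and nothing else.
for _ in range(2000):
    gw, gb = grads(w, b, forget_X, forget_y)
    w, b = w + 0.5 * gw, b + 0.5 * gb

acc_after = np.mean(predict(forget_X) == forget_y)
print(acc_before, acc_after)  # accuracy on the forget set collapses
```

After unlearning, the model can no longer classify the forget-set examples it once handled well, which is the behavioral signature an unlearning defense would want for, say, a revoked identity.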
Key facts
- Paper arXiv:2604.23141 proposes UNSEEN defense.
- UNSEEN is a cross-stack LLM unlearning defense.
- Targets AR-LLM-based social engineering attacks like SEAR.
- Attacker uses AR glasses to capture the target's image and voice.
- LLM identifies the target and generates a social profile.
- LLM agents apply social engineering strategies to suggest conversational moves.
- Existing defenses such as role-based access control are inadequate in this setting.
- Paper advocates for enforceable vendor policies over human-centric measures.
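The attack chain in the facts above can be sketched as a three-stage pipeline. Everything below is a hypothetical mock for illustration (stubbed identity lookup, canned profile data, no real models); none of the names come from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class Capture:
    face_embedding: tuple   # stand-in for the AR-glasses image capture
    voice_sample: str       # stand-in for recorded audio

@dataclass
class Profile:
    name: str
    interests: list = field(default_factory=list)

def identify(capture, known_people):
    # Stage 1: a vision/LLM model matches the capture to an identity
    # (mocked here as a dictionary lookup on the embedding).
    return known_people.get(capture.face_embedding)

def build_profile(identity):
    # Stage 2: an LLM aggregates public data into a social profile
    # (stubbed with canned interests).
    return Profile(name=identity, interests=["hiking", "photography"])

def suggest_opener(profile):
    # Stage 3: an LLM agent proposes a trust-building conversational move.
    return f"Ask {profile.name} about {profile.interests[0]}."

known = {(0.12, 0.98): "Alice"}
cap = Capture(face_embedding=(0.12, 0.98), voice_sample="hello")
who = identify(cap, known)
opener = suggest_opener(build_profile(who))
print(opener)  # → Ask Alice about hiking.
```

Unlearning-style defenses of the kind the paper proposes aim to break this chain at the identification and profiling stages, so that a revoked or protected identity is simply not recoverable by the model.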
Entities
Venues
- arXiv