ARTFEED — Contemporary Art Intelligence

PhySE Framework Analyzes AR-LLM Social Engineering Threats

ai-technology · 2026-04-29

A recent study introduces PhySE, a psychological framework for assessing and mitigating AR-LLM-based Social Engineering (AR-LLM-SE) attacks. In these attacks, exemplified by the SEAR system, an attacker wears Augmented Reality (AR) glasses to capture a target's visual and auditory information. A Large Language Model (LLM) then analyzes this data to identify the person and build a detailed social profile, after which LLM-powered agents apply social engineering techniques, feeding the attacker real-time dialogue suggestions to build rapport and carry out phishing or other malicious actions. The paper identifies two bottlenecks that limit AR-LLM-SE in practice: cold-start personalization, which delays effective tailoring in early interactions, and static attack strategies that rely on fixed stages. PhySE addresses both by incorporating psychological principles to improve personalization and adaptability. The research is available on arXiv under identifier 2604.23148.
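To make the two bottlenecks concrete, the pipeline described above can be sketched as a toy model: a profile whose confidence accumulates slowly across observations (cold start), and a prompt generator that marches through fixed stages regardless of how the target responds (static strategy). All class names, stage labels, and thresholds here are illustrative assumptions, not details from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class Observation:
    """One visual/auditory capture from the AR glasses (hypothetical)."""
    face_seen: bool
    utterance: str

@dataclass
class SocialProfile:
    """Profile the LLM accumulates about the target (hypothetical)."""
    utterances: list = field(default_factory=list)
    confidence: float = 0.0  # grows as evidence accumulates

    def update(self, obs: Observation) -> None:
        if obs.utterance:
            self.utterances.append(obs.utterance)
        # Cold-start bottleneck: confidence rises only gradually,
        # so early suggestions remain generic.
        self.confidence = min(1.0, self.confidence + 0.2)

# Static-strategy bottleneck: the same fixed stages are traversed
# in order, independent of the target's actual responses.
STAGES = ["build_rapport", "establish_trust", "deliver_phish"]

def suggest_prompt(profile: SocialProfile, turn: int) -> str:
    """Return the agent's dialogue suggestion for a given turn."""
    stage = STAGES[min(turn, len(STAGES) - 1)]
    if profile.confidence < 0.5:
        return f"[{stage}] generic opener (profile still cold)"
    return f"[{stage}] personalized line using {len(profile.utterances)} cues"
```

In this sketch the first few turns always produce generic output because the profile has not warmed up, and the stage sequence never adapts; PhySE's stated goal is to relax exactly these two limitations.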

Key facts

  • PhySE is a psychological framework for assessing and mitigating AR-LLM social engineering attacks.
  • AR-LLM-SE attacks use AR glasses to capture visual and vocal data.
  • An LLM analyzes data to identify individuals and generate social profiles.
  • LLM-powered agents provide real-time conversation suggestions for social engineering.
  • Two bottlenecks: cold-start personalization and static attack strategies.
  • Cold-start personalization delays effective tailoring in early interactions.
  • Static attack strategies rely on fixed-stage approaches.
  • The paper is published on arXiv with ID 2604.23148.

Entities

Institutions

  • arXiv

Sources