Privacy-Preserving Federated Framework Uses Tiny LLMs for Log Anomaly Detection
A new framework, DP-FLogTinyLLM, enables collaborative log anomaly detection across multiple organizations without centralizing sensitive data. Modern distributed systems generate massive log volumes that are critical for identifying anomalies and cyber threats, yet existing methods, including recent LLM-based approaches, largely depend on centralized training, making them unsuitable for settings where logs cannot be shared due to privacy and security constraints. DP-FLogTinyLLM addresses this by integrating federated optimization with differential privacy: each client fine-tunes a parameter-efficient Tiny LLM locally using low-rank adaptation (LoRA), a design that keeps the framework scalable in resource-constrained environments. The result is a system in which organizations learn collaboratively while preserving data privacy. Empirical evaluation used the Thunderbird and BGL supercomputer log datasets.
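The core idea of combining federated optimization with differential privacy can be illustrated with a single aggregation round: each client's local update is norm-clipped, the clipped updates are averaged, and calibrated Gaussian noise is added before the global model is updated. This is a minimal sketch of a standard DP federated-averaging pattern, not the paper's exact aggregation rule; `clip_norm` and `noise_std` are assumed hyperparameters.

```python
import numpy as np

def dp_federated_round(global_params, client_updates,
                       clip_norm=1.0, noise_std=0.5, rng=None):
    """One illustrative DP-FedAvg round: clip, average, add Gaussian noise."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for upd in client_updates:
        norm = np.linalg.norm(upd)
        # Scale each client's update so its L2 norm is at most clip_norm.
        clipped.append(upd * min(1.0, clip_norm / (norm + 1e-12)))
    avg = np.mean(clipped, axis=0)
    # Noise proportional to the clipping bound bounds any one client's
    # influence, which is what yields the differential-privacy guarantee.
    noise = rng.normal(0.0, noise_std * clip_norm / len(client_updates),
                       size=avg.shape)
    return global_params + avg + noise

# Example: three clients send local parameter deltas; one outlier is clipped.
global_w = np.zeros(4)
updates = [np.array([0.2, -0.1, 0.3, 0.0]),
           np.array([5.0, 5.0, 5.0, 5.0]),   # norm 10, clipped to norm 1
           np.array([0.1, 0.1, -0.2, 0.05])]
new_w = dp_federated_round(global_w, updates)
```

With LoRA (below in the key facts), the vectors being aggregated would be the small adapter weights rather than the full model, which keeps communication cheap.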
Key facts
- DP-FLogTinyLLM is a privacy-preserving federated framework for log anomaly detection.
- It uses parameter-efficient large language models (LLMs).
- The framework integrates federated optimization with differential privacy.
- Low-rank adaptation (LoRA) is employed for efficient fine-tuning of Tiny LLMs at each client.
- It is designed for scalability in resource-constrained environments.
- Empirical results are based on the Thunderbird and BGL datasets.
- Modern distributed systems generate massive volumes of log data critical for detecting anomalies and cyber threats.
- Existing log anomaly detection methods largely rely on centralized training and are not suitable for environments where logs cannot be centralized due to privacy and security constraints.