SHAP Analysis Reveals Redundancy in Unsupervised Anomaly Detectors
A new methodology uses SHapley Additive exPlanations (SHAP) to characterize unsupervised anomaly detectors by their decision mechanisms. It quantifies how each model attributes importance to input features, yielding attribution profiles that can be compared to measure similarity between detectors. This addresses a known limitation of ensemble anomaly detection: many detectors rely on similar decision cues and therefore produce redundant anomaly scores, limiting the ensemble's potential gains. By identifying models that capture different types of irregularities, the method aims to build genuinely complementary ensembles. The research is published on arXiv (2602.00208) and targets the core challenges of unsupervised anomaly detection: diverse data distributions and the absence of labels.
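The core idea — per-detector attribution profiles compared for similarity — can be sketched in a few lines. The sketch below is illustrative, not the paper's code: it substitutes a simple occlusion-style attribution (mean absolute change in anomaly score when a feature is replaced by its training mean) for full SHAP values, and uses two toy detectors (z-score and Mahalanobis distance) as stand-ins for real models.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
X[:10] += 5  # inject a few anomalous points

def make_zscore_detector(X_train):
    # scores a point by its largest per-feature standardized deviation
    mu, sd = X_train.mean(axis=0), X_train.std(axis=0)
    return lambda X: np.abs((X - mu) / sd).max(axis=1)

def make_mahalanobis_detector(X_train):
    # scores a point by squared Mahalanobis distance to the training mean
    mu = X_train.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X_train.T))
    def score(X):
        d = X - mu
        return np.einsum("ij,jk,ik->i", d, cov_inv, d)
    return score

def attribution_profile(score_fn, X):
    """Mean |score change| per occluded feature, normalized to sum to 1.

    A crude stand-in for a SHAP-based profile: feature j's weight is how
    much the anomaly scores move when column j is flattened to its mean.
    """
    base = score_fn(X)
    profile = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        Xo = X.copy()
        Xo[:, j] = X[:, j].mean()  # occlude feature j
        profile[j] = np.abs(score_fn(Xo) - base).mean()
    return profile / profile.sum()

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

p1 = attribution_profile(make_zscore_detector(X), X)
p2 = attribution_profile(make_mahalanobis_detector(X), X)
print("attribution similarity:", round(cosine_similarity(p1, p2), 3))
```

With a real SHAP explainer, `attribution_profile` would instead aggregate per-instance SHAP values (e.g. mean absolute value per feature); the comparison step stays the same.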
Key facts
- Methodology uses SHAP to characterize anomaly detectors
- Quantifies feature importance attribution per model
- Measures similarity between detectors via attribution profiles
- Addresses redundancy in ensemble anomaly detection
- Published on arXiv with ID 2602.00208
- Focuses on unsupervised anomaly detection challenges
- Aims to identify complementary detectors for ensembles
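The last point above — picking complementary detectors for an ensemble — could be realized as a greedy selection over a pairwise attribution-similarity matrix. This is a hypothetical heuristic for illustration, not the paper's published procedure: start from the detector least similar to the others on average, then repeatedly add the candidate whose maximum similarity to the already-chosen set is smallest.

```python
import numpy as np

def select_diverse(similarity, k):
    """Pick k detector indices with low mutual attribution similarity."""
    n = similarity.shape[0]
    # seed with the detector least similar to all others on average
    off_diag = similarity - np.eye(n)
    chosen = [int(np.argmin(off_diag.sum(axis=1)))]
    while len(chosen) < k:
        remaining = [i for i in range(n) if i not in chosen]
        # add the candidate whose worst-case overlap with the chosen set is lowest
        best = min(remaining, key=lambda i: max(similarity[i, j] for j in chosen))
        chosen.append(best)
    return chosen

# toy similarity matrix for four detectors: 0 and 1 are near-redundant
S = np.array([
    [1.00, 0.95, 0.20, 0.30],
    [0.95, 1.00, 0.25, 0.35],
    [0.20, 0.25, 1.00, 0.40],
    [0.30, 0.35, 0.40, 1.00],
])
print(select_diverse(S, 3))  # → [2, 0, 3]; the redundant detector 1 is skipped
```

The greedy rule keeps at most one of any near-duplicate pair, which is exactly the redundancy the attribution profiles are meant to expose.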
Entities
Institutions
- arXiv