ARTFEED — Contemporary Art Intelligence

HEPA: Self-Supervised Transformer for Rare Event Prediction in Time Series

ai-technology · 2026-05-13

A new machine learning framework, HEPA (Horizon-conditioned Event Predictive Architecture), forecasts critical but infrequent events in multivariate time series, including turbine malfunctions, cardiac arrhythmias, water contamination, cyberattacks, and volatility regimes. To cope with scarce labeled data, the model takes a self-supervised approach: a causal Transformer encoder is pretrained with a Joint-Embedding Predictive Architecture (JEPA), in which a horizon-conditioned predictor learns to anticipate future representations rather than raw future values. This objective forces the encoder to extract predictable temporal structure from unlabeled data. After pretraining, the encoder is frozen and only the predictor is fine-tuned for the target event, producing a monotonic survival cumulative distribution function (CDF) over horizons. HEPA uses a fixed architecture and fixed optimizer hyperparameters across all ten event types and is documented on arXiv under identifier 2605.11130.
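
As a concrete illustration of the pretraining objective described above, here is a minimal PyTorch sketch of JEPA-style pretraining with a horizon-conditioned predictor. The module names, dimensions, horizon-sampling scheme, and EMA target-encoder update are illustrative assumptions in the spirit of JEPA, not details taken from the paper.

```python
# Illustrative sketch of JEPA-style pretraining for time series.
# All names and hyperparameters below are assumptions, not from the paper.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextEncoder(nn.Module):
    """Causal Transformer encoder: each step attends only to its past."""
    def __init__(self, n_features, d_model=128, n_heads=4, n_layers=4):
        super().__init__()
        self.proj = nn.Linear(n_features, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, x):                          # x: (batch, time, features)
        T = x.size(1)
        causal = torch.triu(torch.ones(T, T, dtype=torch.bool,
                                       device=x.device), diagonal=1)
        return self.encoder(self.proj(x), mask=causal)

class HorizonPredictor(nn.Module):
    """Predicts the representation h steps ahead, conditioned on h."""
    def __init__(self, d_model=128, max_horizon=32):
        super().__init__()
        self.h_emb = nn.Embedding(max_horizon + 1, d_model)
        self.mlp = nn.Sequential(nn.Linear(2 * d_model, d_model),
                                 nn.GELU(), nn.Linear(d_model, d_model))

    def forward(self, z_t, h):                     # z_t: (batch, d), h: (batch,)
        return self.mlp(torch.cat([z_t, self.h_emb(h)], dim=-1))

def jepa_step(encoder, target_encoder, predictor, x, opt, ema=0.996):
    """One pretraining step: match future representations, not raw values."""
    B, T, _ = x.shape
    h = torch.randint(1, 33, (B,))                 # per-sequence horizons
    t = torch.randint(0, T - 33, (B,))             # anchor positions
    z = encoder(x)                                 # online branch, with grads
    with torch.no_grad():
        z_tgt = target_encoder(x)                  # slow-moving target branch
    idx = torch.arange(B)
    pred = predictor(z[idx, t], h)                 # predicted future embedding
    loss = F.mse_loss(pred, z_tgt[idx, t + h])     # loss in latent space
    opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():                          # EMA update of the target
        for p, q in zip(encoder.parameters(), target_encoder.parameters()):
            q.mul_(ema).add_(p, alpha=1.0 - ema)
    return loss.item()

encoder = ContextEncoder(n_features=8)
target_encoder = copy.deepcopy(encoder).requires_grad_(False)
predictor = HorizonPredictor()
opt = torch.optim.AdamW([*encoder.parameters(), *predictor.parameters()], lr=3e-4)
print(jepa_step(encoder, target_encoder, predictor, torch.randn(4, 100, 8), opt))
```

Because the loss is computed between embeddings rather than raw signal values, the encoder cannot satisfy the objective by memorizing noise; it must capture temporal structure that remains predictable at the sampled horizon.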

Key facts

  • HEPA stands for Horizon-conditioned Event Predictive Architecture.
  • It uses a causal Transformer encoder pretrained via Joint-Embedding Predictive Architecture (JEPA).
  • The predictor learns to forecast future representations, not future values.
  • After pretraining, the encoder is frozen and only the predictor is fine-tuned.
  • The model outputs a monotonic survival cumulative distribution function (CDF) over horizons (see the sketch after this list).
  • It handles ten event types including water contamination, cyberattack detection, and volatility regimes.
  • Fixed architecture and optimizer hyperparameters are used across all benchmarks.
  • The paper is available on arXiv with ID 2605.11130.
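
The monotone CDF head and the frozen-encoder fine-tuning can be sketched as follows. The softplus-increment parameterization (non-negative hazard steps cumulatively summed, then mapped through 1 − exp(−H)) is one standard way to guarantee monotonicity across horizons; it is an assumption here, not necessarily the paper's exact head design.

```python
# Illustrative sketch: frozen encoder + monotone event CDF over horizons.
# The head parameterization is an assumption, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MonotoneCDFHead(nn.Module):
    """Maps a representation to a non-decreasing event CDF over horizons."""
    def __init__(self, d_model=128, n_horizons=32):
        super().__init__()
        self.increments = nn.Linear(d_model, n_horizons)

    def forward(self, z):                        # z: (batch, d_model)
        delta = F.softplus(self.increments(z))   # non-negative hazard steps
        hazard = torch.cumsum(delta, dim=-1)     # cumulative hazard H(h)
        return 1.0 - torch.exp(-hazard)          # F(h) = 1 - exp(-H(h))

# Stand-in for the pretrained encoder; after pretraining it is frozen
# and only the event head/predictor is trained.
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(128, 4, batch_first=True), num_layers=2)
for p in encoder.parameters():
    p.requires_grad_(False)

head = MonotoneCDFHead()
opt = torch.optim.AdamW(head.parameters(), lr=1e-3)  # only head params update

with torch.no_grad():
    z = encoder(torch.randn(4, 50, 128))[:, -1]      # last-step representation
cdf = head(z)                                        # (batch, n_horizons)
assert torch.all(cdf[:, 1:] >= cdf[:, :-1])          # monotone by construction
```

Enforcing monotonicity in the architecture, rather than hoping the network learns it, guarantees that the predicted probability of the event occurring by horizon h never decreases as h grows, which is what a survival CDF requires.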

Entities

Institutions

  • arXiv
