ARTFEED — Contemporary Art Intelligence

CGM-JEPA: Self-Supervised Pretraining for Continuous Glucose Monitoring

ai-technology · 2026-05-06

CGM-JEPA is a self-supervised pretraining framework for representation learning on Continuous Glucose Monitoring (CGM) data. Rather than reconstructing raw glucose values, it predicts masked latent representations, which supports abstraction across modalities including CGM time series, venous OGTT measurements, and Glucodensity summaries. An extended variant, X-CGM-JEPA, adds a masked Glucodensity cross-view objective to capture complementary distributional information. Both models are pretrained on roughly 389,000 unlabeled samples. The goal is to detect early metabolic subphenotypes such as insulin resistance and β-cell dysfunction, where baseline methods behave inconsistently when transferred across modalities or deployment settings.
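
Below is a minimal PyTorch sketch of the masked latent prediction idea: a context encoder sees only the visible CGM patches, a predictor guesses the latents that a separate target encoder assigns to the masked patches, and the loss is computed only on those masked positions. All module names, dimensions, and the zero-fill masking strategy are illustrative assumptions, not the paper's actual architecture.

    import torch
    import torch.nn as nn

    class TinyEncoder(nn.Module):
        """Encodes fixed-length patches of a CGM series into latent tokens."""
        def __init__(self, patch_len=12, dim=64):
            super().__init__()
            self.proj = nn.Linear(patch_len, dim)
            self.block = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)

        def forward(self, patches):                      # patches: (B, N, patch_len)
            return self.block(self.proj(patches))        # latents: (B, N, dim)

    def jepa_loss(context_enc, target_enc, predictor, patches, mask):
        """Predict latents of masked patches from the visible context.
        mask: (B, N) bool tensor, True marks a masked (target) patch."""
        with torch.no_grad():                            # target latents carry no gradient
            target_lat = target_enc(patches)             # (B, N, dim)
        visible = patches.masked_fill(mask.unsqueeze(-1), 0.0)  # hide masked patches
        context_lat = context_enc(visible)               # (B, N, dim)
        pred = predictor(context_lat)                    # predictions for every position
        return ((pred - target_lat)[mask] ** 2).mean()   # penalize only masked positions

    # Toy usage: 24 patches of 12 readings each, roughly half of them masked.
    B, N, P, D = 8, 24, 12, 64
    context_enc, target_enc = TinyEncoder(P, D), TinyEncoder(P, D)
    predictor = nn.Linear(D, D)
    patches = torch.randn(B, N, P)
    mask = torch.rand(B, N) < 0.5
    loss = jepa_loss(context_enc, target_enc, predictor, patches, mask)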

Key facts

  • CGM-JEPA is a self-supervised pretraining framework for Continuous Glucose Monitoring.
  • It predicts masked latent representations instead of raw values.
  • X-CGM-JEPA adds a masked Glucodensity cross-view objective (see the Glucodensity sketch after this list).
  • Pretrained on approximately 389,000 unlabeled samples.
  • Targets detection of insulin resistance and β-cell dysfunction.
  • Addresses representation transfer across modalities like CGM, OGTT, and Glucodensity.
  • Aims to improve consistency across deployment shifts.
  • Published on arXiv with ID 2605.00933.
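
A Glucodensity summarizes a CGM trace by the distribution of its glucose values rather than their ordering; one common fixed-length representation is the empirical quantile function. The sketch below, with an assumed grid size and helper name, shows how such a summary could be computed and used as a distributional target view; it is an illustration, not the paper's exact construction.

    import numpy as np

    def glucodensity_quantiles(glucose_mg_dl, n_points=32):
        """Summarize a CGM trace by its empirical quantile function.
        glucose_mg_dl: 1-D array of readings (mg/dL) over the wear period.
        Returns a fixed-length vector usable as a distributional target view."""
        probs = np.linspace(0.0, 1.0, n_points)
        return np.quantile(np.asarray(glucose_mg_dl, dtype=float), probs)

    # Example: one day of 5-minute readings (288 samples) reduced to 32 quantiles.
    rng = np.random.default_rng(0)
    readings = 110 + 25 * np.sin(np.linspace(0, 6 * np.pi, 288)) + rng.normal(0, 5, 288)
    summary = glucodensity_quantiles(readings)           # shape (32,)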

Entities

Institutions

  • arXiv

Sources