ARTFEED — Contemporary Art Intelligence

Harmonic Loss Extended with Non-Euclidean Distance Metrics

other · 2026-04-30

Researchers have extended the harmonic loss function for deep neural networks by replacing its Euclidean distance with a range of alternative distance metrics. The original harmonic loss, grounded in Euclidean geometry, improves interpretability over cross-entropy loss and mitigates grokking (delayed generalization on the test set). However, prior work explored only Euclidean distance and lacked a systematic evaluation of computational efficiency or sustainability. The new study investigates distance-tailored harmonic losses across vision backbones and large language models, analyzing both model performance and interpretability. The work is published on arXiv under ID 2603.10225.
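To make the idea concrete, here is a minimal sketch of a harmonic loss with a pluggable distance metric. In the standard formulation, each class logit is the distance between the input representation and that class's weight vector, and class probabilities are proportional to the inverse distance raised to a power n. The specific non-Euclidean choices below (Manhattan as an example alternative) are illustrative assumptions, not necessarily the metrics evaluated in the paper:

```python
import numpy as np

def harmonic_loss(x, weights, targets, dist="euclidean", n=2.0, eps=1e-8):
    """Harmonic loss sketch: probabilities proportional to 1 / distance**n.

    x       : (batch, dim) input representations
    weights : (classes, dim) per-class weight vectors
    targets : (batch,) integer class labels
    dist    : distance metric; "manhattan" is an illustrative
              non-Euclidean alternative, not the paper's exact choice
    """
    diff = x[:, None, :] - weights[None, :, :]      # (batch, classes, dim)
    if dist == "euclidean":
        d = np.linalg.norm(diff, axis=-1)
    elif dist == "manhattan":
        d = np.abs(diff).sum(axis=-1)
    else:
        raise ValueError(f"unknown distance metric: {dist}")
    # Inverse-distance weighting; eps guards against division by zero
    # when an input coincides exactly with a class weight vector.
    scores = (d + eps) ** (-n)
    probs = scores / scores.sum(axis=1, keepdims=True)
    # Negative log-likelihood of the target class, as in cross-entropy.
    return -np.log(probs[np.arange(len(targets)), targets]).mean()
```

Because the logits are distances to class weight vectors, each weight vector can be read as a class prototype, which is the source of the interpretability benefit the article mentions; swapping the metric changes the geometry of those prototypes.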

Key facts

  • Cross-entropy loss is standard but has interpretability limitations and unbounded weight growth.
  • Harmonic loss is a distance-based alternative that improves interpretability and mitigates grokking.
  • Previous harmonic loss research used only Euclidean distance.
  • New study extends harmonic loss with multiple non-Euclidean distance metrics.
  • Evaluation covers vision backbones and large language models.
  • Analysis includes model performance and interpretability.
  • Paper is on arXiv with ID 2603.10225.
  • No systematic evaluation of computational efficiency or sustainability existed before this study.

Entities

Institutions

  • arXiv

Sources