ARTFEED — Contemporary Art Intelligence

DistPFN: Test-Time Posterior Adjustment for TabPFN Label Shift

ai-technology · 2026-05-07

TabPFN, a foundation model for tabular data that performs in-context learning on synthetic data, degrades under label shift, often overfitting to the majority class. To address this, the researchers introduce DistPFN, the first test-time posterior adjustment method for tabular foundation models. DistPFN rescales the predicted class probabilities by downweighting the training prior and emphasizing the model's predicted posterior, and it requires no architectural changes or additional training. An extension, DistPFN-T, adds temperature scaling so that the adjustment strength adapts to the discrepancy between the prior and the predicted posterior. Evaluated on more than 250 OpenML datasets, DistPFN delivers substantial improvements over TabPFN when label shift is present.
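
The article does not spell out DistPFN's exact rescaling rule, but the general idea of downweighting the training prior can be sketched with a standard prior-correction scheme: divide the predicted probabilities by a power of the empirical training-class prior and renormalise. In the Python sketch below, the helper name adjust_posterior, the exponent alpha, and the synthetic demo values are illustrative assumptions, not the paper's formulation.

    import numpy as np

    def adjust_posterior(probs, train_prior, alpha=1.0):
        """Downweight the training-class prior in a model's predicted probabilities.

        probs       : (n_samples, n_classes) posterior predicted by the model.
        train_prior : (n_classes,) empirical class frequencies of the training set.
        alpha       : how strongly the prior is divided out; 0 leaves the
                      posterior untouched, 1 removes the prior entirely.
        """
        # Divide each column by a power of the training prior, then renormalise
        # each row so it is a valid probability distribution again.
        adjusted = probs / np.power(train_prior, alpha)
        return adjusted / adjusted.sum(axis=1, keepdims=True)

    # Tiny synthetic demo: imbalanced training prior, fake model outputs.
    rng = np.random.default_rng(0)
    fake_probs = rng.dirichlet(np.ones(3), size=5)
    train_prior = np.array([0.7, 0.2, 0.1])
    print(adjust_posterior(fake_probs, train_prior, alpha=0.8))

With alpha greater than zero, probability mass shifts away from classes that were merely frequent in the training data, which is the behaviour the summary attributes to DistPFN.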

Key facts

  • TabPFN is a foundation model for tabular datasets using in-context learning on synthetic data.
  • TabPFN is vulnerable to label shift, overfitting to the majority class.
  • DistPFN is the first test-time posterior adjustment method for tabular foundation models.
  • DistPFN rescales class probabilities by downweighting the training prior and emphasizing the predicted posterior.
  • DistPFN requires no architectural modification or additional training.
  • DistPFN-T incorporates temperature scaling for adaptive adjustment strength (see the sketch after this list).
  • Evaluation was conducted on over 250 OpenML datasets.
  • DistPFN shows substantial improvements over TabPFN under label shift.
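
DistPFN-T's temperature mechanism is described only qualitatively above, so the Python sketch below shows one plausible way to tie the adjustment strength to the prior/posterior gap: measure the KL divergence between the training prior and the mean predicted posterior on the test batch, then map it to a softmax temperature. The function names, the choice of KL divergence, and the linear mapping are assumptions for illustration, not the published method.

    import numpy as np

    def kl_divergence(p, q, eps=1e-12):
        """KL(p || q) between two discrete distributions."""
        p = np.clip(p, eps, 1.0)
        q = np.clip(q, eps, 1.0)
        return float(np.sum(p * np.log(p / q)))

    def adaptive_temperature(probs, train_prior, base_temp=1.0, scale=1.0):
        """Map the prior/posterior discrepancy to a softmax temperature.

        Compares the average predicted posterior on the test batch with the
        training-class prior; a larger gap moves the temperature further from
        base_temp. The linear mapping is purely illustrative.
        """
        gap = kl_divergence(probs.mean(axis=0), train_prior)
        return base_temp + scale * gap

    def temperature_scale(probs, temp, eps=1e-12):
        """Re-scale class probabilities in log space with temperature `temp`."""
        logits = np.log(np.clip(probs, eps, 1.0)) / temp
        exp = np.exp(logits - logits.max(axis=1, keepdims=True))
        return exp / exp.sum(axis=1, keepdims=True)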

Entities

Institutions

  • arXiv
  • OpenML

Sources