TFM-Retouche: Input-Space Adapter for Tabular Foundation Models
Tabular foundation models (TFMs) such as TabPFN-2.6, TabICLv2, ConTextTab, Mitra, LimiX, and TabDPT show impressive zero-shot performance through in-context learning, but their inductive biases are fixed at inference time. Adapting a pretrained TFM to a specific dataset typically requires either expensive full fine-tuning or parameter-efficient techniques such as LoRA, which must be customized to each TFM's internal architecture; moreover, evidence that weight-space fine-tuning improves accuracy and calibration is mixed. TFM-Retouche addresses this with a lightweight input-space residual adapter that is agnostic to the architecture of the frozen TFM backbone: it learns a small residual correction in the input space that better aligns the data with the pretrained model's inductive biases.
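The core idea can be sketched in a few lines. The following is a minimal illustration, not TFM-Retouche's actual implementation: the adapter class, its parameterization, and the `frozen_backbone` stand-in are all hypothetical. What it shows is the architecture-agnostic pattern the text describes: the frozen TFM is treated as a black box, and only its input is modified via a learned residual, `X_adapted = X + g_theta(X)`.

```python
import numpy as np

rng = np.random.default_rng(0)


class ResidualInputAdapter:
    """Hypothetical input-space residual adapter: a tiny two-layer MLP
    that produces a small additive correction to the input features."""

    def __init__(self, n_features, hidden=16, scale=0.1):
        self.W1 = rng.normal(0.0, 0.1, (n_features, hidden))
        self.b1 = np.zeros(hidden)
        # Zero-initialized output layer: the adapter starts as the identity,
        # so the backbone's zero-shot behaviour is preserved before training.
        self.W2 = np.zeros((hidden, n_features))
        self.b2 = np.zeros(n_features)
        self.scale = scale  # keeps the learned correction small

    def __call__(self, X):
        h = np.tanh(X @ self.W1 + self.b1)
        return X + self.scale * (h @ self.W2 + self.b2)  # residual connection


def frozen_backbone(X):
    """Stand-in for a pretrained TFM; its weights are never updated.
    The adapter never looks inside it, which is what makes the
    approach architecture-agnostic."""
    return X.sum(axis=1)


X = rng.normal(size=(8, 4))
adapter = ResidualInputAdapter(n_features=4)
X_adapted = adapter(X)          # same shape as X, slightly corrected
preds = frozen_backbone(X_adapted)
```

In a real setup, only the adapter's parameters would be trained (e.g. by backpropagating a task loss through the frozen backbone), while the TFM weights stay untouched; at initialization the adapter is the identity, so adaptation can only move away from the zero-shot solution gradually.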
Key facts
- TFM-Retouche is a lightweight input-space residual adapter for tabular foundation models.
- It is architecture-agnostic and designed for frozen TFM backbones.
- It learns a small residual correction in the input space.
- Existing adaptation methods include full fine-tuning (expensive) and parameter-efficient fine-tuning (PEFT) methods such as LoRA (architecture-specific).
- Evidence on weight-space fine-tuning benefits is mixed.
- TFMs mentioned: TabPFN-2.6, TabICLv2, ConTextTab, Mitra, LimiX, TabDPT.
- The adapter aligns input data with pretrained model inductive biases.
- The approach aims to be efficient and general.