Mochi: Meta-Learning Framework for Graph Foundation Models
A team of researchers has introduced Mochi, a Graph Foundation Model trained with a meta-learning framework designed to improve task unification and training efficiency. Unlike earlier models that rely on reconstruction-based objectives such as link prediction and then require a separate unification phase built on class prototypes, Mochi pre-trains on few-shot episodes that replicate the downstream evaluation procedure. This aligns the training objective with inference and removes the need for post-hoc unification. Experiments on both synthetic and real-world datasets expose limitations of reconstruction-based pre-training that hinder downstream performance. Mochi, along with its more powerful variant Mochi++, achieves competitive or superior results compared to existing Graph Foundation Models across 25 real-world datasets.
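The core idea of episodic pre-training can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration, not Mochi's actual method: it samples an N-way K-shot episode from node embeddings and classifies query nodes by their nearest class prototype, mimicking a downstream few-shot evaluation. All function names (`sample_episode`, `prototype_accuracy`) and the prototype-based classifier are hypothetical stand-ins for whatever episode construction the paper actually uses.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_episode(embeddings, labels, n_way=3, k_shot=2, n_query=2, rng=rng):
    """Sample an N-way K-shot episode (hypothetical helper, not Mochi's API)."""
    classes = rng.choice(np.unique(labels), size=n_way, replace=False)
    support, query = [], []
    for c in classes:
        idx = rng.permutation(np.flatnonzero(labels == c))
        support.append(embeddings[idx[:k_shot]])               # K support nodes
        query.append((embeddings[idx[k_shot:k_shot + n_query]], c))  # query nodes
    return classes, support, query

def prototype_accuracy(classes, support, query):
    """Classify each query node by its nearest class prototype (mean support embedding)."""
    protos = np.stack([s.mean(axis=0) for s in support])
    correct = total = 0
    for q, c in query:
        dists = np.linalg.norm(q[:, None, :] - protos[None, :, :], axis=-1)
        pred = classes[np.argmin(dists, axis=1)]
        correct += int((pred == c).sum())
        total += len(q)
    return correct / total

# Toy data: 60 nodes, 4 classes, 8-dim embeddings clustered by class label.
labels = np.repeat(np.arange(4), 15)
embeddings = rng.normal(size=(60, 8)) + labels[:, None] * 3.0

classes, support, query = sample_episode(embeddings, labels)
print(prototype_accuracy(classes, support, query))
```

During pre-training, the loss on each episode's query predictions would drive the encoder, so that the model optimizes exactly the quantity evaluated at inference time, which is the alignment the article describes.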
Key facts
- Mochi is a Graph Foundation Model using meta-learning.
- Pre-trains on few-shot episodes mirroring downstream evaluation.
- Aligns training objective with inference.
- Eliminates post-hoc unification step.
- Achieves competitive or superior results versus existing models on 25 real-world datasets.
- Has a more powerful variant Mochi++.
- Addresses limitations of reconstruction-based pre-training.
- Published as a preprint on arXiv.