REPR-ALIGN: Aligning Autoregressive and Diffusion Language Models
A new method called REPR-ALIGN converts autoregressive language models into diffusion language models without full retraining by preserving the representation geometry learned during next-token prediction.
Key facts
- REPR-ALIGN is a representation alignment objective
- It adapts autoregressive LMs to diffusion LMs
- The method preserves internal representation geometry from next-token prediction
- It views DLM training as relearning the decoding path, not the language representations themselves
- Published on arXiv with ID 2605.06885
- Announcement type: cross
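The summary does not specify the exact form of the alignment objective. As a rough illustration only, one common way to preserve representation geometry is to penalize the adapted model's hidden states for drifting (in cosine distance) from a frozen copy of the original autoregressive model's hidden states. The function name, loss form, and shapes below are all assumptions, not the paper's method:

```python
import numpy as np

def alignment_loss(ar_hidden, dlm_hidden, eps=1e-8):
    """Sketch of a representation-alignment penalty (assumed form).

    ar_hidden:  hidden states from the frozen autoregressive model,
                shape (seq_len, hidden_dim).
    dlm_hidden: hidden states from the diffusion model being adapted,
                same shape.
    Returns the mean per-token cosine distance, in [0, 2]:
    0 when representations match exactly, larger as they drift apart.
    """
    ar_norm = ar_hidden / (np.linalg.norm(ar_hidden, axis=-1, keepdims=True) + eps)
    dlm_norm = dlm_hidden / (np.linalg.norm(dlm_hidden, axis=-1, keepdims=True) + eps)
    cos_sim = np.sum(ar_norm * dlm_norm, axis=-1)  # per-token cosine similarity
    return float(np.mean(1.0 - cos_sim))

rng = np.random.default_rng(0)
h = rng.standard_normal((4, 8))
print(alignment_loss(h, h))   # identical representations -> ~0.0
print(alignment_loss(h, -h))  # opposite representations -> ~2.0
```

In practice such a term would be added to the diffusion training loss so the model relearns the decoding path while its internal geometry stays anchored to the pretrained representations.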
Entities
Institutions
- arXiv