Dimensionality Determines When Modularity Helps Continual Learning
A new study on arXiv (2604.27656) investigates how network architecture, task similarity, and representational dimensionality jointly shape the stability-plasticity dilemma in continual learning. The researchers compared a task-partitioned modular recurrent network against a single-module baseline, systematically varying task similarity (low, medium, high) and weight initialization scale. They found that the benefits of modularity depend critically on representational dimensionality: in low-dimensional settings, modular structure reduces interference between tasks, while in high-dimensional settings, shared representations are more effective. The work empirically characterizes these learning regimes, clarifying when structural separation helps and when it hinders knowledge transfer across sequential tasks.
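To make the architectural contrast concrete, here is a minimal, hypothetical sketch (not the paper's code) of the two designs being compared: a task-partitioned "modular" network that gives each task its own recurrent weight block, versus a single-module baseline in which all tasks share one recurrent matrix. The class names, hidden size, and `init_scale` parameter are illustrative assumptions; the paper's actual architectures and training details may differ.

```python
import numpy as np

def init_weights(hidden, scale, rng):
    # Recurrent weights drawn at a controlled initialization scale,
    # mirroring the study's manipulation of weight initialization scale.
    return rng.normal(0.0, scale, size=(hidden, hidden))

class ModularRNN:
    """Task-partitioned network: one recurrent module per task.

    Because each task reads and writes only its own weight block,
    updates to one task cannot interfere with another (by construction).
    """
    def __init__(self, n_tasks, hidden=16, init_scale=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.modules = [init_weights(hidden, init_scale, rng)
                        for _ in range(n_tasks)]

    def step(self, task_id, h, x):
        # Only the module assigned to task_id is used.
        return np.tanh(self.modules[task_id] @ h + x)

class SharedRNN:
    """Single-module baseline: all tasks use the same recurrent weights,

    so sequential training on different tasks updates one shared matrix,
    which is where interference (or beneficial transfer) can occur.
    """
    def __init__(self, hidden=16, init_scale=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W = init_weights(hidden, init_scale, rng)

    def step(self, task_id, h, x):
        # task_id is ignored: every task shares self.W.
        return np.tanh(self.W @ h + x)
```

In this sketch, modifying `ModularRNN.modules[0]` leaves the weights for task 1 untouched, whereas any update to `SharedRNN.W` affects every task, which is the structural trade-off the study probes across dimensionality regimes.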
Key facts
- Study examines how network architecture, task similarity, and representational dimensionality shape continual learning
- Compares task-partitioned modular recurrent network with single-module baseline
- Systematically varies task similarity (low, medium, high) and weight initialization scale
- Modularity benefits depend on representational dimensionality
- Low-dimensional settings: modular structure reduces interference
- High-dimensional settings: shared representations more effective
- Empirically characterizes different learning regimes
- Published on arXiv with ID 2604.27656
Entities
Institutions
- arXiv