Feedback Alignment Fails to Scale to Convolutional Networks
A recent study evaluates five learning algorithms, including modified feedback alignment (FA) variants and standard backpropagation (BP), trained on CIFAR-10 with the same convolutional architecture. The comparison spans three axes: biological plausibility, interpretability, and computational complexity. The authors find that modified FA algorithms converge on internal representations closely resembling those learned by BP, suggesting their effectiveness may stem from replicating BP's representational geometry. However, FA fails to scale to convolutional architectures unless it is modified in ways that come at a questionable cost to biological plausibility. The paper is available on arXiv under identifier 2605.08564.
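To make the FA/BP contrast concrete, here is a minimal NumPy sketch of the core idea of feedback alignment on a tiny two-layer network. The network size, learning rate, and training loop are illustrative assumptions, not details from the paper; the only substantive point is the single line where the backward pass uses a fixed random matrix `B` instead of the transposed forward weights `W2.T`, which is what removes BP's biologically implausible weight transport.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer network (sizes are illustrative): x -> h = tanh(W1 x) -> y = W2 h.
n_in, n_hid, n_out = 4, 8, 2
W1 = rng.normal(0, 0.5, (n_hid, n_in))
W2 = rng.normal(0, 0.5, (n_out, n_hid))

# Feedback alignment replaces W2.T in the backward pass with a
# fixed random matrix B that is never trained (no weight transport).
B = rng.normal(0, 0.5, (n_hid, n_out))

def step(x, target, lr=0.1, use_fa=True):
    """One training step; returns squared-error loss before the update."""
    global W1, W2
    h = np.tanh(W1 @ x)
    y = W2 @ h
    e = y - target                         # output error
    feedback = B if use_fa else W2.T       # FA vs. exact backprop
    delta_h = (feedback @ e) * (1 - h**2)  # tanh derivative
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(delta_h, x)
    return 0.5 * float(e @ e)

x = rng.normal(size=n_in)
t = np.array([1.0, -1.0])
losses = [step(x, t) for _ in range(200)]
```

On toy fully connected problems like this, the forward weights tend to "align" with `B` during training, which is why FA can work at all; the study's point is that this mechanism does not carry over to convolutional architectures without plausibility-breaking modifications.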
Key facts
- Feedback alignment fails to scale to convolutional architectures.
- Five learning algorithms evaluated, including modified FA and standard BP.
- CIFAR-10 dataset used for training.
- Analysis covers biological plausibility, interpretability, and computational complexity.
- Modified FA algorithms converge on representations similar to backpropagation.
- Functional success of modified FA may be due to mimicking BP's representational geometry.
- Modifications to FA come at questionable cost to biological plausibility.
- Paper available on arXiv: 2605.08564.
Entities
Institutions
- arXiv