Layer-Dropping Method Speeds Up CNN Training
A new method called Learn&Drop improves the training efficiency of deep convolutional neural networks by reducing forward-propagation operations. During training, it computes per-layer scores from parameter changes and uses them to decide which layers to drop, scaling the network down and speeding up training as it proceeds. Unlike prior methods that compress networks for inference or limit backpropagation, Learn&Drop reduces forward-pass computation directly. Validated on VGG and ResNet architectures with the MNIST, CIFAR-10, and Imagenette datasets, it significantly decreases training time while maintaining accuracy.
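A minimal PyTorch sketch of the idea follows. The paper's exact scoring formula and drop criterion are not given here, so `DroppableBlock`, `score`, `maybe_drop`, and the fixed `threshold` are illustrative assumptions rather than the authors' API; skipping a dropped layer as an identity also assumes the layer preserves its input shape (e.g., a residual block).

```python
import torch
import torch.nn as nn

class DroppableBlock(nn.Module):
    """Wraps a layer so it can be skipped in the forward pass once dropped.

    Identity skipping assumes a shape-preserving layer (e.g., a residual
    block); this is an illustrative sketch, not the paper's mechanism.
    """
    def __init__(self, layer: nn.Module):
        super().__init__()
        self.layer = layer
        self.dropped = False
        # Snapshot of the parameters at the last score evaluation.
        self._prev = {n: p.detach().clone()
                      for n, p in layer.named_parameters()}

    def forward(self, x):
        # Once dropped, the block costs no forward (or backward) operations.
        return x if self.dropped else self.layer(x)

    def score(self) -> float:
        """Assumed score: relative change of the parameters since the last
        evaluation. Layers whose weights have stopped changing score near 0."""
        delta = sum((p.detach() - self._prev[n]).pow(2).sum().item()
                    for n, p in self.layer.named_parameters())
        norm = sum(v.pow(2).sum().item() for v in self._prev.values())
        return (delta / max(norm, 1e-12)) ** 0.5

    def refresh(self):
        # Update the snapshot for the next evaluation interval.
        for n, p in self.layer.named_parameters():
            self._prev[n].copy_(p.detach())

def maybe_drop(blocks, threshold=1e-3):
    """Hypothetical per-epoch hook: drop blocks whose parameters have
    nearly stopped changing, then refresh the snapshots."""
    for block in blocks:
        if not block.dropped and block.score() < threshold:
            block.dropped = True
            for p in block.layer.parameters():
                p.requires_grad_(False)  # exclude from further updates
        block.refresh()
```

In a training loop, `maybe_drop(blocks)` would run at the end of each epoch; every block flagged as dropped is then skipped on all subsequent forward passes, which is where the reduction in forward-propagation operations comes from.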
Key facts
- Learn&Drop reduces forward propagation operations during training.
- It evaluates per-layer scores based on parameter changes to decide whether to drop a layer.
- Unlike methods that compress networks for inference or limit backpropagation, it targets the forward pass.
- Validated on VGG and ResNet architectures.
- Tested on MNIST, CIFAR-10, and Imagenette datasets.
- Training time is reduced while maintaining accuracy.
- The method scales the network down during training by removing layers from the forward pass.