1S-DAug: One-Shot Data Augmentation Method Improves Few-Shot Learning Without Model Updates
1S-DAug boosts few-shot learning by synthesizing varied image representations from a single example at test time. The method combines geometric perturbations, controlled noise injection, and a denoising diffusion process conditioned on the original image. The generated images are encoded and aggregated with the original into a combined representation, yielding more robust predictions. As a model-agnostic, training-free plugin, 1S-DAug consistently improves few-shot classification across four standard benchmark datasets without modifying any model parameters, achieving up to a 20% relative accuracy improvement on the miniImageNet 5-way 1-shot benchmark. Traditional test-time augmentations often fail in few-shot settings; 1S-DAug instead generates faithful variants that preserve key traits while adding diversity. The paper is available on arXiv under identifier 2602.00114v4, categorized as replace-cross.
Key facts
- 1S-DAug is a one-shot generative augmentation operator for few-shot learning
- It synthesizes diverse yet faithful image variants from a single example at test time
- Combines geometric perturbations with controlled noise injection and denoising diffusion
- Generated images are encoded and aggregated with the original into a combined representation
- Training-free and model-agnostic plugin requiring no parameter updates
- Consistently improves few-shot classification across 4 standard benchmark datasets
- Achieves up to 20% relative accuracy improvement on miniImageNet 5-way 1-shot
- Addresses failure of traditional test-time augmentations in few-shot learning scenarios
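The pipeline described above (perturb, noise-and-denoise, encode, aggregate) can be sketched in a few lines. This is a minimal toy illustration, not the paper's implementation: the function names are hypothetical, the "denoising" step is a stand-in for a real diffusion model conditioned on the original image, and the encoder is a trivial placeholder for a pretrained feature extractor.

```python
import numpy as np

def geometric_perturb(img, rng):
    # Stand-in for the paper's geometric alterations:
    # a random horizontal flip plus a small vertical shift.
    if rng.random() < 0.5:
        img = img[:, ::-1]
    return np.roll(img, rng.integers(-2, 3), axis=0)

def noise_then_denoise(img, rng, sigma=0.1):
    # Controlled noise injection; a real system would run a
    # denoising diffusion process here rather than a simple clip.
    noisy = img + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0.0, 1.0)

def encode(img):
    # Placeholder encoder: per-row means as a toy embedding.
    # In practice this would be a frozen pretrained backbone.
    return img.mean(axis=1)

def one_shot_augment_embed(img, n_variants=4, seed=0):
    # Build several variants of the single example, encode each,
    # and average them with the original embedding into one
    # combined representation (no parameter updates anywhere).
    rng = np.random.default_rng(seed)
    embeddings = [encode(img)]
    for _ in range(n_variants):
        variant = noise_then_denoise(geometric_perturb(img, rng), rng)
        embeddings.append(encode(variant))
    return np.mean(embeddings, axis=0)

# Single 8x8 "image" with values in [0, 1].
img = np.random.default_rng(1).random((8, 8))
proto = one_shot_augment_embed(img)
print(proto.shape)  # → (8,)
```

The combined embedding would then serve as the class prototype for nearest-neighbor or similar few-shot classification, which is why the approach needs no training and plugs into any encoder.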
Entities
Institutions
- arXiv