Meta Additive Model for Robust Sparse Learning
A novel meta additive model (MAM) built on bilevel optimization overcomes the shortcomings of current sparse additive models when analyzing high-dimensional data. Conventional models, trained under the mean-squared error criterion, degrade in the presence of intricate noise such as non-Gaussian disturbances, outliers, noisy labels, and imbalanced categories. Existing sample reweighting techniques depend on predetermined weighting functions and require manual selection of hyperparameters. In contrast, MAM learns the weighting of individual losses automatically by parameterizing the weighting function as a multi-layer perceptron (MLP) trained on meta data, ensuring strong performance across diverse learning tasks without manual adjustment.
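The bilevel reweighting idea can be sketched in plain NumPy: an inner step updates the model on a loss reweighted per sample by a tiny MLP, while an outer step tunes the MLP's parameters against a small clean meta set. Everything below (the toy linear model, the four-parameter weight net, the finite-difference outer gradient) is an illustrative assumption, not the paper's actual additive-model implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: linear data with corrupted training labels,
# plus a small clean meta set (all names here are illustrative).
n_train, n_meta, d = 200, 40, 5
w_true = rng.normal(size=d)
X = rng.normal(size=(n_train, d))
y = X @ w_true + 0.1 * rng.normal(size=n_train)
outliers = rng.choice(n_train, 30, replace=False)
y[outliers] += 5.0                                  # corrupt 15% of labels
X_meta = rng.normal(size=(n_meta, d))               # clean meta data
y_meta = X_meta @ w_true + 0.1 * rng.normal(size=n_meta)

def weight_net(theta, losses):
    """Tiny MLP: maps each per-sample loss to a weight in (0, 1)."""
    a, b, c, e = theta
    h = np.tanh(a * losses + b)                     # hidden layer
    return 1.0 / (1.0 + np.exp(-(c * h + e)))       # sigmoid output

def inner_step(w, theta, lr=0.05):
    """Inner problem: one gradient step on the reweighted training loss."""
    r = X @ w - y
    v = weight_net(theta, r ** 2)
    return w - lr * (X.T @ (v * r)) / n_train

def meta_loss(w, theta):
    """Outer objective: clean meta-set MSE after an inner update."""
    w_new = inner_step(w, theta)
    return np.mean((X_meta @ w_new - y_meta) ** 2)

w = np.zeros(d)
theta = np.array([1.0, 0.0, -1.0, 0.0])             # init: high loss -> lower weight
for _ in range(300):
    # Outer step: finite-difference gradient of the meta loss w.r.t. theta
    # (a stand-in for the meta/implicit gradient used in practice).
    grad = np.zeros_like(theta)
    for i in range(theta.size):
        eps = np.zeros_like(theta)
        eps[i] = 1e-4
        grad[i] = (meta_loss(w, theta + eps) - meta_loss(w, theta - eps)) / 2e-4
    theta -= 0.1 * grad
    # Inner step: update the model under the current weighting function.
    w = inner_step(w, theta)

v = weight_net(theta, (X @ w - y) ** 2)
clean = np.setdiff1d(np.arange(n_train), outliers)
print(v[outliers].mean(), v[clean].mean())
```

After alternating the two levels, the learned weighting function assigns lower weights to the corrupted samples than to the clean ones, which is the mechanism that makes the fit robust without any hand-tuned weighting rule.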
Key facts
- arXiv:2604.20111v1
- Announce Type: cross
- Sparse additive models are used in high-dimensional data analysis
- Existing models limited to single-level learning under mean-squared error
- Performance degrades with non-Gaussian perturbations, outliers, noisy labels, imbalanced categories
- Sample reweighting strategy reduces sensitivity to atypical data
- MAM uses bilevel optimization framework
- Weighting function parameterized via MLP trained on meta data
Entities
Institutions
- arXiv