New Manifold Learning Algorithms in Reproducing Kernel Hilbert Spaces
A recent arXiv preprint introduces algorithms for reconstruction-based manifold learning in Reproducing Kernel Hilbert Spaces (RKHS). In this framework, each observation is reconstructed as a linear combination of the other samples within the RKHS, using a vector form of the Representer Theorem. A separable operator-valued kernel extends the method to vector-valued data through a single scalar similarity function. A subsequent kernel-alignment step then projects the data into a lower-dimensional latent space by matching the Gram matrix of the embedding to the high-dimensional reconstruction kernel, thereby transferring the auto-reconstruction geometry of the RKHS into the embedding. The work is motivated by the need for effective representation learning on high-dimensional datasets.
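The preprint's exact objective is not reproduced here, but the core idea of reconstructing each sample from the others inside an RKHS can be sketched in a few lines. The sketch below assumes a scalar RBF kernel and ridge regularization (both are illustrative choices, not taken from the paper); the key point is that the reconstruction weights can be solved for using only Gram-matrix entries, never the feature maps themselves.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Gram matrix of the scalar RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def reconstruction_weights(K, lam=1e-3):
    """Ridge-regularized weights W, where row i reconstructs phi(x_i) as a
    linear combination of the other samples' feature maps.

    Minimizing ||phi(x_i) - sum_j w_j phi(x_j)||^2 + lam * ||w||^2 over
    j != i expands entirely in kernel evaluations, giving the linear system
    (K_sub + lam * I) w = k_i.  (lam is an illustrative regularizer.)
    """
    n = K.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        idx = [j for j in range(n) if j != i]   # exclude the point itself
        K_sub = K[np.ix_(idx, idx)]
        k_i = K[idx, i]
        w = np.linalg.solve(K_sub + lam * np.eye(n - 1), k_i)
        W[i, idx] = w
    return W
```

The resulting squared RKHS reconstruction error for point i is `K[i, i] - 2 * W[i] @ K[:, i] + W[i] @ K @ W[i]`, again computable from the Gram matrix alone.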
Key facts
- Proposes algorithms for reconstruction-based manifold learning in RKHS
- Each observation reconstructed as linear combination of other samples in RKHS
- Uses vector form of Representer Theorem for autorepresentation
- Separable operator-valued kernel extends to vector-valued data
- Kernel-alignment task projects data into lower-dimensional latent space
- Aims to match Gram matrix of embedding with high-dimensional reconstruction kernel
- Transfers auto-reconstruction geometry of RKHS to embedding
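The kernel-alignment step listed above can be sketched as follows. Given a reconstruction kernel `K_rec` (its exact construction follows the paper; for this sketch it can be any symmetric positive semidefinite matrix), the embedding whose Gram matrix best matches `K_rec` in Frobenius norm at rank d comes from the top-d eigenpairs, by the Eckart–Young theorem:

```python
import numpy as np

def align_embedding(K_rec, d=2):
    """Return an n x d embedding Y such that Y @ Y.T is the best rank-d
    approximation of the symmetric PSD matrix K_rec (Eckart-Young)."""
    vals, vecs = np.linalg.eigh(K_rec)        # eigenvalues in ascending order
    top = np.argsort(vals)[::-1][:d]          # indices of the d largest
    vals_d = np.clip(vals[top], 0.0, None)    # guard against tiny negatives
    return vecs[:, top] * np.sqrt(vals_d)     # scale eigenvectors columnwise
```

When `K_rec` itself has rank at most d, the match is exact and `Y @ Y.T` recovers `K_rec` up to numerical precision.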
- Published on arXiv with ID 2601.05811