GeoKAN: Geometry-Aware Kolmogorov-Arnold Networks Introduced
A recent preprint on arXiv (2605.06740v1) introduces a family of neural network models called Geometric Kolmogorov-Arnold Networks (GeoKANs). Whereas traditional KANs operate on fixed Euclidean input coordinates, GeoKANs approximate functions in learned, geometry-adapted coordinates: a diagonal Riemannian metric warps the input before basis expansion and feature mixing. This geometric inductive bias shapes local length scaling, volume distortion, and the differential structure the model perceives in physics-informed settings. The study introduces three primary variants: GeoKAN-NNMetric, GeoKAN-γ, and LM-KAN, with LM-KAN further offering three basis-specific adaptations (LM-KAN-RBF, LM-KAN-Wav, and LM-KAN-Fourier). Together these enable the exploration of geometry-aware KAN models as general function approximators and as surrogates in physics-informed learning.
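The core mechanism described above — a learned diagonal metric that rescales each input coordinate before basis expansion — can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: the class and function names (`DiagonalMetricWarp`, `rbf_features`) and the choice of a Gaussian RBF basis are assumptions for demonstration.

```python
import numpy as np

class DiagonalMetricWarp:
    """Illustrative sketch of a diagonal Riemannian metric g = diag(exp(log_g))
    applied to the input before basis expansion. In a real model, log_g would
    be a learnable parameter; here it is fixed for demonstration."""

    def __init__(self, dim):
        # log of the diagonal metric entries; zeros give the identity warp
        self.log_g = np.zeros(dim)

    def warp(self, x):
        # Scale each coordinate by sqrt(g_ii): per-dimension local length scaling
        return x * np.exp(0.5 * self.log_g)

def rbf_features(z, centers, width=1.0):
    """Gaussian RBF basis evaluated in the warped coordinates z."""
    d2 = ((z[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

# Usage: warp 2-D inputs, then expand them in an RBF basis with 4 centers
x = np.random.default_rng(1).normal(size=(5, 2))
warp = DiagonalMetricWarp(2)
centers = np.linspace(-1.0, 1.0, 4)[:, None] * np.ones(2)
phi = rbf_features(warp.warp(x), centers)
print(phi.shape)  # one feature row per input, one column per basis center
```

The key design point is the ordering: the metric acts on the raw coordinates first, so the same basis functions effectively see a stretched or compressed version of the input space, which is how the warp induces the length-scaling and volume-distortion effects the preprint describes.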
Key facts
- GeoKANs are a family of geometry-aware KAN-type models.
- Approximation is carried out in learned, geometry-adapted coordinates.
- GeoKAN learns a diagonal Riemannian metric that warps the input.
- The metric provides geometric inductive bias through local length scaling and volume distortion.
- Three main variants: GeoKAN-NNMetric, GeoKAN-γ, and LM-KAN.
- LM-KAN has three basis-specific versions: RBF, Wav, and Fourier.
- GeoKANs are studied as general function approximators and physics-informed surrogates.
- The preprint is available on arXiv with ID 2605.06740.
Entities
Institutions
- arXiv