New AI Safety Framework Links Thermodynamics to Autonomous System Control
The Kerimov-Alekberli model presents a novel information-geometric approach to AI safety, establishing a formal link between stochastic control and non-equilibrium thermodynamics to support ethical alignment in autonomous systems. It characterizes systemic anomalies as deviations on a statistical Riemannian manifold, using Kullback-Leibler divergence as the primary metric together with a dynamic threshold derived from the Fisher Information Metric. Drawing on the Landauer Principle, the model demonstrates that adversarial perturbations perform quantifiable physical work by raising informational entropy. Validation on the NSL-KDD dataset and on simulated unmanned aerial vehicle trajectories confirmed efficient real-time detection via a first-passage-time (FPT) trigger, with strong reported performance. The framework thus reinterprets AI safety through a formal isomorphism between control theory and thermodynamics; a minimal sketch of the detection loop follows.
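The detection loop can be pictured as follows. This is a minimal sketch, not the authors' implementation: `kl_divergence`, `fisher_threshold`, and `first_passage_trigger` are hypothetical names, and the curvature-scaled threshold (a constant divided by the trace of the categorical Fisher information) is an assumed stand-in for whatever Fisher-Information-Metric rule the paper actually uses.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence D_KL(p || q) for discrete distributions."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def fisher_threshold(reference, k=3.0, eps=1e-12):
    """Assumed dynamic threshold: a constant k scaled down by the trace of the
    Fisher information of the reference categorical distribution (sum of 1/p_i),
    so the alarm tightens where the statistical manifold is sharply curved."""
    p = np.asarray(reference, dtype=float) + eps
    p = p / p.sum()
    return k / float(np.sum(1.0 / p))

def first_passage_trigger(reference, observed_stream, k=3.0):
    """First-passage-time trigger: return the first step t at which
    D_KL(observed_t || reference) crosses the dynamic threshold, else None."""
    tau = fisher_threshold(reference, k)
    for t, observed in enumerate(observed_stream):
        if kl_divergence(observed, reference) > tau:
            return t
    return None
```

On a stream of per-window feature histograms (for example, NSL-KDD connection features binned per time window), such a trigger would fire at the first window whose divergence from the nominal profile exceeds the curvature-adjusted threshold.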
Key facts
- The Kerimov-Alekberli model links non-equilibrium thermodynamics to stochastic control for AI safety.
- Systemic anomalies are defined as deviations on a statistical Riemannian manifold.
- Kullback-Leibler divergence is the primary metric with a dynamic threshold from the Fisher Information Metric.
- The framework is grounded in the Landauer Principle.
- Adversarial perturbations are shown to perform measurable physical work by increasing informational entropy (see the worked Landauer bound after this list).
- Validation was performed on the NSL-KDD dataset and unmanned aerial vehicle trajectory simulations.
- Real-time detection was achieved via a first-passage-time (FPT) trigger, with strong reported performance.
- The model provides a formal isomorphism between non-equilibrium thermodynamics and stochastic control.
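To make the Landauer link concrete: erasing one bit of entropy at temperature T costs at least k_B T ln 2 of work, so a perturbation that injects ΔH bits of informational entropy implies a minimum physical cost of k_B T ln 2 · ΔH to undo. The back-of-envelope calculation below is illustrative; the function name and example figures are not from the paper.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)

def landauer_min_work(delta_h_bits, temperature_kelvin=300.0):
    """Lower bound on the work (J) needed to erase delta_h_bits of entropy
    at a given temperature: W >= k_B * T * ln 2 per bit (Landauer Principle)."""
    return delta_h_bits * K_B * temperature_kelvin * math.log(2)

# Example: a perturbation injecting 1,000 bits of entropy at room temperature (300 K)
print(landauer_min_work(1000))  # ~2.87e-18 J: tiny, but strictly nonzero physical work
```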