Quantum Fine-Tuning of AI Models Achieves 24% Error Reduction
An experimental study posted on arXiv measures energy-to-solution (ETS) for hybrid quantum-classical applications using direct power instrumentation on a Forte Enterprise trapped-ion quantum processor. The methodology is applied to a pipeline for quantum fine-tuning of foundational AI models, validated end-to-end on quantum hardware. Despite noise and a limited qubit count, the resulting models match or exceed classical baselines in accuracy, including logistic regression and support vector classifiers. QPU energy consumption grows roughly linearly with the number of qubits for shallow circuits, whereas classical simulation grows exponentially, yielding an ETS break-even point at approximately 34 qubits. The best quantum fine-tuned model reduces classification error by 24% relative to the strongest classical fine-tuned model.
Key facts
- Study measures energy-to-solution of hybrid quantum-classical applications.
- Uses Forte Enterprise trapped-ion quantum processor with direct power instrumentation.
- Applies methodology to quantum fine-tuning of foundational AI models.
- Quantum models achieve accuracy competitive with classical baselines.
- QPU energy scales linearly with qubit number for shallow circuits.
- Classical simulation exhibits exponential energy scaling.
- ETS break-even occurs around 34 qubits.
- Best quantum fine-tuned model reduces classification error by 24% versus the best classical fine-tuned model.
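The linear-versus-exponential scaling above implies a crossover qubit count. Below is a minimal sketch of that ETS break-even model; all constants are hypothetical, chosen only so the crossover lands near the study's reported ~34 qubits, and do not reproduce the paper's measured coefficients.

```python
# Illustrative energy-to-solution (ETS) break-even model.
# All constants are HYPOTHETICAL placeholders, not values from the study.

QPU_BASE_J = 100.0      # hypothetical fixed QPU overhead (joules)
QPU_PER_QUBIT_J = 10.0  # hypothetical per-qubit cost for shallow circuits
SIM_COEFF_J = 2.6e-8    # hypothetical per-amplitude simulation cost

def qpu_energy(n_qubits: int) -> float:
    """Roughly linear QPU energy for shallow circuits."""
    return QPU_BASE_J + QPU_PER_QUBIT_J * n_qubits

def sim_energy(n_qubits: int) -> float:
    """Classical statevector simulation tracks 2**n amplitudes,
    so its energy grows exponentially with qubit count."""
    return SIM_COEFF_J * 2.0 ** n_qubits

def break_even(max_qubits: int = 50) -> int:
    """Smallest qubit count at which simulation costs more than the QPU."""
    for n in range(1, max_qubits + 1):
        if sim_energy(n) >= qpu_energy(n):
            return n
    raise ValueError("no crossover in range")

print(break_even())  # with these illustrative constants, prints 34
```

The qualitative point survives any choice of positive constants: an exponential curve always overtakes a linear one, so only the location of the crossover, not its existence, depends on the measured coefficients.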