ARTFEED — Contemporary Art Intelligence

Quantum Transformers Fail to Outperform Simpler VQC Architectures on Tabular Data

publication · 2026-05-01

A recent study benchmarks four variational quantum circuit (VQC) architectures against classical tabular models and finds that simpler designs hold their own against more complex attention-based ones. The paper (arXiv 2604.23931) evaluates fully-connected VQCs (FC-VQC), residual VQCs (ResNet-VQC), hybrid quantum-classical transformers (QT), and fully quantum transformers (FQT) across five regression and classification tasks. Notably, FC-VQCs reach 90-96% of the R² of the attention-based variants while using 40-50% fewer parameters. On the Boston Housing dataset, for example, FC-VQC attains a mean R² of 0.829, beating a capacity-matched MLP at 0.753. The authors conclude that the added complexity of quantum transformers may not be justified for tabular data.
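The headline comparisons all use R², the coefficient of determination. As a quick reference for how that metric is computed, here is a minimal sketch with hypothetical predictions (the arrays below are illustrative, not data from the paper):

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total variance around the mean
    return 1.0 - ss_res / ss_tot

# Hypothetical targets and two models of differing quality
y = np.array([10.0, 20.0, 30.0, 40.0])
pred_tight = np.array([11.0, 19.0, 31.0, 39.0])  # close fit
pred_loose = np.array([14.0, 18.0, 33.0, 36.0])  # looser fit

print(round(r2_score(y, pred_tight), 3))  # → 0.992
print(round(r2_score(y, pred_loose), 3))  # → 0.91
```

An R² of 1.0 means perfect prediction; 0.0 means no better than predicting the mean, which is why ratios like "90-96% of attention-based R²" are a meaningful way to compare architectures on the same task.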

Key facts

  • arXiv paper 2604.23931 compares four VQC families on tabular benchmarks.
  • FC-VQC achieves 90-96% of attention-based VQC R² with 40-50% fewer parameters.
  • On Boston Housing, FC-VQC mean R²=0.829 vs MLP720's 0.753.
  • FC-VQC Type 4 connectivity approximates attention via cross-token mixing.
  • Explicit quantum self-attention yields only marginal gains.
  • Study covers five regression and classification benchmarks.
  • ResNet-VQC, QT, and FQT also evaluated.
  • Results question the necessity of quantum transformers for tabular data.
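The paper's specific "Type 4" connectivity is not detailed here, but the general idea of a fully-connected variational layer — parameterized single-qubit rotations followed by entangling gates that mix information across qubits — can be sketched with a plain numpy statevector simulation. This is a generic toy circuit for illustration only, not the architecture from arXiv 2604.23931:

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def apply_1q(state, gate, q, n):
    """Apply a 2x2 gate to qubit q of an n-qubit statevector."""
    state = state.reshape([2] * n)
    state = np.moveaxis(state, q, 0)
    state = np.tensordot(gate, state, axes=([1], [0]))
    state = np.moveaxis(state, 0, q)
    return state.reshape(-1)

def apply_cnot(state, ctrl, tgt, n):
    """Apply CNOT(ctrl -> tgt): flip the target amplitudes where ctrl = 1."""
    state = state.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[ctrl] = 1
    sub = state[tuple(idx)]
    state[tuple(idx)] = np.flip(sub, axis=tgt if tgt < ctrl else tgt - 1)
    return state.reshape(-1)

def vqc_layer(thetas, n):
    """One variational layer: RY rotations, then a CNOT ring for mixing.
    Returns <Z> on the last qubit as the circuit's scalar output."""
    state = np.zeros(2 ** n)
    state[0] = 1.0                      # start in |0...0>
    for q in range(n):
        state = apply_1q(state, ry(thetas[q]), q, n)
    for q in range(n):                  # entangling ring mixes all qubits
        state = apply_cnot(state, q, (q + 1) % n, n)
    probs = np.abs(state) ** 2
    z = np.array([1.0 if (i & 1) == 0 else -1.0 for i in range(2 ** n)])
    return float(probs @ z)

print(vqc_layer(np.zeros(3), 3))            # identity circuit → 1.0
print(vqc_layer(np.array([np.pi, 0, 0]), 3))  # bit flip propagates → -1.0
```

The entangling ring is what lets each qubit's rotation angle influence measurements elsewhere in the register; in a trainable model the `thetas` would be optimized by gradient descent against a loss such as mean squared error.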

Entities

Institutions

  • arXiv
