ARTFEED — Contemporary Art Intelligence

LoMETab: Rank-r Generalization of Multiplicative Implicit Ensembles for Tabular Deep Learning

other · 2026-05-16

A recent arXiv preprint introduces LoMETab, a rank-r extension of multiplicative implicit ensembles for tabular deep learning. It lifts the rank-1 BatchEnsemble/TabM modulation to a rank-r identity-residual Hadamard family by parameterizing each member's weight as W_k = W ⊙ (1 + A_k B_k^T), where W is shared across members and (A_k, B_k) are member-specific low-rank factors. This framing exposes two axes for controlling ensemble diversity: the adapter rank r and the initialization scale σ_init. The authors show that for r ≥ 2 the extension strictly enlarges BatchEnsemble's hypothesis space. The work is motivated by recent tabular learning benchmarks, which show a tight performance cluster among top methods: gradient-boosted decision trees, attention-based architectures, and implicit ensembles such as TabM. With benchmark gains leveling off, attention shifts to understanding and controlling the mechanisms that keep simple neural tabular models competitive.
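The per-member parameterization can be sketched in a few lines of NumPy. This is an illustrative reconstruction from the formula W_k = W ⊙ (1 + A_k B_k^T) as stated above, not the paper's implementation; the dimensions, member count, and `sigma_init` value are assumptions chosen for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in = 8, 6   # layer dimensions (illustrative)
K = 4                # number of ensemble members (illustrative)
r = 2                # adapter rank
sigma_init = 0.1     # initialization scale for the low-rank factors (assumed value)

# Shared weight matrix and member-specific low-rank factors.
W = rng.standard_normal((d_out, d_in))
A = sigma_init * rng.standard_normal((K, d_out, r))
B = sigma_init * rng.standard_normal((K, d_in, r))

def member_weight(k):
    """Effective weight of member k: W_k = W ⊙ (1 + A_k B_k^T)."""
    return W * (1.0 + A[k] @ B[k].T)
```

The identity-residual form means that as σ_init → 0 the low-rank residual vanishes and every member collapses to the shared weight W, so all members start from (approximately) the same function and diversify during training.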

Key facts

  • LoMETab is a rank-r generalization of multiplicative implicit ensembles.
  • It lifts rank-1 BatchEnsemble/TabM modulation to a rank-r identity-residual Hadamard family.
  • Each member weight is parameterized as W_k = W ⊙ (1 + A_k B_k^T).
  • W is shared, (A_k, B_k) are member-specific low-rank factors.
  • Two diversity-control axes: adapter rank r and initialization scale σ_init.
  • For r ≥ 2, the generalization strictly enlarges BatchEnsemble's hypothesis space.
  • Recent tabular learning benchmarks show a tight performance cluster.
  • The work aims to understand mechanisms making simple neural tabular models competitive.
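The claim that r ≥ 2 strictly enlarges the hypothesis space can be checked numerically: a BatchEnsemble-style rank-1 residual a b^T always has matrix rank at most 1, whereas a generic rank-2 residual A B^T has rank 2 and therefore lies outside the rank-1 family. The sketch below is a hypothetical illustration with assumed dimensions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
d_out, d_in, r = 8, 6, 2  # illustrative shapes

# Rank-2 residual built from LoMETab-style member factors.
A = rng.standard_normal((d_out, r))
B = rng.standard_normal((d_in, r))
residual = A @ B.T

# A rank-1 outer product a b^T has matrix rank <= 1; for generic
# Gaussian factors the rank-2 residual attains full adapter rank.
print(np.linalg.matrix_rank(residual))  # 2 for generic factors
```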

Entities

Institutions

  • arXiv

Sources