ARTFEED — Contemporary Art Intelligence

Scaling Amortized Inference to Large Observation Sets

other · 2026-05-11

A new method enables neural posterior estimation to scale to arbitrarily large observation sets without prohibitive memory or compute costs. The approach trains a mean-pool Deep Set encoder on sets of size at most two, then fine-tunes the inference head on pre-aggregated embeddings, making training cost essentially independent of deployment set size N. This decouples representation learning from posterior modeling. The technique is demonstrated across scalar, image, and multi-view 3D tasks. The paper is available on arXiv (2605.07972).
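
The core mechanism is a permutation-invariant Deep Set encoder: a per-element network followed by mean pooling, so the embedding is well-defined for any set size even though training only ever sees sets of one or two elements. Below is a minimal PyTorch sketch of such an encoder; the class name, layer widths, and dimensions are illustrative assumptions, not details taken from the paper.

    import torch
    import torch.nn as nn

    class MeanPoolDeepSet(nn.Module):
        # Per-element network phi followed by mean pooling over the set axis.
        # Mean pooling (rather than sum) keeps the embedding scale independent
        # of set size, which is what lets an encoder trained on sets of size
        # <= 2 produce comparable embeddings for arbitrarily large sets.
        def __init__(self, obs_dim, embed_dim, hidden=128):
            super().__init__()
            self.phi = nn.Sequential(
                nn.Linear(obs_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, embed_dim),
            )

        def forward(self, x):
            # x: (batch, set_size, obs_dim); set_size is 1 or 2 during
            # training but can be any N at deployment time.
            return self.phi(x).mean(dim=1)

Because each training step only encodes sets of size at most two, the per-step cost of representation learning never depends on the deployment set size.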

Key facts

  • Method trains mean-pool Deep Set on sets of size ≤2
  • Encoder generalizes to arbitrary set sizes
  • Inference head fine-tuned on pre-aggregated embeddings (see the sketch after this list)
  • Training cost independent of deployment set size N
  • Demonstrated on scalar, image, multi-view 3D tasks
  • arXiv:2605.07972
  • Builds on neural posterior estimation, a form of amortized inference
  • Decouples representation learning from posterior modeling
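
Because the pooled embedding is a running mean of per-element features, a deployment set of any size N can be aggregated in one streaming pass with constant memory, and the inference head can then be fine-tuned on those pre-aggregated embeddings without re-encoding full sets. A hedged sketch continuing the assumed encoder above; the streaming helper and the diagonal-Gaussian head are illustrative choices, not the paper's stated architecture.

    @torch.no_grad()
    def aggregate_stream(encoder, batches):
        # Running mean of per-element embeddings over an iterable of
        # observation batches of shape (b, obs_dim); memory stays O(1) in
        # the total set size N.
        total, count = None, 0
        for batch in batches:
            e = encoder.phi(batch).sum(dim=0)
            total = e if total is None else total + e
            count += batch.shape[0]
        return total / count

    class GaussianPosteriorHead(nn.Module):
        # Maps a pooled embedding to the mean and scale of a diagonal
        # Gaussian over the parameters theta. Only this head is fine-tuned;
        # the encoder stays frozen, so training embeddings can be
        # precomputed once ("pre-aggregated").
        def __init__(self, embed_dim, theta_dim, hidden=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(embed_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, 2 * theta_dim),
            )

        def forward(self, z):
            mu, log_sigma = self.net(z).chunk(2, dim=-1)
            return mu, log_sigma.exp()

Since the head only ever sees fixed-size embeddings, its fine-tuning cost is the same whether a deployment set holds ten observations or ten million; this is the sense in which training is decoupled from N.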

Entities

Institutions

  • arXiv
