ARTFEED — Contemporary Art Intelligence

ST-PT Framework Transforms Transformer into Programmable Factor Graph for Time Series

other · 2026-04-30

The Probabilistic Transformer (PT) framework reinterprets the Transformer's self-attention and feed-forward blocks as mean-field variational inference on a conditional random field, turning the model into a programmable factor graph whose topology, potentials, and message-passing schedule are all explicit. Originally designed for natural language, PT is extended to time series as the Spatial-Temporal Probabilistic Transformer (ST-PT), which adds a channel axis and sharpens per-step semantics. The report identifies three properties of PT/ST-PT as a factor-graph model and poses one research question per property to probe its potential. The work is published on arXiv under ID 2604.26762.
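The attention-as-inference reading can be pictured with a minimal sketch. The code below is not the authors' derivation; it is a generic synchronous mean-field update on a fully connected pairwise CRF (names `mean_field_step`, `unary`, `pairwise` are illustrative), showing why one update has the same normalise-a-weighted-sum shape as a self-attention layer:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def mean_field_step(q, unary, pairwise):
    """One synchronous mean-field update on a fully connected pairwise CRF.

    q        : (T, D) current variational marginals, one row per token variable
    unary    : (T, D) log unary potentials
    pairwise : (D, D) log pairwise compatibility shared across all edges

    Each variable gathers the expected pairwise message from every other
    variable, adds its unary term, and renormalises with a softmax.
    """
    messages = q @ pairwise                                  # (T, D) per-sender messages
    total = messages.sum(axis=0, keepdims=True) - messages   # sum over senders, excluding self
    return softmax(unary + total, axis=-1)

# Toy run: iterating the update converges toward a mean-field fixed point.
rng = np.random.default_rng(0)
T, D = 5, 4
q = softmax(rng.normal(size=(T, D)))
unary = rng.normal(size=(T, D))
pairwise = rng.normal(size=(D, D))
for _ in range(10):
    q = mean_field_step(q, unary, pairwise)
```

Each row of `q` stays a valid distribution throughout, so the iterate is directly interpretable as a marginal, unlike an opaque hidden state.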

Key facts

  • The Probabilistic Transformer (PT) equates the self-attention plus feed-forward block with mean-field variational inference on a conditional random field.
  • PT turns the Transformer from a black-box neural network into a programmable factor graph.
  • Graph topology, factor potentials, and message-passing schedule are explicit and inspectable.
  • PT was originally developed for natural language processing.
  • ST-PT extends PT to time series by adding a channel axis and improving per-step semantics.
  • ST-PT serves as a shared backbone for time series modeling.
  • Three distinct properties of PT/ST-PT as a factor-graph model are identified.
  • Three research questions are derived, one per property, to probe each property's potential.
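
The "programmable factor graph" claim, i.e. that topology, potentials, and schedule are explicit and inspectable, can be made concrete with a hypothetical sketch. The `FactorGraphSpec` container and `run_mean_field` function below are illustrative names, not the paper's API; the point is that every modelling choice is a visible field rather than a buried weight:

```python
import numpy as np
from dataclasses import dataclass

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

@dataclass
class FactorGraphSpec:
    adjacency: np.ndarray  # (T, T) binary edge mask = explicit graph topology
    pairwise: np.ndarray   # (D, D) log compatibility = explicit factor potentials
    n_iters: int           # number of update sweeps = explicit message-passing schedule

def run_mean_field(spec, unary):
    """Mean-field inference whose every ingredient is a field of `spec`."""
    q = softmax(unary)
    for _ in range(spec.n_iters):
        msgs = spec.adjacency @ (q @ spec.pairwise)  # messages from neighbours only
        q = softmax(unary + msgs)
    return q

# A chain topology over 4 variables: each variable only talks to its neighbours.
T, D = 4, 3
adj = np.diag(np.ones(T - 1), 1) + np.diag(np.ones(T - 1), -1)
rng = np.random.default_rng(1)
spec = FactorGraphSpec(adjacency=adj, pairwise=rng.normal(size=(D, D)), n_iters=5)
q = run_mean_field(spec, rng.normal(size=(T, D)))
```

Swapping `adj` for a dense or banded mask, or changing `n_iters`, reprograms the model without touching the inference code, which is the sense in which the factor graph is "programmable".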

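The channel axis that ST-PT introduces can likewise be pictured with a small sketch. This is an assumption about what "adding a channel axis" means, not the paper's construction: per-step marginals gain a leading channel dimension, and messages split into a temporal family (within a channel) and a spatial family (across channels at the same step). The weight names `W_time` and `W_chan` are hypothetical:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# A language PT keeps marginals of shape (T, D); for a multivariate series,
# each of C channels gets its own set of per-step variables.
C, T, D = 3, 6, 4
rng = np.random.default_rng(2)
q = softmax(rng.normal(size=(C, T, D)))  # (channels, time steps, label states)

W_time = rng.normal(size=(D, D))  # temporal edge potentials (assumed shared)
W_chan = rng.normal(size=(D, D))  # cross-channel edge potentials (assumed shared)

# Temporal messages: from other steps in the same channel.
temporal = (q.sum(axis=1, keepdims=True) - q) @ W_time
# Spatial messages: from other channels at the same step.
spatial = (q.sum(axis=0, keepdims=True) - q) @ W_chan

# Combine both message families and renormalise (unary terms omitted for brevity).
q = softmax(temporal + spatial)
```

Two edge families on one factor graph give each time step a marginal that reflects both its own channel's history and the other channels' concurrent state.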
Entities

Institutions

  • arXiv
