FETA: Training-Free Time Series Classification via LLM Agents
FETA is a multi-agent framework for training-free time series classification: a reasoning LLM matches each query against structurally similar labeled exemplars supplied in context. The method decomposes a multivariate series into channel-wise subproblems, retrieves representative exemplars for each channel, and fuses the resulting channel-level labels with confidence weighting. This design removes the need for pretraining or fine-tuning, improves efficiency by pruning irrelevant channels and bounding input length, and makes the predictions more interpretable.
Key facts
- FETA is a multi-agent framework for training-free time series classification.
- It uses exemplar-based in-context reasoning with LLMs.
- Multivariate series are decomposed into channel-wise subproblems.
- Structurally similar labeled examples are retrieved for each channel.
- A reasoning LLM produces channel-level labels with self-assessed confidences.
- A confidence-weighted aggregator fuses all channel decisions.
- No pretraining or fine-tuning is required.
- Efficiency is improved by pruning irrelevant channels and controlling input length.
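The pipeline above can be sketched in a minimal, runnable form. This is an illustrative stand-in, not the paper's implementation: z-normalized Euclidean distance substitutes for FETA's LLM-based structural-similarity retrieval, and `label_fn` is a hypothetical hook where the reasoning LLM would return a channel label with a self-assessed confidence.

```python
import math
from collections import defaultdict

def znorm(x):
    # Z-normalize a series so retrieval compares shape, not scale.
    m = sum(x) / len(x)
    s = math.sqrt(sum((v - m) ** 2 for v in x) / len(x)) or 1.0
    return [(v - m) / s for v in x]

def distance(a, b):
    # Euclidean distance between z-normalized series (a simple proxy
    # for FETA's structural-similarity retrieval).
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(znorm(a), znorm(b))))

def retrieve(query_channel, exemplars, k=3):
    # exemplars: list of (series, label); return the k most similar.
    return sorted(exemplars, key=lambda e: distance(query_channel, e[0]))[:k]

def classify(query, exemplars_per_channel, label_fn, k=3):
    # query: dict mapping channel name -> series.
    # label_fn stands in for the reasoning LLM: given a channel's series
    # and its retrieved exemplars, it returns (label, confidence).
    votes = defaultdict(float)
    for ch, series in query.items():
        support = retrieve(series, exemplars_per_channel[ch], k)
        label, conf = label_fn(series, support)
        votes[label] += conf  # confidence-weighted fusion across channels
    return max(votes, key=votes.get)

def nn_label(series, support):
    # Toy stand-in for the LLM agent: majority label among retrieved
    # exemplars, with the agreeing fraction as the confidence.
    labels = [lab for _, lab in support]
    best = max(set(labels), key=labels.count)
    return best, labels.count(best) / len(labels)
```

For example, with one channel whose exemplars are rising series labeled "up" and falling series labeled "down", a rising query is classified "up" without any training step.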