ARTFEED — Contemporary Art Intelligence

BADIT: Decomposing LLM Abilities to Reduce Cross-Task Interference

ai-technology · 2026-05-09

A new paper on arXiv (2605.05676) proposes Basic Abilities Decomposition for multi-task Instruct-Tuning (BADIT) to address cross-task interference in large language models. The authors show empirically that existing remedies, such as task-specific neuron selection and mixture-of-experts routing, still suffer from interference because tasks continue to update shared parameters with conflicting gradients. They also find that certain parameters are consistently co-activated and organize into base groups, drawing an analogy to LLMs encoding a set of roughly orthogonal basic abilities. BADIT decomposes these basic abilities so that multi-task training produces fewer conflicting gradients.
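
To make the interference mechanism concrete, here is a minimal PyTorch sketch (not from the paper; the models, data, and losses are hypothetical toy stand-ins) that measures gradient conflict between two tasks sharing a layer. A negative cosine similarity between the per-task gradients means one task's update moves the shared weights against the other's.

  import torch
  import torch.nn as nn
  import torch.nn.functional as F

  torch.manual_seed(0)

  shared = nn.Linear(16, 16)   # parameters shared by both tasks
  head_a = nn.Linear(16, 4)    # toy regression head for task A
  head_b = nn.Linear(16, 4)    # toy classification head for task B

  x = torch.randn(32, 16)              # shared inputs
  y_a = torch.randn(32, 4)             # task A targets
  y_b = torch.randint(0, 4, (32,))     # task B labels

  def shared_grad(loss):
      # Gradient of this task's loss w.r.t. the shared weights, flattened.
      (g,) = torch.autograd.grad(loss, shared.weight)
      return g.flatten()

  g_a = shared_grad(F.mse_loss(head_a(shared(x)), y_a))
  g_b = shared_grad(F.cross_entropy(head_b(shared(x)), y_b))

  # cos < 0: the tasks pull the shared parameters in opposing directions,
  # which is the conflicting-gradient signature of cross-task interference.
  cos = F.cosine_similarity(g_a, g_b, dim=0)
  print(f"gradient cosine similarity: {cos.item():+.3f}")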

Key facts

  • arXiv paper 2605.05676
  • Title: Decomposing the Basic Abilities of Large Language Models: Mitigating Cross-Task Interference in Multi-Task Instruct-Tuning
  • Proposes BADIT (Basic Abilities Decomposition for multi-task Instruct-Tuning)
  • Cross-task interference arises from conflicting gradients over shared parameters
  • Existing methods: task-specific neuron selection, mixture-of-experts
  • Empirical finding: certain parameters are consistently co-activated
  • Co-activated parameters organize into base groups (see the sketch after this list)
  • By analogy, LLMs are taken to encode roughly orthogonal basic abilities
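
The co-activation finding can be illustrated with a small toy as well. The following NumPy sketch is an assumption-laden illustration, not the paper's algorithm: synthetic neuron activations are generated from a few latent "abilities", and neurons whose activations correlate strongly are greedily merged into the same base group.

  import numpy as np

  rng = np.random.default_rng(0)

  # Toy activations: 200 inputs x 8 neurons, driven by 4 latent "abilities"
  # so that neurons {0,1,2}, {3,4}, {5}, and {6,7} co-activate.
  latent = rng.standard_normal((200, 4))
  mixing = np.zeros((4, 8))
  mixing[0, 0:3] = 1.0
  mixing[1, 3:5] = 1.0
  mixing[2, 5] = 1.0
  mixing[3, 6:8] = 1.0
  acts = latent @ mixing + 0.05 * rng.standard_normal((200, 8))

  # Neuron-by-neuron activation correlation.
  corr = np.corrcoef(acts, rowvar=False)

  # Greedy grouping: neurons whose |correlation| clears a threshold
  # join the same group; each neuron is assigned exactly once.
  threshold = 0.8
  groups, assigned = [], set()
  for i in range(corr.shape[0]):
      if i in assigned:
          continue
      group = [j for j in range(corr.shape[0])
               if j not in assigned and abs(corr[i, j]) > threshold]
      assigned.update(group)
      groups.append(group)

  print(groups)  # recovers [[0, 1, 2], [3, 4], [5], [6, 7]]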

Entities

Institutions

  • arXiv
