ARTFEED — Contemporary Art Intelligence

Depth Pruning in LLMs: Calibration Objectives Matter More Than Search Algorithms

ai-technology · 2026-04-30

A recent study challenges the assumption that layer redundancy in large language models is an inherent structural property. The authors instead take a functional view: redundancy is shaped jointly by the model and the evaluation objective used for calibration. Across three LLM families, two calibration objectives, and seven search algorithms, they find that different objectives identify qualitatively different sets of redundant layers, and that rankings based on perplexity and on downstream accuracy do not consistently align. Once an objective is fixed, however, the search algorithms tend to converge on similar solutions. The takeaway: the choice of calibration objective may matter more than the choice of search algorithm. The paper is available on arXiv under ID 2604.24938.
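To make the setting concrete, here is a minimal sketch of depth pruning as the paper frames it: a model is a stack of layers, and a search procedure removes the layers that a calibration objective deems least important. Everything below (the toy layers, the error-based objective, the greedy search) is illustrative and assumed, not taken from the paper.

```python
# Toy depth pruning: a "model" is a list of layer functions; we
# greedily remove the layer whose removal hurts a calibration
# objective least. Stand-in for pruning Transformer blocks.

def run(layers, x):
    """Apply the remaining layer stack to an input."""
    for f in layers:
        x = f(x)
    return x

def greedy_prune(layers, objective, n_remove):
    """Drop n_remove layers one at a time, each time keeping the
    candidate stack that scores best under the objective."""
    kept = list(layers)
    for _ in range(n_remove):
        best = max(range(len(kept)),
                   key=lambda i: objective(kept[:i] + kept[i + 1:]))
        kept.pop(best)
    return kept

# Toy layers: the last one is nearly a no-op, i.e. "redundant".
layers = [lambda x: x * 2, lambda x: x + 3, lambda x: x + 0.001]

# Toy objective: negative output error against the full model,
# standing in for negative perplexity on calibration data.
target = run(layers, 1.0)
objective = lambda stack: -abs(run(stack, 1.0) - target)

pruned = greedy_prune(layers, objective, n_remove=1)
# The near-no-op layer is the one removed.
```

In a real setting the objective would evaluate perplexity or task accuracy on a held-out calibration set, and the search could be any of the seven algorithms the study compares; the greedy loop here is just one simple instance.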

Key facts

  • Depth pruning removes Transformer blocks to improve inference efficiency.
  • Prior work treated layer redundancy as an inherent structural property.
  • Study adopts functional perspective: redundancy depends on model and evaluation objective.
  • Analyzed three LLM families, two calibration objectives, seven search algorithms.
  • Different objectives yield qualitatively different redundant layers.
  • Perplexity and downstream accuracy rankings do not consistently align.
  • Under fixed objective, search algorithms produce similar solutions.
  • Calibration objective may be more influential than search algorithm choice.
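The central finding, that different objectives flag different layers as redundant, can be illustrated with a toy drop-one analysis. The layers and both scoring rules below are invented for illustration; the paper's actual objectives are perplexity and downstream accuracy.

```python
# Rank layers by redundancy under two different calibration
# objectives and observe that the rankings need not agree.

def run(layers, x):
    for f in layers:
        x = f(x)
    return x

layers = [lambda x: x + 1.0,    # large additive shift
          lambda x: x * 1.001,  # nearly a no-op multiplier
          lambda x: x + 0.001]  # nearly a no-op shift

x0 = 10.0
full = run(layers, x0)

def redundancy_rank(objective):
    """Indices sorted by how little the objective changes when the
    layer is dropped (most redundant first)."""
    impact = [abs(objective(layers[:i] + layers[i + 1:]) - objective(layers))
              for i in range(len(layers))]
    return sorted(range(len(layers)), key=lambda i: impact[i])

# Objective A: raw output, a smooth stand-in for perplexity.
obj_a = lambda stack: run(stack, x0)
# Objective B: thresholded score, a coarse stand-in for accuracy.
obj_b = lambda stack: 1.0 if run(stack, x0) > full - 0.005 else 0.0

rank_a = redundancy_rank(obj_a)
rank_b = redundancy_rank(obj_b)
# Both agree on the most redundant layer here, but order the
# remaining layers differently.
```

The coarse objective cannot distinguish the two damaging layers, so its ranking diverges from the smooth one, a toy analogue of perplexity and downstream-accuracy rankings failing to align.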

Entities

Institutions

  • arXiv

Sources