ARTFEED — Contemporary Art Intelligence

Crystal: LLM-Based Method for Ranking Cited Papers

ai-technology · 2026-04-25

Researchers have introduced Crystal, a technique that uses large language models (LLMs) to rank all cited works within a citing paper jointly, rather than evaluating each citation in isolation. By drawing on the full context in which citations appear, the method distinguishes significant references more reliably. Compared with a previous state-of-the-art impact classifier, Crystal improves accuracy by +9.5% and F1 by +8.3% on a dataset of human-annotated citations. To counter the positional bias of LLMs, each citation list is ranked three times in randomized order, and impact labels are assigned by majority vote. Crystal also reduces the number of LLM calls required and remains competitive even when run with an open-source model, making citation impact assessment scalable and cost-effective.
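The pipeline described above — jointly ranking the full citation list, repeating the ranking three times in randomized order, and taking a majority vote over the resulting impact labels — might be sketched as follows. All names here are illustrative assumptions, and the actual LLM prompt is replaced by a deterministic placeholder so the sketch runs standalone.

```python
import random
from collections import Counter

def rank_citations(citing_context: str, citations: list[str]) -> list[str]:
    """Placeholder for the LLM call: given the citing paper's context and a
    candidate list, return citations ordered from most to least impactful.
    (Hypothetical stand-in; Crystal would prompt an LLM here.)"""
    return sorted(citations)  # deterministic stub, not the real model

def label_by_rank(ranking: list[str], top_k: int) -> dict[str, str]:
    """Map one ranking to impact labels: the top_k entries count as influential.
    (The cutoff rule is an assumption for illustration.)"""
    return {c: ("influential" if i < top_k else "incidental")
            for i, c in enumerate(ranking)}

def crystal_vote(citing_context: str, citations: list[str],
                 top_k: int = 2, runs: int = 3, seed: int = 0) -> dict[str, str]:
    """Rank the full list `runs` times in randomized presentation order and
    majority-vote the per-run labels, mitigating LLM positional bias."""
    rng = random.Random(seed)
    votes: dict[str, Counter] = {c: Counter() for c in citations}
    for _ in range(runs):
        shuffled = citations[:]
        rng.shuffle(shuffled)  # randomize the order shown to the model
        ranking = rank_citations(citing_context, shuffled)
        for c, label in label_by_rank(ranking, top_k).items():
            votes[c][label] += 1
    return {c: counter.most_common(1)[0][0] for c, counter in votes.items()}

labels = crystal_vote("…citing paper context…", ["[A]", "[B]", "[C]", "[D]"])
```

Because the whole list is scored in one call per run, this design needs only `runs` LLM invocations per citing paper, rather than one per citation.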

Key facts

  • Crystal jointly ranks all cited papers within a citing paper using LLMs.
  • It outperforms prior state-of-the-art by +9.5% accuracy and +8.3% F1.
  • Positional bias is mitigated by ranking each list three times in randomized order.
  • Impact labels are aggregated through majority voting.
  • Crystal uses fewer LLM calls than previous methods.
  • It remains competitive even when run with an open-source model.
  • The method leverages full citation context for more reliable impact distinction.
  • Dataset of human-annotated citations was used for evaluation.
