CAP-like Trilemma for LLMs: Correctness, Non-bias, Utility
Drawing inspiration from the CAP theorem for distributed systems, a recent arXiv paper (2605.11672) introduces an analogous trilemma for Large Language Models (LLMs). It argues that under semantic underdetermination (where the input does not determine a unique answer), an LLM cannot simultaneously guarantee strong correctness, strict non-bias, and high utility. To produce a decisive, meaningful answer, the model must introduce a selection criterion, preference, or value hierarchy. If that criterion is not justified by the premises, the response is biased in the selection-theoretic sense. Conversely, refusing to introduce unsupported preferences preserves correctness and non-bias but sacrifices utility.
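The trade-off can be sketched with a toy model (an illustration only, not the paper's formalism; the candidate answers, strategies, and property flags below are invented for this example). Under underdetermination, several answers are equally supported, and each response strategy gives up one of the three properties:

```python
# Toy illustration of the trilemma (assumed example, not from the paper).
# A semantically underdetermined query: several answers are equally
# supported by the premises, so no unique answer exists.
from typing import Optional, Tuple

CANDIDATES = ["answer_a", "answer_b", "answer_c"]  # all equally supported


def respond(abstain: bool, preference: Optional[str] = None
            ) -> Tuple[Optional[str], bool, bool, bool]:
    """Return (answer, correct, unbiased, useful) for one strategy."""
    if abstain:
        # Refusing to choose keeps correctness and non-bias,
        # but yields no decisive answer: utility is sacrificed.
        return None, True, True, False
    # To be decisive, the model must apply some selection criterion.
    # If that preference is not justified by the premises, the choice
    # is biased in the selection-theoretic sense.
    choice = preference if preference in CANDIDATES else CANDIDATES[0]
    return choice, True, False, True


# No strategy achieves all three properties at once.
for abstain in (True, False):
    _, correct, unbiased, useful = respond(abstain, preference="answer_b")
    assert not (correct and unbiased and useful)
```

Here abstaining models the "safe" response and a hard-coded preference models an unjustified value hierarchy; in either branch at least one of the three flags is False.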
Key facts
- Paper on arXiv: 2605.11672
- Proposes CAP-like trilemma for LLMs
- Trilemma: correctness, non-bias, utility
- Under semantic underdetermination
- Model must introduce selection criterion for decisive response
- Unsupported preferences lead to bias
- Avoiding preferences may reduce utility
- Inspired by CAP theorem for distributed systems