ARTFEED — Contemporary Art Intelligence

AI reliance violates Grice's Maxim of Quality, warns paper

other · 2026-05-01

A preprint on arXiv warns that the public's uncritical reliance on large language models (LLMs) for financial, legal, and medical advice violates Grice's Maxim of Quality and Lemoine's legal Maxim of Innocence. The paper, updated in 2025, notes that users often accept AI output without logical or empirical verification, risking Type II errors (false negatives) in plagiarism detection and the fallacy of affirming the consequent. The authors caution that even models grounded in strong ground truth or symbolic reasoning remain uncertain, and that blind acceptance of AI answers amounts to a failure to question output.

Key facts

  • arXiv:2304.14352v2 is a preprint on AI epistemic risks.
  • The paper was updated in 2025 (replace-cross).
  • LLMs are used for financial, legal, and medical consultation.
  • Users often accept AI advice without verification.
  • Reliance on AI violates Grice's Maxim of Quality.
  • Reliance on AI violates Lemoine's Maxim of Innocence.
  • Low-sensitivity plagiarism scanners may produce Type II errors.
  • The fallacy of affirming the consequent occurs when a failure to detect a difference is accepted as proof that no difference exists.
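The scanner risk described above can be sketched in a few lines of Python. The word-overlap metric and the 0.8 threshold below are illustrative assumptions, not the paper's method; the point is that a low-sensitivity detector can miss a paraphrase (a Type II error), and treating "no match found" as proof of originality is the fallacy the paper names.

```python
def jaccard_similarity(a: str, b: str) -> float:
    """Word-overlap similarity between two texts, from 0.0 to 1.0."""
    words_a, words_b = set(a.lower().split()), set(b.lower().split())
    return len(words_a & words_b) / len(words_a | words_b)

def is_plagiarized(text: str, source: str, threshold: float = 0.8) -> bool:
    """Illustrative low-sensitivity scanner: flags only near-verbatim copies."""
    return jaccard_similarity(text, source) >= threshold

source = "the quick brown fox jumps over the lazy dog"
paraphrase = "a fast brown fox leaps over a lazy dog"  # copied in substance

# The scanner fails to flag the paraphrase: a Type II error (false negative).
# Concluding "not flagged, therefore original" affirms the consequent.
flagged = is_plagiarized(paraphrase, source)  # False, despite the copying
```

A verbatim copy would still be caught (`is_plagiarized(source, source)` is `True`), which is exactly why an unflagged result says little on its own: the detector's sensitivity, not the text's originality, determines the outcome.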

Entities

Institutions

  • arXiv
