ARTFEED — Contemporary Art Intelligence

LLM+ASP: Self-Correction Enables Task-Agnostic Nonmonotonic Reasoning

ai-technology · 2026-05-01

A new framework, LLM+ASP, translates natural language into Answer Set Programming (ASP), a nonmonotonic formalism grounded in stable model semantics. This lets large language models (LLMs) perform defeasible reasoning, in which conclusions can be retracted when new information arrives, without task-specific engineering. Unlike earlier approaches built on monotonic formalisms such as SMT, which cannot represent defeasible inference, LLM+ASP applies uniformly across diverse reasoning tasks. The system also includes an automated self-correction loop intended to improve logical coherence and reduce computational cost. The paper, available on arXiv (2604.27960), notes that existing neuro-symbolic techniques typically require hand-crafted knowledge modules or task-specific prompts, which LLM+ASP avoids.
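To see what ASP's defeasibility looks like in practice, consider the classic birds-fly default (a textbook illustration, not an example taken from the paper), written in standard ASP syntax:

```
% Birds fly by default; negation-as-failure ("not") encodes the exception.
flies(X) :- bird(X), not penguin(X).
bird(tweety).  bird(pingu).
penguin(pingu).
% The stable model contains flies(tweety) but not flies(pingu).
% Adding the fact penguin(tweety). retracts flies(tweety): the logic is
% nonmonotonic, which a monotonic formalism such as SMT cannot express.
```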

Key facts

  • LLM+ASP translates natural language into Answer Set Programming (ASP).
  • ASP is a nonmonotonic formalism based on stable model semantics.
  • The framework operates without per-task engineering.
  • It applies uniformly across diverse reasoning tasks.
  • Prior neuro-symbolic methods rely on monotonic formalisms such as SMT.
  • Monotonic logics cannot represent defeasible reasoning.
  • The system uses automated self-correction.
  • Paper published on arXiv with ID 2604.27960.
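The nonmonotonicity highlighted above can be made concrete with a small Python sketch of negation-as-failure, the mechanism behind ASP's default rules. This is an illustrative toy under simplifying assumptions (hand-grounded rules, a stratified program), not the paper's system; the predicates (bird, penguin, flies) are the stock textbook example.

```python
# Toy forward-chaining evaluator with negation-as-failure (illustrative only;
# real ASP solvers compute stable models, which this naive fixpoint only
# matches for stratified programs like the one below).

def derive(facts, rules):
    """Apply rules to a fixpoint. A rule fires when every positive body
    atom is derived and no negated atom is currently derivable."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for head, pos, neg in rules:
            if head not in known and pos <= known and not (neg & known):
                known.add(head)
                changed = True
    return known

# Default rule flies(X) :- bird(X), not penguin(X), grounded by hand.
rules = [
    ("flies(tweety)", {"bird(tweety)"}, {"penguin(tweety)"}),
    ("flies(pingu)",  {"bird(pingu)"},  {"penguin(pingu)"}),
]

base = {"bird(tweety)", "bird(pingu)", "penguin(pingu)"}
print("flies(tweety)" in derive(base, rules))  # True: default applies
# Nonmonotonicity: adding a fact retracts a previous conclusion.
print("flies(tweety)" in derive(base | {"penguin(tweety)"}, rules))  # False
```

Adding `penguin(tweety)` to the facts withdraws `flies(tweety)`, the behavior that monotonic logics rule out by definition.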

Entities

Institutions

  • arXiv

Sources