ARTFEED — Contemporary Art Intelligence

BoolXLLM: LLM-Assisted Explainability for Boolean Models

ai-technology · 2026-05-13

A new hybrid framework called BoolXLLM integrates Large Language Models into the Boolean rule learning pipeline to improve interpretability. The system augments the BoolXAI classifier at three stages: feature selection, threshold recommendation, and rule translation. LLMs guide domain-relevant variable identification and propose meaningful discretization strategies for numerical features. The goal is to make formal logical rules accessible to non-technical stakeholders. The research is published on arXiv under identifier 2605.12139.
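To make the three stages concrete, here is a minimal, hypothetical sketch (not the authors' code): a Boolean rule over discretized numeric features, where the cutoff values and the plain-language rendering stand in for what BoolXLLM would ask an LLM to propose. All feature names, thresholds, and the rule itself are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of the BoolXLLM pipeline stages (not the authors' code).
# Stage 2 (threshold recommendation) and stage 3 (rule translation) are
# simulated with hard-coded values standing in for LLM suggestions.

def discretize(record, thresholds):
    """Turn numeric features into Boolean literals using proposed cutoffs."""
    return {f"{name}_high": record[name] >= cut for name, cut in thresholds.items()}

def rule(lits):
    """An assumed learned Boolean rule: (age_high AND glucose_high) OR bmi_high."""
    return (lits["age_high"] and lits["glucose_high"]) or lits["bmi_high"]

def translate(thresholds):
    """Plain-language rendering of the rule, as an LLM might phrase it."""
    return (f"Flag the patient if age is at least {thresholds['age']} "
            f"and glucose is at least {thresholds['glucose']}, "
            f"or if BMI is at least {thresholds['bmi']}.")

# Illustrative LLM-proposed cutoffs and an example record (both assumed).
thresholds = {"age": 60, "glucose": 126, "bmi": 30}
patient = {"age": 64, "glucose": 130, "bmi": 24}

lits = discretize(patient, thresholds)
print(rule(lits))            # True: age and glucose both exceed their cutoffs
print(translate(thresholds))
```

The point of the sketch is the division of labor: the formal rule stays a checkable Boolean expression, while the LLM-facing pieces (cutoff choice and translation) are the parts BoolXLLM delegates to the language model.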

Key facts

  • BoolXLLM is a hybrid framework integrating LLMs into Boolean rule learning.
  • It augments the BoolXAI classifier at three stages: feature selection, threshold recommendation, and rule translation.
  • LLMs guide identification of domain-relevant variables.
  • LLMs propose semantically meaningful discretization strategies for numerical features.
  • The work aims to make formal logical rules accessible to non-technical stakeholders.
  • Published on arXiv with identifier 2605.12139.
  • The item is classified as a new-paper announcement.
  • Interpretable machine learning seeks transparent models understandable by humans.

Entities

Institutions

  • arXiv
