LLMs Assist Ontology Learning Through Counter-Concept Verbalization
A recent study introduces Large Language Models as a third component in active learning for OWL ontologies, where the models supply real-world examples that approximate instances of counter-concepts. The method reformulates each candidate axiom into its counter-concept and verbalizes that counter-concept in controlled natural language before posing it to the LLM. By construction, only Type II errors can arise in ontology modeling, and these at worst delay construction without introducing inconsistencies. Experiments with 13 commercial LLMs report recall, which reflects Type II errors, as evidence of the method's effectiveness. The work was posted on arXiv under identifier 2604.16672v1.

In active learning, a membership query lets the learner ask questions such as 'Is every apple a fruit?', which the teacher answers correctly; such queries function as subsumption tests with respect to the target ontology.
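The verification loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `Axiom` representation, the verbalization template, and the `ask_llm_for_instance` stub are all assumptions introduced here for clarity.

```python
# Sketch: verify a candidate subsumption axiom C ⊑ D by asking an LLM for a
# real-world instance of the counter-concept C ⊓ ¬D, verbalized in
# controlled natural language.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Axiom:
    sub: str  # subclass name, e.g. "apple"
    sup: str  # superclass name, e.g. "fruit"

def verbalize_counter_concept(axiom: Axiom) -> str:
    # Controlled-natural-language rendering of C ⊓ ¬D
    # (this template is an illustrative assumption).
    return f"an {axiom.sub} that is not a {axiom.sup}"

def ask_llm_for_instance(description: str) -> Optional[str]:
    # Placeholder for a call to a commercial LLM: it should return a
    # real-world example matching the description, or None if it finds none.
    raise NotImplementedError("wire up an LLM client here")

def accept_axiom(axiom: Axiom,
                 find_instance: Callable[[str], Optional[str]] = ask_llm_for_instance) -> bool:
    # If the LLM names an instance of the counter-concept, reject the axiom;
    # otherwise tentatively accept it. Per the paper's design, erroneous
    # answers can only delay ontology construction, not introduce
    # inconsistencies.
    counterexample = find_instance(verbalize_counter_concept(axiom))
    return counterexample is None
```

For example, `accept_axiom(Axiom("apple", "fruit"), find_instance=lambda d: None)` accepts the axiom because no counterexample to "an apple that is not a fruit" was produced.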
Key facts
- Large Language Models are introduced as a third component in active learning for OWL ontologies
- The method reformulates candidate axioms into counter-concepts and verbalizes them in controlled natural language
- Experimental results cover 13 commercial LLMs
- The design ensures only Type II errors may occur in ontology modeling
- At worst, Type II errors merely delay construction without introducing inconsistencies
- Membership queries in active learning allow posing questions like 'Is every apple a fruit?'
- Membership queries can be viewed as subsumption tests with respect to target ontologies
- The research was announced on arXiv with identifier 2604.16672v1
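The last two points above can be made concrete with a toy example: answering the membership query "Is every apple a fruit?" amounts to a subsumption check against the target ontology. The dict-based taxonomy and class names below are illustrative assumptions, not from the paper.

```python
# Toy target ontology: each class maps to its direct superclasses.
TAXONOMY = {
    "apple": ["fruit"],
    "fruit": ["food"],
    "food": [],
}

def is_subsumed_by(sub: str, sup: str, taxonomy: dict = TAXONOMY) -> bool:
    # Membership query "Is every <sub> a <sup>?" answered as a reachability
    # test over the superclass graph (reflexive-transitive closure).
    if sub == sup:
        return True
    return any(is_subsumed_by(parent, sup, taxonomy)
               for parent in taxonomy.get(sub, []))
```

Here `is_subsumed_by("apple", "food")` returns True via the chain apple ⊑ fruit ⊑ food, while `is_subsumed_by("fruit", "apple")` returns False, mirroring how a teacher answers membership queries relative to the target ontology.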