Study Analyzes AI-Generated Competency Questions for Ontology Engineering
A new study investigates the use of generative AI to automate the creation of Competency Questions (CQs), the natural language questions that define requirements in ontology engineering. Traditionally, CQs are developed manually by ontology engineers and domain experts through a human-centered process. The research, detailed in arXiv:2604.16258v1, aims to characterize the properties of CQs produced by large language models (LLMs), including readability and structural complexity. Through a systematic, cross-domain analysis, the study introduces quantitative measures for evaluating these AI-generated questions. The approach seeks to democratize ontology engineering by enabling scalable CQ generation, broadening stakeholder engagement, and widening access to the field. The analysis also accounts for the diverse landscape of LLMs, which vary in parameter scale, task specialization, and accessibility. The findings could shape how AI tools are integrated into technical domains.
Key facts
- Competency Questions (CQs) are used for requirement elicitation in ontology engineering
- CQs are traditionally modeled manually by ontology engineers and domain experts
- Generative AI can automate CQ creation at scale
- The study characterizes properties of LLM-generated CQs, such as readability and structural complexity
- The research uses a systematic, cross-domain analysis
- Quantitative measures are introduced to evaluate AI-generated CQs
- The goal is to democratize ontology engineering and broaden stakeholder engagement
- The study considers variations in LLMs, including parameter scale and domain specialization