ARTFEED — Contemporary Art Intelligence

LLMs Tested on High-Level Message Sequence Charts

ai-technology · 2026-05-14

A new study evaluates whether large language models (LLMs) understand the semantics of High-Level Message Sequence Charts (HMSCs), a visual modeling language with formal semantics used in software architecture and the foundation of Sequence Diagrams in the Unified Modeling Language (UML). Researchers tested three LLMs (Gemini-3, GPT-5.4, and Qwen-3.6) on 129 semantic tasks, ranging from basic event ordering to semantic-preserving abstractions and compositions. The study, published on arXiv (2605.13773), addresses a gap in research on whether LLM output remains consistent with architectural design specifications. Results show performance varying across both models and task types, highlighting limitations in the LLMs' grasp of formal semantics.
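For readers unfamiliar with the formal semantics being tested, the "basic event ordering" of a Message Sequence Chart comes down to a happens-before partial order: events on the same lifeline are ordered top-down, and every message send precedes its receive. The sketch below is illustrative only (it is not code from the study, and all names are assumptions for demonstration):

```python
# Illustrative sketch of the happens-before partial order that gives
# (H)MSCs their formal event-ordering semantics. Not from the study;
# naming and representation are assumptions for demonstration only.
from itertools import product

def happens_before(lifelines, messages):
    """Build the happens-before relation of a basic MSC.

    lifelines: dict mapping instance name -> its events in top-down order
    messages:  list of (send_event, receive_event) pairs
    Returns the transitive closure as a set of (earlier, later) pairs.
    """
    order = set()
    # Rule 1: events on the same lifeline are totally ordered top-down.
    for events in lifelines.values():
        for i, e in enumerate(events):
            for f in events[i + 1:]:
                order.add((e, f))
    # Rule 2: a message's send always precedes its receive.
    for send, recv in messages:
        order.add((send, recv))
    # Transitive closure (naive fixpoint; fine for small charts).
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(list(order), repeat=2):
            if b == c and (a, d) not in order:
                order.add((a, d))
                changed = True
    return order

# Example chart: A sends m1 to B, then B sends m2 back to A.
lifelines = {"A": ["!m1", "?m2"], "B": ["?m1", "!m2"]}
messages = [("!m1", "?m1"), ("!m2", "?m2")]
hb = happens_before(lifelines, messages)
assert ("!m1", "?m2") in hb  # m1's send must precede A receiving m2
```

Semantic tasks of the kind the study describes amount to querying this relation, e.g. asking whether one event must occur before another or whether two charts induce the same partial order.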

Key facts

  • Study examines LLM understanding of HMSC semantics
  • Three LLMs tested: Gemini-3, GPT-5.4, Qwen-3.6
  • 129 semantic tasks performed
  • Tasks include basic constructs, abstractions, and compositions
  • Published on arXiv with ID 2605.13773
  • HMSCs are visual models with formal semantics
  • HMSCs used as foundation for UML Sequence Diagrams
  • Research addresses under-explored area of architectural design specification

Entities

Institutions

  • arXiv

Sources