RETUYT-INCO Uses Meta-Prompting for German Short Answer Scoring at BEA 2026
The RETUYT-INCO team participated in the BEA 2026 shared task on rubric-based short answer scoring for German, competing in Tracks 1, 3, and 4. Their main contribution is a meta-prompting method in which an LLM generates a custom grading prompt from training examples, and that generated prompt is then used to score new answers. They also explored classic machine learning, fine-tuning open-source LLMs, and several other prompting techniques. Official results placed them 6th of 8 in Track 1 (QWK 0.729) and 4th of 9 in Track 3 (QWK 0.674); they also submitted to Track 4.
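The meta-prompting idea can be sketched in a few lines: a first LLM call writes a grading prompt from labeled training examples, and a second call applies that prompt to a new answer. This is a hedged illustration of the general technique, not the team's actual pipeline; `call_llm` is a placeholder standing in for whatever model interface they used, and the German examples are invented.

```python
def call_llm(prompt: str) -> str:
    # Placeholder for a real LLM API call (e.g., to an open-source model).
    return "<llm response>"

def build_grading_prompt(question, scored_examples):
    """Ask the LLM to write a rubric-style grading prompt from examples."""
    shots = "\n".join(f"Answer: {a}\nScore: {s}" for a, s in scored_examples)
    meta_prompt = (
        "You are designing a grading prompt for German short answers.\n"
        f"Question: {question}\n"
        f"Scored training examples:\n{shots}\n"
        "Write a prompt that instructs a grader to assign consistent scores."
    )
    return call_llm(meta_prompt)

def grade_answer(grading_prompt, answer):
    """Apply the generated grading prompt to a new, unseen answer."""
    return call_llm(f"{grading_prompt}\n\nAnswer to grade: {answer}\nScore:")

examples = [("Die Hauptstadt ist Berlin.", 2), ("Ich weiß nicht.", 0)]
grading_prompt = build_grading_prompt(
    "Was ist die Hauptstadt Deutschlands?", examples
)
print(grade_answer(grading_prompt, "Berlin ist die Hauptstadt."))
```

The key design point is the two-stage structure: the prompt used at grading time is itself model-generated from the training data, rather than hand-written.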
Key facts
- RETUYT-INCO participated in BEA 2026 shared task
- Task focused on rubric-based short answer scoring for German
- Team competed in Track 1 (Unseen answers three-way), Track 3 (Unseen answers two-way), and Track 4 (Unseen questions two-way)
- Developed meta-prompting method using LLM to generate custom prompts
- Also used classic machine learning, fine-tuning open-source LLMs, and prompting techniques
- Placed 6th out of 8 in Track 1 with QWK 0.729
- Placed 4th out of 9 in Track 3 with QWK 0.674
- Paper published on arXiv with ID 2605.11242
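Both reported results use Quadratic Weighted Kappa (QWK), which measures agreement between predicted and gold scores while penalizing large disagreements more than small ones. A minimal pure-Python sketch of the metric, with illustrative scores that are not from the paper:

```python
def quadratic_weighted_kappa(gold, pred, num_labels):
    """QWK: 1 - (weighted observed disagreement / weighted expected disagreement)."""
    n = len(gold)
    # Observed confusion counts between gold and predicted labels.
    observed = [[0] * num_labels for _ in range(num_labels)]
    for g, p in zip(gold, pred):
        observed[g][p] += 1
    # Marginal histograms used for the chance-agreement baseline.
    gold_hist = [gold.count(c) for c in range(num_labels)]
    pred_hist = [pred.count(c) for c in range(num_labels)]

    num = den = 0.0
    for i in range(num_labels):
        for j in range(num_labels):
            # Quadratic penalty grows with the squared score distance.
            weight = (i - j) ** 2 / (num_labels - 1) ** 2
            num += weight * observed[i][j] / n
            den += weight * gold_hist[i] * pred_hist[j] / n**2
    return 1.0 - num / den

gold = [0, 1, 2, 2, 1, 0, 2, 1]  # hypothetical rubric scores on a 0-2 scale
pred = [0, 1, 2, 1, 1, 0, 2, 2]
print(round(quadratic_weighted_kappa(gold, pred, 3), 3))  # → 0.795
```

A QWK of 1.0 means perfect agreement and 0.0 means chance-level agreement, so the team's 0.729 and 0.674 indicate substantial but imperfect alignment with the gold scores.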
Entities
- RETUYT-INCO (participating team)
- BEA 2026 (shared task venue)