ARTFEED — Contemporary Art Intelligence

Ensemble Gemma Models Achieve Top-2 in Multilingual Polarization Detection

ai-technology · 2026-05-07

A research team describes their system for SemEval-2026 Task 9: Multilingual Polarization Detection, a binary classification task spanning 22 languages. The approach fine-tunes separate Gemma 3 models (12B and 27B parameters) for each language with Low-Rank Adaptation (LoRA), augmented by synthetic data from GPT-4o-mini generated three ways: direct generation, paraphrasing, and contrastive pair creation. A multi-stage quality-filtering pipeline, including embedding-based deduplication, screens the synthetic data. Tuning decision thresholds per language on the development set yields F1 gains of 2–4% without any retraining. Weighted ensembles of the 12B and 27B predictions, combined with per-language strategy selection, achieve a mean macro-F1 of 0.811, ranking 2nd overall with 1st place in 3 languages and top-3 finishes in 8.
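The paper's exact adapter hyperparameters are not given here, but the core of LoRA is easy to state: the frozen weight matrix W is augmented with a trainable low-rank update (alpha / r) * B @ A. A minimal numpy sketch (toy dimensions, illustrative rank and alpha):

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen base weight of one linear layer (toy dimensions).
d_out, d_in = 8, 8
W = rng.standard_normal((d_out, d_in))

# LoRA factors: only A and B are trained; rank r is much smaller than d.
r, alpha = 2, 16
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))  # zero-init, so training starts at the base model

def lora_forward(x):
    # Base path plus low-rank update, scaled by alpha / r.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B = 0, the adapted layer matches the frozen base exactly.
assert np.allclose(lora_forward(x), W @ x)
```

Only A and B (2 * r * d parameters per layer instead of d * d) are updated during fine-tuning, which is what makes training a separate adapter per language affordable.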

Key facts

  • Task: SemEval-2026 Task 9, multilingual polarization detection, binary classification, 22 languages
  • Models: Fine-tuned Gemma 3 (12B and 27B parameters) per language with LoRA
  • Data augmentation: Synthetic data from GPT-4o-mini via direct generation, paraphrasing, contrastive pair creation
  • Quality filtering: Multi-stage pipeline with embedding-based deduplication
  • Threshold tuning: Per-language on development set, 2–4% F1 improvement
  • Ensemble: Weighted ensembles of 12B and 27B predictions with per-language strategy selection
  • Result: Mean macro-F1 0.811, 2nd overall, 1st in 3 languages, top-3 in 8 languages
  • Source: arXiv:2605.05159
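The paper's filtering pipeline is multi-stage; its deduplication step is described only as embedding-based. One common realization, sketched here under that assumption, is a greedy pass that drops any example whose embedding is too cosine-similar to an already-kept one (the 0.95 threshold is illustrative):

```python
import numpy as np

def dedup_by_embedding(embeddings, threshold=0.95):
    """Greedily keep an example only if its cosine similarity to every
    previously kept example stays below `threshold`."""
    E = np.asarray(embeddings, dtype=float)
    E = E / np.linalg.norm(E, axis=1, keepdims=True)  # unit-normalize rows
    kept = []
    for i, e in enumerate(E):
        if all(float(e @ E[j]) < threshold for j in kept):
            kept.append(i)
    return kept

# Toy vectors: items 0 and 1 are near-duplicates, item 2 is distinct.
vecs = [[1.0, 0.0], [0.999, 0.04], [0.0, 1.0]]
assert dedup_by_embedding(vecs) == [0, 2]
```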
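The 2–4% F1 gain from threshold tuning comes from a cheap post-hoc step: sweep candidate decision thresholds over each language's dev-set probabilities and keep the one maximizing macro-F1, with no retraining. A sketch of that sweep (the grid and toy data are illustrative):

```python
import numpy as np

def macro_f1(y_true, y_pred):
    # Macro-F1 over the two classes of a binary task.
    f1s = []
    for c in (0, 1):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * p * r / (p + r) if p + r else 0.0)
    return float(np.mean(f1s))

def tune_threshold(y_true, probs):
    """Sweep decision thresholds on dev-set probabilities and keep the
    one that maximizes macro-F1 -- no retraining involved."""
    best_t, best_f1 = 0.5, -1.0
    for t in np.linspace(0.05, 0.95, 19):
        f1 = macro_f1(y_true, (probs >= t).astype(int))
        if f1 > best_f1:
            best_t, best_f1 = float(t), f1
    return best_t, best_f1

# Toy dev set: a threshold below the default 0.5 separates it perfectly.
y = np.array([0, 0, 1, 1, 1])
p = np.array([0.1, 0.4, 0.45, 0.7, 0.9])
t, f1 = tune_threshold(y, p)
assert macro_f1(y, (p >= t).astype(int)) == 1.0
```

Because the sweep is run per language, each of the 22 languages ends up with its own operating point.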
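The final predictions blend the 12B and 27B models. The paper's exact weighting scheme is not reproduced here; a minimal sketch, assuming a convex combination of positive-class probabilities followed by a decision threshold (weight `w` would itself be chosen per language on the dev set):

```python
import numpy as np

def weighted_ensemble(p_12b, p_27b, w=0.5, threshold=0.5):
    """Blend per-example positive-class probabilities from the 12B and
    27B models, then apply a decision threshold."""
    blended = w * np.asarray(p_12b) + (1 - w) * np.asarray(p_27b)
    return (blended >= threshold).astype(int), blended

# Toy probabilities from the two model sizes.
p_small = np.array([0.30, 0.80, 0.55])
p_large = np.array([0.40, 0.60, 0.35])
preds, blended = weighted_ensemble(p_small, p_large, w=0.4)
# Blended: 0.36, 0.68, 0.43 -> labels 0, 1, 0 at threshold 0.5.
assert preds.tolist() == [0, 1, 0]
```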

Entities

Institutions

  • SemEval
  • arXiv

Sources