DiZiNER: Zero-Shot NER via Disagreement-Guided Instruction Refinement
The recently introduced DiZiNER framework (Disagreement-guided Instruction Refinement via Pilot Annotation Simulation) improves zero-shot named entity recognition (NER) by simulating the pilot annotation stage used in human annotation projects. Several heterogeneous large language models (LLMs) independently annotate a shared set of texts, and a supervisor model analyzes the disagreements between them to refine the task instructions. This strategy targets persistent systematic errors in LLM-based NER and achieves state-of-the-art zero-shot performance across 18 benchmarks. The work is published on arXiv (2604.15866) and marks a notable advance in zero-shot information extraction.
Key facts
- DiZiNER simulates the pilot annotation process for zero-shot NER
- Multiple heterogeneous LLMs act as annotators, overseen by a supervisor model
- Supervisor model analyzes inter-model disagreements to refine instructions
- Achieves zero-shot SOTA results on 18 benchmarks
- Addresses persistent systematic errors in LLM-based NER
- Published on arXiv with ID 2604.15866
- Motivated by analogy to human annotation disagreement resolution
- Advances zero-shot information extraction
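The disagreement-guided loop described above can be sketched in a few lines of Python. This is a minimal illustration, not DiZiNER's actual implementation: the annotator outputs are toy dictionaries, and `refine_instruction` is a hypothetical stand-in for the supervisor model, which here simply appends a majority-vote rule for each disputed span.

```python
from collections import Counter

def find_disagreements(annotations):
    """Return spans whose entity labels differ across annotator models."""
    disagreements = {}
    for span in set().union(*annotations):
        labels = [a.get(span) for a in annotations]
        if len(set(labels)) > 1:
            disagreements[span] = Counter(labels)
    return disagreements

def refine_instruction(instruction, disagreements):
    """Stand-in supervisor: add one clarifying rule per disputed span.

    In the paper the supervisor is itself an LLM; a majority vote is
    used here only to keep the sketch self-contained.
    """
    for span, votes in sorted(disagreements.items()):
        majority = votes.most_common(1)[0][0]
        instruction += f"\n- Tag '{span}' as {majority}."
    return instruction

# Toy outputs from three heterogeneous annotator LLMs on one pilot sentence.
annotations = [
    {"Apple": "ORG", "Cupertino": "LOC"},
    {"Apple": "ORG", "Cupertino": "LOC"},
    {"Apple": "MISC", "Cupertino": "LOC"},
]

instruction = "Extract PER, ORG, and LOC entities."
disputed = find_disagreements(annotations)          # only 'Apple' is disputed
instruction = refine_instruction(instruction, disputed)
print(instruction)
```

In the full framework this loop would repeat over pilot texts, with the refined instruction fed back to the annotator LLMs until disagreement falls.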
Entities
Institutions
- arXiv