Contrastive Learning Models for Identification and Generation in the Limit
A new arXiv preprint (2605.06211) introduces contrastive identification and generation in the limit, extending two classical learning models. In Gold's 1967 identification in the limit, a learner receives positive examples and must eventually identify a target hypothesis. Kleinberg and Mullainathan's 2024 generation in the limit instead requires outputting novel elements from the target's support. Both models rely on positive-only or fully labeled data. The new work addresses relational supervision: the learner observes unordered pairs {x, y} such that h(x) ≠ h(y) for an unknown binary hypothesis h, without ever learning which element is the positive one. The paper presents three results in the noiseless setting, initiating the study of contrastive presentations.
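To make the supervision model concrete, here is a minimal sketch of a learner that eliminates hypotheses inconsistent with contrastive pairs, assuming a finite domain and hypothesis class. This is an illustration of the pair constraint h(x) ≠ h(y), not the paper's algorithm; the function names are hypothetical. Note one structural fact the sketch exposes: a hypothesis and its complement induce exactly the same pair constraints, so contrastive data can at best pin down the partition, never which side is positive.

```python
from itertools import combinations

def consistent(h, pairs):
    # A hypothesis h (the set of "positive" points) is consistent with
    # a contrastive pair {x, y} iff exactly one of x, y lies in h.
    return all((x in h) != (y in h) for x, y in pairs)

def contrastive_learner(hypotheses, stream):
    # Eliminate hypotheses inconsistent with the pairs seen so far,
    # yielding the surviving candidates after each observation.
    seen = []
    for pair in stream:
        seen.append(tuple(pair))
        yield [h for h in hypotheses if consistent(h, seen)]

# Toy run: domain {0,1,2,3}, target partition {0,1} vs {2,3}.
domain = range(4)
hypotheses = [frozenset(s) for r in range(5) for s in combinations(domain, r)]
stream = [{0, 2}, {1, 3}, {0, 3}, {1, 2}]
final = list(contrastive_learner(hypotheses, stream))[-1]
# Exactly {0,1} and its complement {2,3} survive: the partition is
# identified, but the positive side remains hidden.
```

The surviving pair of complementary hypotheses is the best possible outcome under this supervision signal, which is why the unordered-pair setting differs genuinely from labeled-example models.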
Key facts
- arXiv preprint 2605.06211 introduces contrastive identification and generation in the limit.
- Extends Gold's 1967 identification in the limit model.
- Builds on Kleinberg and Mullainathan's 2024 generation in the limit.
- Learner observes unordered pairs {x,y} with h(x) ≠ h(y).
- Binary hypothesis h is unknown; positive element is hidden.
- Three results presented in the noiseless setting.
- Focuses on relational supervision signals rather than singleton labels.
- A contrastive presentation is a stream of such pairs.
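The generation-in-the-limit model listed above can also be sketched in simplified form, assuming a finite collection of languages given as explicit sets (the original model allows countable collections; the names here are hypothetical). The generator need only emit some unseen element of the target, not name the target itself.

```python
def generate(languages, examples):
    # Pick the first language consistent with all positive examples
    # seen so far, and emit one of its elements not yet observed.
    seen = set(examples)
    for lang in languages:
        if seen <= lang:
            fresh = sorted(lang - seen)
            if fresh:
                return fresh[0]
    return None  # no consistent language with unseen elements

langs = [{1, 2, 3}, {2, 3, 4, 5}, {1, 3, 5, 7, 9}]
out = generate(langs, [3, 5])
# {1,2,3} is ruled out (5 is missing), so the generator commits to
# {2,3,4,5} and emits its smallest unseen element, 2.
```

The contrast with identification is visible here: the output 2 is a valid novel element of the target even though later examples might still distinguish {2,3,4,5} from {1,3,5,7,9}.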
Entities
Institutions
- arXiv