ARTFEED — Contemporary Art Intelligence

FairNVT Framework Enhances Fairness in Vision Transformers Through Noise Injection

ai-technology · 2026-04-22

A new debiasing framework called FairNVT aims to improve fairness in pretrained transformer-based encoders while preserving task accuracy. Unlike many existing approaches that treat representation-level and prediction-level fairness separately, FairNVT addresses them jointly, arguing that the two are inherently connected: suppressing sensitive information at the representation level facilitates fairer predictions downstream.

The framework employs lightweight adapters to learn task-relevant and sensitive embeddings separately. Calibrated Gaussian noise is applied to the sensitive embedding before it is fused with the task representation, while orthogonality constraints and fairness regularization work together to reduce sensitive-attribute leakage in the learned embeddings. FairNVT is compatible with a wide range of pretrained transformer encoders and has been evaluated on three datasets spanning vision and language domains. The paper detailing the framework was announced on arXiv with identifier 2604.16780v1.
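The mechanism described above can be sketched in a few lines. The following is a minimal, hypothetical illustration (not the paper's actual code): the adapters are stood in by random linear projections, the fusion is simple addition, and the `sigma` noise scale and the squared-cosine orthogonality penalty are assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: encoder output size d, adapter output size k.
d, k = 16, 8

# Lightweight adapters as plain linear projections (illustrative stand-ins
# for the paper's adapter modules; weights are random placeholders).
W_task = rng.standard_normal((d, k)) / np.sqrt(d)
W_sens = rng.standard_normal((d, k)) / np.sqrt(d)

def fair_forward(h, sigma=0.1):
    """Split an encoder representation h into task and sensitive embeddings,
    inject Gaussian noise into the sensitive branch, then fuse the two."""
    z_task = h @ W_task
    z_sens = h @ W_sens
    # Noise on the sensitive branch suppresses sensitive information;
    # sigma is a hypothetical calibration knob, not the paper's schedule.
    z_sens_noisy = z_sens + rng.normal(0.0, sigma, size=z_sens.shape)
    fused = z_task + z_sens_noisy  # simple additive fusion (assumption)
    return z_task, z_sens, fused

def orthogonality_penalty(z_task, z_sens):
    """Squared cosine similarity between the two embeddings; minimizing it
    pushes the task branch away from the sensitive-attribute direction."""
    denom = np.linalg.norm(z_task) * np.linalg.norm(z_sens) + 1e-8
    cos = float(z_task @ z_sens) / denom
    return cos ** 2

h = rng.standard_normal(d)          # stand-in for one encoder output
z_t, z_s, fused = fair_forward(h)
penalty = orthogonality_penalty(z_t, z_s)
```

In training, `penalty` would be added to the task loss alongside a fairness regularizer, so that gradient descent jointly fits the task and drives the two embeddings toward orthogonality.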

Key facts

  • FairNVT is a lightweight debiasing framework for pretrained transformer-based encoders
  • It improves both representation-level and prediction-level fairness while preserving task accuracy
  • The approach argues representation and prediction fairness are inherently connected
  • It uses lightweight adapters to learn task-relevant and sensitive embeddings
  • Calibrated Gaussian noise is applied to sensitive embeddings
  • Orthogonality constraints and fairness regularization reduce sensitive-attribute leakage
  • The framework is compatible with a wide range of pretrained transformer encoders
  • Tested across three datasets spanning vision and language domains

Entities

Institutions

  • arXiv

Sources