ICU-Bench: Benchmarking Continual Unlearning in Multimodal LLMs
Researchers have introduced ICU-Bench, a benchmark for continual unlearning in multimodal large language models (MLLMs). It addresses privacy concerns by evaluating how well models handle a sequence of deletion requests rather than a single one. The benchmark comprises 1,000 profiles built from privacy-sensitive documents such as medical records and labor contracts, together with 9,500 images, 16,000 question-answer pairs, and 100 forget tasks. The authors also propose new metrics covering forgetting effectiveness, historical preservation, retained utility, and stability. The paper has been published on arXiv for public access.
Key facts
- ICU-Bench is a continual multimodal unlearning benchmark.
- It focuses on privacy-critical document data.
- The benchmark includes 1,000 profiles from medical records and labor contracts.
- It contains 9,500 images, 16,000 QA pairs, and 100 forget tasks.
- New metrics assess forgetting effectiveness, historical preservation, retained utility, and stability.
- The research addresses privacy concerns in Multimodal Large Language Models (MLLMs).
- Existing benchmarks lack support for continual privacy deletion requests.
- The paper is available on arXiv with ID 2605.05938.
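The key facts above name four evaluation axes but the summary gives no formulas. As a hedged illustration only, the sketch below shows one plausible way such continual-unlearning scores could be aggregated across sequential forget tasks; the metric definitions, field names, and the `continual_unlearning_scores` helper are assumptions for illustration, not the paper's actual formulations.

```python
# Hypothetical sketch: metric definitions are assumed, not taken from the paper.
from statistics import mean

def continual_unlearning_scores(task_log):
    """task_log: one dict per sequential forget task, with accuracies
    in [0, 1] measured after that task's unlearning step:
      'forget_acc'       - accuracy on the task's own forget set (lower is better)
      'prior_forget_acc' - accuracy on all earlier forget sets (lower is better)
      'retain_acc'       - accuracy on the retain/utility set (higher is better)
    Returns four aggregate scores in [0, 1]; higher is better for each."""
    # Forgetting effectiveness: how thoroughly each request is forgotten.
    forget_eff = mean(1.0 - t['forget_acc'] for t in task_log)
    # Historical preservation: earlier deletions should stay deleted.
    later = task_log[1:]  # no prior forget sets exist at the first task
    hist_pres = mean(1.0 - t['prior_forget_acc'] for t in later) if later else 1.0
    # Retained utility: performance on knowledge that should be kept.
    retained = mean(t['retain_acc'] for t in task_log)
    # Stability: penalize the worst drop in retain accuracy between tasks.
    drops = [max(0.0, a['retain_acc'] - b['retain_acc'])
             for a, b in zip(task_log, task_log[1:])]
    stability = 1.0 - (max(drops) if drops else 0.0)
    return {'forgetting': forget_eff, 'historical_preservation': hist_pres,
            'retained_utility': retained, 'stability': stability}

# Toy log of two sequential deletion requests.
log = [
    {'forget_acc': 0.05, 'prior_forget_acc': 0.0,  'retain_acc': 0.90},
    {'forget_acc': 0.10, 'prior_forget_acc': 0.08, 'retain_acc': 0.85},
]
print(continual_unlearning_scores(log))
```

A design note on the sketch: averaging over tasks (rather than reporting only the final state) is what makes the evaluation continual, since a model could ace the last request while having regressed on earlier ones.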
Entities
Institutions
- arXiv