Unlearning Comparator: A Visual Analytics System for Comparative Evaluation of Machine Unlearning Methods
Abstract
Unlearning Comparator is a visual analytics system that facilitates the evaluation of machine unlearning methods by comparing model behaviors and by simulating membership inference attacks to assess privacy.
Machine Unlearning (MU) aims to remove target training data from a trained model so that the removed data no longer influences the model's behavior, fulfilling "right to be forgotten" obligations under data privacy laws. Yet, we observe that researchers in this rapidly emerging field face challenges in analyzing and understanding the behavior of different MU methods, especially in terms of three fundamental principles of MU: accuracy, efficiency, and privacy. Consequently, they often rely on aggregate metrics and ad hoc evaluations, making it difficult to accurately assess the trade-offs between methods. To fill this gap, we introduce a visual analytics system, Unlearning Comparator, designed to facilitate the systematic evaluation of MU methods. Our system supports two important tasks in the evaluation process: model comparison and attack simulation. First, it allows the user to compare the behaviors of two models, such as a model generated by a certain method and a retrained baseline, at the class, instance, and layer levels to better understand the changes made after unlearning. Second, our system simulates membership inference attacks (MIAs) to evaluate the privacy of a method, where an attacker attempts to determine whether specific data samples were part of the original training set. We evaluate our system through a case study visually analyzing prominent MU methods and demonstrate that it helps the user not only understand model behaviors but also gain insights that can inform the improvement of MU methods.
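To make the MIA task concrete: a common baseline attack (not necessarily the one implemented in Unlearning Comparator) thresholds the model's confidence on a sample, exploiting the fact that models tend to be more confident on data they were trained on. The sketch below uses hypothetical toy confidence scores; `threshold_mia` and the 0.5 threshold are illustrative assumptions, not the system's actual attack.

```python
import numpy as np

def threshold_mia(member_conf, nonmember_conf, threshold=0.5):
    """Confidence-threshold membership inference attack:
    predict 'member' when the model's confidence on a sample
    exceeds the threshold, then report overall attack accuracy."""
    preds_members = member_conf > threshold        # correct if True
    preds_nonmembers = nonmember_conf > threshold  # correct if False
    correct = preds_members.sum() + (~preds_nonmembers).sum()
    total = len(member_conf) + len(nonmember_conf)
    return correct / total

# Toy confidences: training-set members tend to receive higher confidence.
members = np.array([0.9, 0.8, 0.95, 0.6])
nonmembers = np.array([0.3, 0.4, 0.55, 0.2])
acc = threshold_mia(members, nonmembers)  # 0.875: one non-member misclassified
```

An attack accuracy near 0.5 (random guessing) on the forgotten samples suggests successful unlearning; accuracy well above 0.5 indicates residual membership signal.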
Community
Our system helps researchers intuitively understand the trade-offs between different unlearning techniques in terms of accuracy, efficiency, and privacy. It features side-by-side model comparison, layer-wise analysis, and interactive attack simulation to reveal insights that aggregate metrics often miss.
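One simple way to realize the layer-wise comparison described above (a minimal sketch, not the system's actual implementation) is to compute a per-layer distance between the weights of the unlearned model and the retrained baseline, surfacing which layers changed most. The models here are hypothetical NumPy weight dictionaries standing in for framework checkpoints.

```python
import numpy as np

def layerwise_distance(model_a, model_b):
    """Per-layer Frobenius (L2) distance between two models' weights;
    large values flag layers most affected by unlearning."""
    return {name: float(np.linalg.norm(model_a[name] - model_b[name]))
            for name in model_a}

rng = np.random.default_rng(0)
baseline = {"conv1": rng.normal(size=(8, 3)), "fc": rng.normal(size=(4, 8))}
unlearned = {k: v + 0.1 for k, v in baseline.items()}  # toy perturbed model
dists = layerwise_distance(baseline, unlearned)
```

In practice one would load real checkpoints (e.g., framework state dicts) and normalize each distance by layer size before ranking layers.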
- Interactive Demo: Unlearning Comparator
- System Introduction: YouTube
- GitHub: Code
If you find this interesting, please give it a star on GitHub!
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Reminiscence Attack on Residuals: Exploiting Approximate Machine Unlearning for Privacy (2025)
- On the Necessity of Output Distribution Reweighting for Effective Class Unlearning (2025)
- OPC: One-Point-Contraction Unlearning Toward Deep Feature Forgetting (2025)
- LoReUn: Data Itself Implicitly Provides Cues to Improve Machine Unlearning (2025)
- NOVO: Unlearning-Compliant Vision Transformers (2025)
- WSS-CL: Weight Saliency Soft-Guided Contrastive Learning for Efficient Machine Unlearning Image Classification (2025)
- IMU: Influence-guided Machine Unlearning (2025)