arxiv:2508.12730

Unlearning Comparator: A Visual Analytics System for Comparative Evaluation of Machine Unlearning Methods

Published on Aug 18
· Submitted by jaeunglee on Aug 19
Abstract

Machine Unlearning (MU) aims to remove target training data from a trained model so that the removed data no longer influences the model's behavior, fulfilling "right to be forgotten" obligations under data privacy laws. Yet, we observe that researchers in this rapidly emerging field face challenges in analyzing and understanding the behavior of different MU methods, especially in terms of three fundamental principles in MU: accuracy, efficiency, and privacy. Consequently, they often rely on aggregate metrics and ad-hoc evaluations, making it difficult to accurately assess the trade-offs between methods. To fill this gap, we introduce a visual analytics system, Unlearning Comparator, designed to facilitate the systematic evaluation of MU methods. Our system supports two important tasks in the evaluation process: model comparison and attack simulation. First, it allows the user to compare the behaviors of two models, such as a model generated by a certain method and a retrained baseline, at class-, instance-, and layer-levels to better understand the changes made after unlearning. Second, our system simulates membership inference attacks (MIAs) to evaluate the privacy of a method, where an attacker attempts to determine whether specific data samples were part of the original training set. We evaluate our system through a case study visually analyzing prominent MU methods and demonstrate that it helps the user not only understand model behaviors but also gain insights that can inform the improvement of MU methods.

AI-generated summary

A visual analytics system, Unlearning Comparator, facilitates the evaluation of Machine Unlearning methods by comparing model behaviors and simulating membership inference attacks to assess privacy.
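The membership inference attack evaluation described above can be illustrated with a minimal sketch. This is not the paper's implementation — it is a toy confidence-thresholding MIA on synthetic data, where the synthetic beta distributions, the `mia_accuracy` helper, and the threshold sweep are all illustrative assumptions. The intuition it shows: an attacker labels a sample "member" when the model's confidence on it exceeds a threshold, and a well-unlearned model should drive the best achievable attack accuracy toward chance (0.5).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic confidences (illustrative): models are typically more
# confident on data they were trained on than on held-out data.
member_conf = rng.beta(8, 2, size=500)     # forget-set samples (seen in training)
nonmember_conf = rng.beta(4, 4, size=500)  # held-out samples (never seen)

def mia_accuracy(member_conf, nonmember_conf, threshold):
    """Predict 'member' when confidence exceeds the threshold and
    return overall attack accuracy; 0.5 means no better than chance."""
    true_pos = np.sum(member_conf > threshold)    # members correctly flagged
    true_neg = np.sum(nonmember_conf <= threshold)  # non-members correctly passed
    return (true_pos + true_neg) / (len(member_conf) + len(nonmember_conf))

# Sweep thresholds and take the attacker's best case; after successful
# unlearning, this value should drop toward 0.5.
best = max(mia_accuracy(member_conf, nonmember_conf, t)
           for t in np.linspace(0.0, 1.0, 101))
print(f"best attack accuracy: {best:.2f}")
```

Because the synthetic member confidences are deliberately skewed higher, the sweep finds an attack accuracy well above chance here; re-running the same probe against an unlearned model versus a retrained-from-scratch baseline is the kind of comparison the system automates.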

Community

Paper author Paper submitter

[Teaser figure: unlearning_comparator_teaser.gif]

Our system helps researchers intuitively understand the trade-offs between different unlearning techniques in terms of accuracy, efficiency, and privacy. It features side-by-side model comparisons, layer-wise analysis, and interactive attack simulations to reveal insights that aggregate metrics often miss.

If you find this interesting, please give it a star on GitHub!

