- [2024/07] UnUnlearning: Unlearning is not sufficient for content regulation in advanced generative AI
- [2024/06] Intrinsic Evaluation of Unlearning Using Parametric Knowledge Traces
- [2024/06] RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models
- [2024/06] REVS: Unlearning Sensitive Information in Language Models via Rank Editing in the Vocabulary Space
- [2024/05] Defensive Unlearning with Adversarial Training for Robust Concept Erasure in Diffusion Models
- [2024/04] Espresso: Robust Concept Filtering in Text-to-Image Models
- [2024/04] Machine Unlearning in Large Language Models
- [2024/04] LMEraser: Large Model Unlearning through Adaptive Prompt Tuning
- [2024/04] Negative Preference Optimization: From Catastrophic Collapse to Effective Unlearning
- [2024/04] Digital Forgetting in Large Language Models: A Survey of Unlearning Methods
- [2024/03] MACE: Mass Concept Erasure in Diffusion Models
- [2024/03] Localizing Paragraph Memorization in Language Models
- [2024/03] Towards Efficient and Effective Unlearning of Large Language Models for Recommendation
- [2024/03] Second-Order Information Matters: Revisiting Machine Unlearning for Large Language Models
- [2024/03] Unveiling and Mitigating Memorization in Text-to-image Diffusion Models through Cross Attention
- [2024/03] Guardrail Baselines for Unlearning in LLMs
- [2024/03] Dissecting Language Models: Machine Unlearning via Selective Pruning
- [2024/02] Unmemorization in Large Language Models via Self-Distillation and Deliberate Imagination
- [2024/02] Unlearnable Algorithms for In-context Learning
- [2024/02] Towards Safer Large Language Models through Machine Unlearning
- [2024/02] Machine Unlearning of Pre-trained Large Language Models
- [2024/02] Rethinking Machine Unlearning for Large Language Models
- [2024/02] Selective Forgetting: Advancing Machine Unlearning Techniques and Evaluation in Language Models
- [2024/02] In-Context Learning Can Re-learn Forbidden Tasks
- [2024/02] Machine Unlearning for Image-to-Image Generative Models
- [2024/01] TOFU: A Task of Fictitious Unlearning for LLMs
- [2023/10] In-Context Unlearning: Language Models as Few Shot Unlearners
- [2023/10] Large Language Model Unlearning
- [2023/10] Unlearn What You Want to Forget: Efficient Unlearning for LLMs
- [2023/10] Who's Harry Potter? Approximate Unlearning in LLMs
- [2023/09] Can Sensitive Information Be Deleted From LLMs? Objectives for Defending Against Extraction Attacks
- [2023/09] Detecting Pretraining Data from Large Language Models
- [2023/09] Ring-A-Bell! How Reliable are Concept Removal Methods For Diffusion Models?
- [2023/09] SalUn: Empowering Machine Unlearning via Gradient-based Weight Saliency in Both Image Classification and Generation
- [2023/07] Right to be Forgotten in the Era of Large Language Models: Implications, Challenges, and Solutions
- [2023/03] Erasing Concepts from Diffusion Models