EFlow: Erasing Concepts in Diffusion Models via GFlowNet-Driven Trajectory Exploration

Description

Erasing harmful or proprietary concepts from powerful text-to-image generators is an emerging safety requirement, yet current “concept erasure” techniques either collapse image quality, rely on brittle adversarial losses, or demand prohibitive retraining cycles. These shortcomings stem from a myopic view of the denoising trajectories that govern diffusion-based generation. EFlow introduces the first framework to cast concept unlearning as exploration in the space of denoising paths, optimized via GFlowNets with the trajectory-balance objective. By sampling entire trajectories rather than single end states, EFlow learns a stochastic policy that steers generation away from target concepts while preserving the model’s prior. It eliminates the need for handcrafted reward models, generalizes effectively to unseen concepts, and resists reward hacking—all while improving overall performance. Extensive empirical results demonstrate that EFlow outperforms existing baselines and achieves a favorable trade-off between concept removal and prior preservation.
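The trajectory-balance objective referenced in the abstract is a standard GFlowNet training loss (Malkin et al., 2022): it penalizes the squared mismatch between the forward-policy flow along a sampled trajectory and the reward-weighted backward flow. The sketch below is illustrative only; the function name and toy values are assumptions, not EFlow's actual implementation.

```python
def trajectory_balance_loss(log_Z, log_pf, log_pb, log_reward):
    """Trajectory-balance loss for one sampled trajectory:
    (log Z + sum_t log P_F(s_{t+1}|s_t) - log R(x) - sum_t log P_B(s_t|s_{t+1}))^2
    where Z is the learned partition-function estimate and R(x)
    scores the terminal state (e.g. low target-concept likelihood).
    """
    return (log_Z + sum(log_pf) - log_reward - sum(log_pb)) ** 2

# Toy trajectory with 3 denoising steps (numbers are illustrative).
loss = trajectory_balance_loss(
    log_Z=0.5,
    log_pf=[-1.2, -0.8, -1.0],   # forward (denoising) policy log-probs
    log_pb=[-1.1, -0.9, -1.0],   # backward policy log-probs
    log_reward=-0.4,             # reward for the final image
)
```

In a diffusion setting, each step's forward log-probability would come from the (fine-tuned) denoiser's per-step transition distribution, and the loss is minimized over both the policy parameters and log Z.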

Downloads

Public access restricted until 2027-08-01.

Details

Contributors
Date Created
  • 2025
Embargo Release Date
  • 2027-08-01
Topical Subject
Language
  • en
Note
  • Partial requirement for: M.S., Arizona State University, 2025
  • Field of study: Computer Science
Additional Information
  • English
Extent
  • 61 pages
Open Access
Peer-reviewed