---
title: Ablating Concepts in Text-to-Image Diffusion Models 
emoji: 💡
colorFrom: indigo
colorTo: gray
sdk: gradio
sdk_version: 3.21.0
app_file: app.py
pinned: false
license: mit
---



# Ablating Concepts in Text-to-Image Diffusion Models

Project Website: [https://www.cs.cmu.edu/~concept-ablation/](https://www.cs.cmu.edu/~concept-ablation/) <br>
arXiv Preprint: [https://arxiv.org/abs/2303.13516](https://arxiv.org/abs/2303.13516) <br>
<div align='center'>
<img src='images/applications.png'>
</div>

Large-scale text-to-image diffusion models can generate high-fidelity images with powerful compositional ability. However, these models are typically trained on an enormous amount of Internet data, often containing copyrighted material, licensed images, and personal photos. Furthermore, they have been found to replicate the style of various living artists or memorize exact training samples. How can we remove such copyrighted concepts or images without retraining the model from scratch?

We propose an efficient method of ablating concepts in the pretrained model, i.e., preventing the generation of a target concept. Our algorithm learns to match the image distribution for a given target style, instance, or text prompt we wish to ablate to the distribution corresponding to an anchor concept, e.g., Grumpy Cat to Cats.
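At a high level, this distribution matching can be sketched as a diffusion denoising loss in which the fine-tuned network's noise prediction for the target prompt is pulled toward the frozen pretrained model's prediction for the anchor prompt. The notation below is a simplified sketch (ours, not verbatim from the paper):

```latex
\mathcal{L}(\theta) \;=\;
\mathbb{E}_{x_t,\,t}\!\left[
  \big\lVert \,\epsilon_{\theta}(x_t, c^{*}, t)
  \;-\; \operatorname{sg}\!\big(\epsilon_{\hat{\theta}}(x_t, c, t)\big) \big\rVert_2^2
\right]
```

where \(c^{*}\) is the target concept prompt (e.g., "Grumpy Cat"), \(c\) is the anchor prompt (e.g., "cat"), \(\epsilon_{\hat{\theta}}\) is the frozen pretrained model, and \(\operatorname{sg}(\cdot)\) denotes a stop-gradient.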

## Demo vs. GitHub

This demo uses different hyperparameters from the GitHub version to speed up training.

## Running locally

1. Create an environment with the packages listed in `requirements.txt`.

2. Run `python app.py`.

3. Open the application in a browser at `http://127.0.0.1:7860/`.

4. Train, evaluate, and save models.
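The steps above can be sketched as shell commands; this assumes Python 3 with the `venv` module and `pip` available (a virtual environment is our suggestion, not a requirement of the repo):

```shell
# Create and activate an isolated environment (assumed setup, adjust to taste)
python3 -m venv .venv
. .venv/bin/activate

# Install the pinned dependencies that ship with the repository
if [ -f requirements.txt ]; then
    pip install -r requirements.txt
fi

# Then launch the app and open http://127.0.0.1:7860/ in a browser:
# python app.py
```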

## Citing our work
The preprint can be cited as follows:
```
@inproceedings{kumari2023conceptablation,
  author = {Kumari, Nupur and Zhang, Bingliang and Wang, Sheng-Yu and Shechtman, Eli and Zhang, Richard and Zhu, Jun-Yan},
  title = {Ablating Concepts in Text-to-Image Diffusion Models},
  booktitle = {International Conference on Computer Vision (ICCV)},
  year = {2023},
}
```