---
title: W2W Demo
emoji: 🏋️
colorFrom: yellow
colorTo: green
sdk: gradio
sdk_version: 4.31.5
app_file: app.py
pinned: false
---

# Interpreting the Weight Space of Customized Diffusion Models
[[paper](https://arxiv.org/abs/2406.09413)] [[project page](https://snap-research.github.io/weights2weights/)]

Official implementation of the paper "Interpreting the Weight Space of Customized Diffusion Models."

<img src="./assets/teaser.jpg" alt="teaser" width="800"/>

>We investigate the space of weights spanned by a large collection of customized diffusion models. We populate this space by creating a dataset of over 60,000 models, each of which is fine-tuned to insert a different person's visual identity. Next, we model the underlying manifold of these weights as a subspace, which we term <em>weights2weights</em>. We demonstrate three immediate applications of this space -- sampling, editing, and inversion. First, as each point in the space corresponds to an identity, sampling a set of weights from it results in a model encoding a novel identity. Next, we find linear directions in this space corresponding to semantic edits of the identity (e.g., adding a beard). These edits persist in appearance across generated samples. Finally, we show that inverting a single image into this space reconstructs a realistic identity, even if the input image is out of distribution (e.g., a painting). Our results indicate that the weight space of fine-tuned diffusion models behaves as an interpretable latent space of identities.

## Setup
### Environment
Our code is developed in `PyTorch 2.3.0` with `CUDA 12.1`, `torchvision=0.18.0`, and `python=3.12.3`.

To replicate our environment, install [Anaconda](https://docs.anaconda.com/free/anaconda/install/index.html) and run the following commands.
```
$ conda env create -f w2w.yml
$ conda activate w2w
```

Alternatively, you can follow the setup from [PEFT](https://huggingface.co/docs/peft/main/en/task_guides/dreambooth_lora).
### Files
The files needed to create *w2w* space, load models, train classifiers, etc. can be downloaded at this [link](https://drive.google.com/file/d/1W1_klpdeCZr5b0Kdp7SaS7veDV2ZzfbB/view?usp=sharing). Keep the folder structure and place it into the `weights2weights` folder containing all the code.

The dataset of full model weights (i.e., the full DreamBooth LoRA parameters) will be released within the next week (by June 21).

## Sampling
We provide an interactive notebook for sampling new identity-encoding models from *w2w* space in `sampling/sampling.ipynb`. Instructions are provided in the notebook. Once a model is sampled, you can run standard inference with various text prompts and generation seeds, as with any personalized model; a minimal sketch of that inference step is shown below.
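
For orientation only, here is what inference with a sampled model might look like once it has been exported in standard LoRA format (see "Loading and Saving Models" below). The base checkpoint, filename, and `sks` identifier token are illustrative assumptions, not the repo's defaults:
```python
import torch
from diffusers import StableDiffusionPipeline

# Base model; the checkpoint the repo fine-tunes from may differ (assumption).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# A sampled identity exported as standard LoRA weights (hypothetical filename).
pipe.load_lora_weights("sampled_identity_lora.safetensors")

# Vary the prompt and the generation seed as with any personalized model.
generator = torch.Generator("cuda").manual_seed(0)
image = pipe(
    "a photo of sks person wearing a red hat",  # "sks" token is an assumption
    generator=generator,
).images[0]
image.save("sample.png")
```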

## Inversion
We provide an interactive notebook for inverting a single image into a model in *w2w* space in `inversion/inversion_real.ipynb`. Instructions are provided in the notebook. Another notebook, `inversion/inversion_ood.ipynb`, walks through an example of inverting an out-of-distribution identity. Assets for these notebooks are provided in `inversion/images/`, and you can place your own assets there.

Additionally, we provide an example script `run_inversion.sh` for running the inversion implemented in `invert.py`. You can run the command:
```
$ bash inversion/run_inversion.sh
```
The details on the various arguments are provided in `invert.py`.
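
The script and notebooks implement the full diffusion-based objective. Purely as a toy illustration of the constrained optimization involved (synthetic tensors throughout, with a direct weight-matching loss standing in for the actual denoising loss on the input image):
```python
import torch

torch.manual_seed(0)
d, k = 512, 16                                 # toy weight dim / #components
V = torch.linalg.qr(torch.randn(d, k)).Q.T     # orthonormal basis (rows)
m = torch.randn(d)                             # mean of the weight dataset
target = m + torch.randn(k) @ V + 0.01 * torch.randn(d)  # "observed" weights

coeffs = torch.zeros(k, requires_grad=True)    # low-dim w2w coefficients
opt = torch.optim.Adam([coeffs], lr=0.1)
for _ in range(200):
    recon = m + coeffs @ V                     # candidate model in the subspace
    loss = torch.nn.functional.mse_loss(recon, target)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"residual: {loss.item():.5f}")  # small: result is constrained to w2w space
```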

## Editing
We provide an interactive notebook for editing the identity encoded in a model in `editing/identity_editing.ipynb`. Instructions are provided in the notebook. Another notebook, `editing/multiple_edits.ipynb`, shows how to compose multiple attribute edits together. The sketch below illustrates the underlying linear-edit idea.
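
As a shape-level sketch of what such an edit looks like (random tensors and made-up dimensions; the notebooks use semantic directions actually learned from the model dataset):
```python
import torch

torch.manual_seed(0)
d = 10_000                         # flattened LoRA parameter dim (made up)
weights = torch.randn(d)           # an identity-encoding model in w2w space
direction = torch.randn(d)         # a learned attribute direction, e.g. "beard"
direction /= direction.norm()

alpha = 2.5                        # edit strength; negative removes the attribute
edited = weights + alpha * direction

# `edited` unflattens back into LoRA layers; per the paper, the change in
# appearance persists across prompts and generation seeds.
```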

## Loading and Saving Models
Various notebooks provide examples of how to save models either as low-dimensional *w2w* models (represented by principal component coefficients) or as models compatible with standard LoRA, such as with Diffusers [pipelines](https://huggingface.co/docs/diffusers/en/api/pipelines/overview). We provide a notebook in `other/loading.ipynb` that demonstrates how these weights can be loaded into either format.
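
As a sketch of how the two representations relate (dimensions and variable names are illustrative, not the repo's file layout): a *w2w* model stores coefficients `c` over principal components `V` around a mean `m`, and the full flattened LoRA weights are recovered as `w = m + cV`:
```python
import torch

torch.manual_seed(0)
d, k = 4096, 64                              # toy: weight dim, #components
V = torch.linalg.qr(torch.randn(d, k)).Q.T   # principal components (orthonormal rows)
m = torch.randn(d)                           # mean of the model-weight dataset

c = torch.randn(k)                           # low-dimensional w2w model
w = m + c @ V                                # full flattened LoRA weights

c_back = (w - m) @ V.T                       # exact projection back, since V's
assert torch.allclose(c, c_back, atol=1e-4)  # rows are orthonormal
```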

## Acknowledgments
Our code is based on implementations from the following repos:

>* [PEFT](https://github.com/huggingface/peft)
>* [Concept Sliders](https://github.com/rohitgandikota/sliders)
>* [Diffusers](https://github.com/huggingface/diffusers)


## Citation
If you found this repository useful, please consider starring ⭐ and citing:
```
@misc{dravid2024interpreting,
      title={Interpreting the Weight Space of Customized Diffusion Models},
      author={Amil Dravid and Yossi Gandelsman and Kuan-Chieh Wang and Rameen Abdal and Gordon Wetzstein and Alexei A. Efros and Kfir Aberman},
      year={2024},
      eprint={2406.09413},
      archivePrefix={arXiv}
}
```