Koke_Cacao committed
Commit ad07f98 · 1 Parent(s): 1a29d24

:sparkles: update description

README.md CHANGED
@@ -15,7 +15,9 @@ tags:
   <img src="https://huggingface.co/KokeCacao/mvdream-hf/resolve/main/doc/image_3.png" height="256">
   </p>
 
-A huggingface implementation of MVDream, used for quick one-line download. See [huggingface repo](https://huggingface.co/KokeCacao/mvdream-hf/tree/main) that hosts sd-v1.5 version. See [github repo](https://github.com/KokeCacao/mvdream-hf) for convertion code.
+A huggingface implementation of MVDream with 4 views, used for quick one-line download. See the [huggingface repo](https://huggingface.co/KokeCacao/mvdream-hf/tree/main) that hosts the sd-v1.5 version and the [huggingface repo](https://huggingface.co/KokeCacao/mvdream-base-hf) that hosts the sd-v2.1 version. See the [github repo](https://github.com/KokeCacao/mvdream-hf) for the conversion code.
+
+Note that the original paper presents the sd-v2.1 version; the images above were generated with it.
 
 ## Convert Original Weights to Diffusers
 
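As an illustration of the "quick one-line download" described above, a minimal sketch of loading the converted weights with diffusers is shown below. Only the repository ids come from the description; the `custom_pipeline` / `trust_remote_code` usage and the shape of the pipeline output are assumptions, since MVDream's multi-view pipeline is not a stock diffusers pipeline.

```python
# Minimal sketch (assumptions noted inline): load the converted MVDream weights with diffusers.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "KokeCacao/mvdream-hf",                   # sd-v1.5 conversion; "KokeCacao/mvdream-base-hf" for sd-v2.1
    custom_pipeline="KokeCacao/mvdream-hf",   # assumption: pipeline code ships alongside the weights
    trust_remote_code=True,                   # assumption: required to run the repo's pipeline code
    torch_dtype=torch.float16,
).to("cuda")

# Assumption: the pipeline returns the 4 views as a list of PIL images.
images = pipe(prompt="a photo of an astronaut riding a horse").images
for i, img in enumerate(images):
    img.save(f"view_{i}.png")
```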
scripts/convert_mvdream_to_diffusers.py CHANGED
@@ -6,14 +6,8 @@ import sys
 
 sys.path.insert(0, '../')
 
-from transformers import (
-    CLIPImageProcessor,
-    CLIPVisionModelWithProjection,
-)
-
 from diffusers.models import (
     AutoencoderKL,
-    UNet2DConditionModel,
 )
 from diffusers.schedulers import DDIMScheduler
 from diffusers.utils import logging
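
For reference, the imports kept in this hunk are standard diffusers components. A minimal sketch of how a conversion script typically instantiates and saves them is below; the configuration values and output folder layout are placeholders, not taken from the actual script.

```python
# Minimal sketch (assumption): typical use of the retained diffusers imports when
# converting an original checkpoint. All config values below are placeholders.
from diffusers.models import AutoencoderKL
from diffusers.schedulers import DDIMScheduler

# Scheduler configured with common Stable Diffusion settings (placeholder values).
scheduler = DDIMScheduler(
    num_train_timesteps=1000,
    beta_start=0.00085,
    beta_end=0.012,
    beta_schedule="scaled_linear",
    clip_sample=False,
    set_alpha_to_one=False,
)

# VAE built from a config; weights would then be loaded from the remapped checkpoint.
vae = AutoencoderKL(in_channels=3, out_channels=3, latent_channels=4)
# vae.load_state_dict(converted_vae_state_dict)  # hypothetical state dict from the conversion step

# Each component is written out in the diffusers folder layout.
scheduler.save_pretrained("mvdream-hf/scheduler")
vae.save_pretrained("mvdream-hf/vae")
```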