Change README to show how to use it with diffusers
#4
by patrickvonplaten - opened
README.md CHANGED
@@ -13,9 +13,60 @@ This model is NOT the 19.2M images Characters Model on TrinArt, but an improved
This model is NOT TrinArt's Characters Model (the model retrained on 19.2M images)! It is an improved version of the Trin-sama AI bot's model. It is intended to nudge the model toward anime and manga while retaining as much of the original SD v1.4 model's art style as possible.
## Diffusers

The model has been ported to `diffusers` by [ayan4m1](https://huggingface.co/ayan4m1) (thanks!) and can easily be run from one of the branches:

- `revision="diffusers-60k"` for the checkpoint trained on 60,000 steps,
- `revision="diffusers-95k"` for the checkpoint trained on 95,000 steps,
- `revision="diffusers-115k"` for the checkpoint trained on 115,000 steps.

For more information, please have a look at [the "Three flavors" section](#three-flavors).
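
To see how the three checkpoints differ in practice, a minimal sketch such as the following loads one branch at a time, renders the same prompt with a fixed seed, and saves the results for a side-by-side comparison (the prompt, seed, and file names are just placeholders):

```python
# !pip install diffusers==0.3.0
import torch
from diffusers import StableDiffusionPipeline

prompt = "A magical dragon flying in front of the Himalaya in manga style"

for revision in ["diffusers-60k", "diffusers-95k", "diffusers-115k"]:
    # load one checkpoint branch at a time to keep GPU memory usage low
    pipe = StableDiffusionPipeline.from_pretrained(
        "naclbit/trinart_stable_diffusion_v2", revision=revision
    )
    pipe.to("cuda")

    # reuse the same seed so the checkpoints are compared on identical noise
    generator = torch.Generator(device="cuda").manual_seed(42)
    image = pipe(prompt, generator=generator).images[0]
    image.save(f"dragon_{revision}.png")

    # free GPU memory before loading the next branch
    del pipe
    torch.cuda.empty_cache()
```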
### Example Text2Image

```python
# !pip install diffusers==0.3.0
from diffusers import StableDiffusionPipeline

# using the 60,000 steps checkpoint
pipe = StableDiffusionPipeline.from_pretrained("naclbit/trinart_stable_diffusion_v2", revision="diffusers-60k")
pipe.to("cuda")

image = pipe("A magical dragon flying in front of the Himalaya in manga style").images[0]
image
```
![dragon](https://huggingface.co/datasets/patrickvonplaten/images/resolve/main/a_magical_dragon_himalaya.png)

If you want to run the pipeline faster or on different hardware, please have a look at the [optimization docs](https://huggingface.co/docs/diffusers/optimization/fp16).
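
For example, loading the weights in half precision usually cuts GPU memory use roughly in half and speeds up inference on CUDA hardware. A minimal sketch, assuming a CUDA GPU and a `diffusers` version that supports the `torch_dtype` argument (results may differ slightly from full precision):

```python
# !pip install diffusers==0.3.0
import torch
from diffusers import StableDiffusionPipeline

# load the 60,000 steps checkpoint and cast the weights to float16 to save GPU memory
pipe = StableDiffusionPipeline.from_pretrained(
    "naclbit/trinart_stable_diffusion_v2",
    revision="diffusers-60k",
    torch_dtype=torch.float16,
)
pipe.to("cuda")

image = pipe("A magical dragon flying in front of the Himalaya in manga style").images[0]
image.save("dragon_fp16.png")
```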
### Example Image2Image

```python
# !pip install diffusers==0.3.0
from diffusers import StableDiffusionImg2ImgPipeline
import requests
from PIL import Image
from io import BytesIO

url = "https://scitechdaily.com/images/Dog-Park.jpg"

response = requests.get(url)
init_image = Image.open(BytesIO(response.content)).convert("RGB")
init_image = init_image.resize((768, 512))

# using the 115,000 steps checkpoint
pipe = StableDiffusionImg2ImgPipeline.from_pretrained("naclbit/trinart_stable_diffusion_v2", revision="diffusers-115k")
pipe.to("cuda")

images = pipe(prompt="Manga drawing of Brad Pitt", init_image=init_image, strength=0.75, guidance_scale=7.5).images
images[0]
```
![brad_pitt](https://huggingface.co/datasets/patrickvonplaten/images/resolve/main/manga_man.png)

If you want to run the pipeline faster or on different hardware, please have a look at the [optimization docs](https://huggingface.co/docs/diffusers/optimization/fp16).
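
Image2Image results also depend strongly on `strength` and on the random seed. The sketch below reuses the example photo above, fixes the seed with a `torch.Generator`, and sweeps `strength` so the trade-off between staying close to the photo and following the prompt is easy to inspect (the seed, strength values, and file names are arbitrary):

```python
# !pip install diffusers==0.3.0
import torch
from diffusers import StableDiffusionImg2ImgPipeline
import requests
from PIL import Image
from io import BytesIO

url = "https://scitechdaily.com/images/Dog-Park.jpg"
init_image = Image.open(BytesIO(requests.get(url).content)).convert("RGB").resize((768, 512))

pipe = StableDiffusionImg2ImgPipeline.from_pretrained("naclbit/trinart_stable_diffusion_v2", revision="diffusers-115k")
pipe.to("cuda")

# lower strength stays closer to the photo, higher strength follows the prompt more freely
for strength in (0.4, 0.6, 0.8):
    # re-seed before every call so each strength is applied to identical noise
    generator = torch.Generator(device="cuda").manual_seed(0)
    image = pipe(
        prompt="Manga drawing of Brad Pitt",
        init_image=init_image,
        strength=strength,
        guidance_scale=7.5,
        generator=generator,
    ).images[0]
    image.save(f"manga_brad_pitt_strength_{strength}.png")
```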
## Stable Diffusion TrinArt/Trin-sama AI finetune v2