---
license: creativeml-openrail-m
tags:
- coreml
- stable-diffusion
- text-to-image
---
# Core ML Converted Model

- This model was converted to [Core ML for use on Apple Silicon devices](https://github.com/apple/ml-stable-diffusion). Conversion instructions can be found [here](https://github.com/godly-devotion/MochiDiffusion/wiki/How-to-convert-ckpt-or-safetensors-files-to-Core-ML).
- Provide the model to an app such as Mochi Diffusion ([GitHub](https://github.com/godly-devotion/MochiDiffusion) / [Discord](https://discord.gg/x2kartzxGv)) to generate images.
- The `split_einsum` version is compatible with all compute unit options, including the Neural Engine.
- The `original` version is only compatible with the CPU & GPU compute unit options.
- Custom resolution versions are tagged accordingly.
- Files tagged `vae` have a VAE embedded in the model.
- Descriptions are posted as-is from the original model source; not all features and/or results may be available in Core ML format.
- This model was converted with a `vae-encoder`, so image-to-image (i2i) is supported.
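The wiki linked above walks through conversion step by step. As a rough sketch, the conversion command from the `apple/ml-stable-diffusion` repository looks like the following (the output path is a placeholder, and flags may change between releases):

```shell
# Convert the Hugging Face checkpoint to Core ML.
# SPLIT_EINSUM targets all compute units (including the Neural Engine);
# use ORIGINAL instead for the CPU & GPU-only variant.
# --convert-vae-encoder is what enables image-to-image (i2i).
python -m python_coreml_stable_diffusion.torch2coreml \
  --model-version nitrosocke/mo-di-diffusion \
  --convert-unet --convert-text-encoder \
  --convert-vae-decoder --convert-vae-encoder \
  --attention-implementation SPLIT_EINSUM \
  -o ./coreml-mo-di-diffusion
```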

# Mo Di Diffusion

Source(s): [Hugging Face](https://huggingface.co/nitrosocke/mo-di-diffusion?text=Indonesian+kid%2C+hoodie%2C+barefoot%2C+modern+disney+style)

**Mo Di Diffusion**

This is a fine-tuned Stable Diffusion 1.5 model trained on screenshots from a popular animation studio.
Use the tokens **_modern disney style_** in your prompts for the effect.
26
+
27
+ **If you enjoy my work, please consider supporting me**
28
+ [![Become A Patreon](https://badgen.net/badge/become/a%20patron/F96854)](https://patreon.com/user?u=79196446)
29
+
30
+ **Videogame Characters rendered with the model:**
31
+ ![Videogame Samples](https://huggingface.co/nitrosocke/mo-di-diffusion/resolve/main/modi-samples-01s.jpg)
32
+ **Animal Characters rendered with the model:**
33
+ ![Animal Samples](https://huggingface.co/nitrosocke/mo-di-diffusion/resolve/main/modi-samples-02s.jpg)
34
+ **Cars and Landscapes rendered with the model:**
35
+ ![Misc. Samples](https://huggingface.co/nitrosocke/mo-di-diffusion/resolve/main/modi-samples-03s.jpg)

#### Prompt and settings for Lara Croft:
**modern disney lara croft**
_Steps: 50, Sampler: Euler a, CFG scale: 7, Seed: 3940025417, Size: 512x768_

#### Prompt and settings for the Lion:
**modern disney (baby lion)**
_Negative prompt: person human_
_Steps: 50, Sampler: Euler a, CFG scale: 7, Seed: 1355059992, Size: 512x512_
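The _Seed_ values listed above are what make these samples reproducible: with diffusers you pass a seeded `torch.Generator`, and the same seed yields the same initial noise latents and hence the same image. A minimal sketch (the pipeline call is commented out because it downloads the full model; `pipe` is assumed to be a loaded `StableDiffusionPipeline`, and "Euler a" corresponds to diffusers' `EulerAncestralDiscreteScheduler`):

```python
import torch

seed = 3940025417  # seed listed for the Lara Croft sample above

# Two identically seeded generators draw bitwise-identical noise,
# which is why a fixed seed reproduces the same image.
g1 = torch.Generator("cpu").manual_seed(seed)
g2 = torch.Generator("cpu").manual_seed(seed)
latents_a = torch.randn(1, 4, 96, 64, generator=g1)  # 768x512 image -> 96x64 latents
latents_b = torch.randn(1, 4, 96, 64, generator=g2)
assert torch.equal(latents_a, latents_b)

# With a loaded pipeline, the Lara Croft settings translate to roughly:
# image = pipe(
#     "modern disney lara croft",
#     num_inference_steps=50,   # Steps: 50
#     guidance_scale=7,         # CFG scale: 7
#     height=768, width=512,    # Size: 512x768
#     generator=torch.Generator("cpu").manual_seed(seed),
# ).images[0]
```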

This model was trained with the diffusers-based DreamBooth training script by ShivamShrirao, using prior-preservation loss and the _train-text-encoder_ flag, for 9,000 steps.

### 🧨 Diffusers

This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion pipeline documentation](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).

You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or [FLAX/JAX]().

```python
from diffusers import StableDiffusionPipeline
import torch

model_id = "nitrosocke/mo-di-diffusion"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "a magical princess with golden hair, modern disney style"
image = pipe(prompt).images[0]

image.save("./magical_princess.png")
```

# Gradio & Colab

We also support a [Gradio](https://github.com/gradio-app/gradio) Web UI and Colab with Diffusers to run fine-tuned Stable Diffusion models:

[![Open In Spaces](https://camo.githubusercontent.com/00380c35e60d6b04be65d3d94a58332be5cc93779f630bcdfc18ab9a3a7d3388/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f25463025394625413425393725323048756767696e67253230466163652d5370616365732d626c7565)](https://huggingface.co/spaces/anzorq/finetuned_diffusion)
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1j5YvfMZoGdDGdj3O3xRU1m4ujKYsElZO?usp=sharing)

## License

This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:

1. You can't use the model to deliberately produce or share illegal or harmful outputs or content.
2. The authors claim no rights to the outputs you generate; you are free to use them, but you are accountable for their use, which must not go against the provisions set in the license.
3. You may redistribute the weights and use the model commercially and/or as a service. If you do, please be aware that you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully).

[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)