Update README.md
README.md
- This model was converted to [Core ML for use on Apple Silicon devices](https://github.com/apple/ml-stable-diffusion). Conversion instructions can be found [here](https://github.com/godly-devotion/MochiDiffusion/wiki/How-to-convert-ckpt-or-safetensors-files-to-Core-ML).
- Provide the model to an app such as **Mochi Diffusion** [Github](https://github.com/godly-devotion/MochiDiffusion) / [Discord](https://discord.gg/x2kartzxGv) to generate images.
- The `original` version is only compatible with the `CPU & GPU` option.
- The `split_einsum` version takes **about 5-10 minutes** to load the model the first time and is available with both the `CPU & Neural Engine` and `CPU & GPU` options. If your Mac has many GPU cores, the `CPU & GPU` option will speed up image generation.
- Resolution and bit size are as noted in the individual file names.
- This model requires macOS 14.0 or later to run properly.
- This model was converted with a `vae-encoder` for use with `image2image`.
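For reference, a conversion along these lines can be sketched with the `torch2coreml` script from the linked [ml-stable-diffusion](https://github.com/apple/ml-stable-diffusion) repo. This is a hedged example, not the exact command used for this model: the model identifier and output path are placeholders, and flag availability should be checked against the repo's current README. The `--convert-vae-encoder` flag is what bundles the `vae-encoder` needed for `image2image`, and `--attention-implementation` selects between the `ORIGINAL` and `SPLIT_EINSUM` variants described above.

```shell
# Sketch of a Core ML conversion (assumes ml-stable-diffusion is installed
# in the active Python environment; model ID and output dir are placeholders).
python -m python_coreml_stable_diffusion.torch2coreml \
    --model-version some-author/some-model \
    --convert-unet \
    --convert-text-encoder \
    --convert-vae-decoder \
    --convert-vae-encoder \
    --attention-implementation SPLIT_EINSUM \
    -o ./output-coreml
```

Re-running with `--attention-implementation ORIGINAL` would produce the `original` variant, which is limited to the `CPU & GPU` compute option.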