Add diffusers to the model card
#2 by multimodalart - opened

README.md CHANGED
@@ -31,6 +31,36 @@ The FLUX.1 models are also available via API from the following sources
31 |    ## ComfyUI
32 |    `FLUX.1 [schnell]` is also available in [Comfy UI](https://github.com/comfyanonymous/ComfyUI) for local inference with a node-based workflow.
33 |
34 | +  ## Diffusers
35 | +  To use `FLUX.1 [schnell]` with the 🧨 diffusers Python library, first install or upgrade diffusers:
36 | +
37 | +  ```shell
38 | +  pip install git+https://github.com/huggingface/diffusers.git
39 | +  ```
40 | +
41 | +  Then you can use `FluxPipeline` to run the model:
42 | +
43 | +  ```python
44 | +  import torch
45 | +  from diffusers import FluxPipeline
46 | +
47 | +  pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16)
48 | +  pipe.enable_model_cpu_offload()  # save some VRAM by offloading the model to CPU; remove this if you have enough GPU memory
49 | +
50 | +  prompt = "A cat holding a sign that says hello world"
51 | +  image = pipe(
52 | +      prompt,
53 | +      guidance_scale=0.0,
54 | +      output_type="pil",
55 | +      num_inference_steps=4,
56 | +      max_sequence_length=256,
57 | +      generator=torch.Generator("cpu").manual_seed(0)
58 | +  ).images[0]
59 | +  image.save("flux-schnell.png")
60 | +  ```
61 | +
62 | +  To learn more, check out the [diffusers](https://huggingface.co/docs/diffusers/main/en/api/pipelines/flux) documentation.
63 | +
64 |    ---
65 |    # Limitations
66 |    - This model is not intended or able to provide factual information.
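The diffed example passes `generator=torch.Generator("cpu").manual_seed(0)` so generation is reproducible. A minimal sketch of why that works, in plain PyTorch with no model download (the tensor shapes here are arbitrary, chosen just for illustration):

```python
import torch

# Two CPU generators seeded identically produce identical random draws,
# which is what makes the pipeline call above deterministic.
g1 = torch.Generator("cpu").manual_seed(0)
g2 = torch.Generator("cpu").manual_seed(0)
a = torch.randn(4, generator=g1)
b = torch.randn(4, generator=g2)
print(torch.equal(a, b))  # the two draws match element-for-element
```

Re-running the pipeline with the same seed therefore reproduces the same image; change or drop the seed to get varied outputs.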