---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: a fisherman nearby river, Chinese line art
  parameters:
    negative_prompt: (lowres, low quality, worst quality)
  output:
    url: images/0640244a27a6955bdc2740ef1bacafaf716d194fb77c5346264d91da.jpg
- text: a woman, Chinese line art
  parameters:
    negative_prompt: (lowres, low quality, worst quality)
  output:
    url: images/f1984bbc23957d65e0bd86273f7e8b1c22b53e2cd51ab4fa83680c87.jpg
- text: Beijing City, Chinese line art
  parameters:
    negative_prompt: (lowres, low quality, worst quality)
  output:
    url: images/756607bc025fe25935c39225bf18f3c98d24aa5878541533a9ca3424.jpg
base_model: stabilityai/stable-diffusion-3.5-large
instance_prompt: Chinese line art
license: other
license_name: stabilityai-ai-community
license_link: >-
  https://huggingface.co./stabilityai/stable-diffusion-3.5-large/blob/main/LICENSE.md
---
# SD3.5-LoRA-Chinese-Line-Art
<Gallery />
## Trigger words
You should include `Chinese line art` in your prompt to trigger the style.
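For illustration only, the trigger phrase can simply be appended to whatever subject you want rendered in this style (the helper below is hypothetical and not part of this repository):
```python
# Hypothetical helper: append the LoRA trigger phrase to an arbitrary subject.
def line_art_prompt(subject: str) -> str:
    return f"{subject}, Chinese line art"

print(line_art_prompt("a boat on the river, mountain in the distance"))
# -> "a boat on the river, mountain in the distance, Chinese line art"
```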
## Inference
```python
import torch
from diffusers import StableDiffusion3Pipeline  # SD3.5 support requires a recent diffusers release (install from source if needed)

# Load the SD3.5-large base model, then attach and fuse the line-art LoRA.
pipe = StableDiffusion3Pipeline.from_pretrained("stabilityai/stable-diffusion-3.5-large", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("Shakker-Labs/SD3.5-LoRA-Chinese-Line-Art", weight_name="SD35-lora-Chinese-Line-Art.safetensors")
pipe.fuse_lora(lora_scale=1.0)
pipe.to("cuda")
prompt = "a boat on the river, mountain in the distance, Chinese line art"
negative_prompt = "(lowres, low quality, worst quality)"
# Generate and save an example image; note the trigger phrase at the end of the prompt.
image = pipe(prompt=prompt,
             negative_prompt=negative_prompt,
             num_inference_steps=24,
             guidance_scale=4.0,
             width=960, height=1280,
             ).images[0]
image.save("toy_example.jpg")
```
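Continuing from the snippet above, the style strength can be tuned by re-fusing the LoRA at a different scale, and the LoRA can be removed again with `unfuse_lora` / `unload_lora_weights`. A minimal sketch; the `0.8` scale and the output filename are illustrative, not recommendations from this repository:
```python
# Undo the scale-1.0 fusion above, then re-fuse at a weaker scale for a subtler line-art look.
pipe.unfuse_lora()
pipe.fuse_lora(lora_scale=0.8)  # illustrative value

image = pipe(prompt=prompt,
             negative_prompt=negative_prompt,
             num_inference_steps=24,
             guidance_scale=4.0,
             width=960, height=1280,
             ).images[0]
image.save("toy_example_scale_0.8.jpg")

# To drop the LoRA entirely and return to the plain SD3.5 base model:
pipe.unfuse_lora()
pipe.unload_lora_weights()
```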