# SD3.5-LoRA-Futuristic-Bzonze-Colored
![](https://huggingface.co./Shakker-Labs/SD3.5-LoRA-Futuristic-Bzonze-Colored/resolve/main/images/b8b98770d257ab5b8fdeee37bcf61e85c562b45c5bb79f0c2708361b.jpg)
- Prompt: `a woman, Futuristic bzonze-colored`
- Negative Prompt: `(lowres, low quality, worst quality)`
![](https://huggingface.co./Shakker-Labs/SD3.5-LoRA-Futuristic-Bzonze-Colored/resolve/main/images/6371e4e34450732c155aa1205f0502dd7e9839ac61a6ac8a460c0282.jpg)
- Prompt: `a cup, Futuristic bzonze-colored`
- Negative Prompt: `(lowres, low quality, worst quality)`
![](https://huggingface.co./Shakker-Labs/SD3.5-LoRA-Futuristic-Bzonze-Colored/resolve/main/images/fefeaac1e88b5883abdf0bc0403cf7c592104729148cc93ffe838b26.jpg)
- Prompt: `a lion, Futuristic bzonze-colored`
- Negative Prompt: `(lowres, low quality, worst quality)`
## Trigger words

Use `Futuristic bzonze-colored` in your prompt to trigger the style.
## Inference

```python
import torch
from diffusers import StableDiffusion3Pipeline  # requires diffusers>=0.31.0

# Load the base SD3.5 Large pipeline in bfloat16
pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large", torch_dtype=torch.bfloat16
)

# Load and fuse the LoRA weights into the base model
pipe.load_lora_weights(
    "Shakker-Labs/SD3.5-LoRA-Futuristic-Bzonze-Colored",
    weight_name="SD35-lora-Futuristic-Bzonze-Colored.safetensors",
)
pipe.fuse_lora(lora_scale=1.0)
pipe.to("cuda")

prompt = "a cup, Futuristic bzonze-colored"
negative_prompt = "(lowres, low quality, worst quality)"

image = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=24,
    guidance_scale=4.0,
    width=960,
    height=1280,
).images[0]
image.save("toy_example.jpg")
```
## Base model

[stabilityai/stable-diffusion-3.5-large](https://huggingface.co./stabilityai/stable-diffusion-3.5-large)