---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: a fisherman nearby river, Chinese line art
  parameters:
    negative_prompt: (lowres, low quality, worst quality)
  output:
    url: images/0640244a27a6955bdc2740ef1bacafaf716d194fb77c5346264d91da.jpg
- text: a woman, Chinese line art
  parameters:
    negative_prompt: (lowres, low quality, worst quality)
  output:
    url: images/f1984bbc23957d65e0bd86273f7e8b1c22b53e2cd51ab4fa83680c87.jpg
- text: Beijing City, Chinese line art
  parameters:
    negative_prompt: (lowres, low quality, worst quality)
  output:
    url: images/756607bc025fe25935c39225bf18f3c98d24aa5878541533a9ca3424.jpg
base_model: stabilityai/stable-diffusion-3.5-large
instance_prompt: Chinese line art
license: other
license_name: stabilityai-ai-community
license_link: >-
  https://huggingface.co./stabilityai/stable-diffusion-3-medium/blob/main/LICENSE.md
---

# SD3.5-LoRA-Chinese-Line-Art

## Trigger words

You should include `Chinese line art` in your prompt to trigger the style.

## Inference

```python
import torch
from diffusers import StableDiffusion3Pipeline  # install diffusers from source for SD3.5 support

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large",
    torch_dtype=torch.bfloat16,
)
pipe.load_lora_weights(
    "Shakker-Labs/SD3.5-LoRA-Chinese-Line-Art",
    weight_name="SD35-lora-Chinese-Line-Art.safetensors",
)
pipe.fuse_lora(lora_scale=1.0)
pipe.to("cuda")

prompt = "a boat on the river, mountain in the distance, Chinese line art"
negative_prompt = "(lowres, low quality, worst quality)"

image = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=24,
    guidance_scale=4.0,
    width=960,
    height=1280,
).images[0]
image.save("toy_example.jpg")
```
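
If you want to experiment with the LoRA strength instead of fusing it at a fixed scale, you can keep the adapter unfused and change its weight between generations. The sketch below is a minimal example, not part of the original card: it assumes the `set_adapters` and `unload_lora_weights` helpers available in recent `diffusers` releases, and the adapter name `line_art` is chosen here purely for illustration.

```python
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large",
    torch_dtype=torch.bfloat16,
)
# Load the LoRA under an explicit adapter name (the name is arbitrary, for illustration only)
pipe.load_lora_weights(
    "Shakker-Labs/SD3.5-LoRA-Chinese-Line-Art",
    weight_name="SD35-lora-Chinese-Line-Art.safetensors",
    adapter_name="line_art",
)
pipe.to("cuda")

prompt = "a boat on the river, mountain in the distance, Chinese line art"
negative_prompt = "(lowres, low quality, worst quality)"

# Sweep a few LoRA strengths without re-loading or fusing the weights
for scale in (0.6, 0.8, 1.0):
    pipe.set_adapters("line_art", adapter_weights=scale)
    image = pipe(
        prompt=prompt,
        negative_prompt=negative_prompt,
        num_inference_steps=24,
        guidance_scale=4.0,
        width=960,
        height=1280,
    ).images[0]
    image.save(f"line_art_scale_{scale}.jpg")

# Remove the adapter entirely to restore the base model
pipe.unload_lora_weights()
```

Lower scales keep more of the base model's look, while values near 1.0 lean fully into the line-art style.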