---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt:
inference: false
---

# sdxl-botw LoRA by Julian BILCKE (HF: [jbilcke-hf](https://huggingface.co./jbilcke-hf), Replicate: [jbilcke](https://replicate.com/jbilcke))

### An SDXL LoRA inspired by Breath of the Wild

![lora_image](https://tjzk.replicate.delivery/models_models_cover_image/aea9c0c4-b3d6-425b-9e96-9a615220fa30/link-llama.jpeg)

## Inference with Replicate API

Grab your Replicate API token [here](https://replicate.com/account).

```bash
pip install replicate
export REPLICATE_API_TOKEN=r8_*************************************
```

```py
import replicate

output = replicate.run(
    "sdxl-botw@sha256:bf412da351d41547f117391eff2824ab0301b6ba1c6c010c4b5f766a492d62fc",
    input={"prompt": "Link riding a llama, in the style of TOK"}
)
print(output)
```

You can also run inference via the API with Node.js or curl, or locally with Cog and Docker; [check out the Replicate API page for this model](https://replicate.com/jbilcke/sdxl-botw/api).

## Inference with 🧨 diffusers

Replicate SDXL LoRAs are trained with Pivotal Tuning, which combines training a concept via Dreambooth LoRA with training a new token with Textual Inversion.

As `diffusers` doesn't yet support textual inversion for SDXL, we will use the cog-sdxl `TokenEmbeddingsHandler` class.

The trigger tokens for your prompt will be `<s0><s1>`.

```shell
pip install diffusers transformers accelerate safetensors huggingface_hub
git clone https://github.com/replicate/cog-sdxl cog_sdxl
```

```py
import torch
from huggingface_hub import hf_hub_download
from diffusers import DiffusionPipeline
from cog_sdxl.dataset_and_utils import TokenEmbeddingsHandler

# Load the SDXL base pipeline in fp16
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# Load the Dreambooth LoRA weights
pipe.load_lora_weights("jbilcke-hf/sdxl-botw", weight_name="lora.safetensors")

# Load the trained token embeddings into both SDXL text encoders
text_encoders = [pipe.text_encoder, pipe.text_encoder_2]
tokenizers = [pipe.tokenizer, pipe.tokenizer_2]

embedding_path = hf_hub_download(repo_id="jbilcke-hf/sdxl-botw", filename="embeddings.pti", repo_type="model")
embhandler = TokenEmbeddingsHandler(text_encoders, tokenizers)
embhandler.load_embeddings(embedding_path)

prompt = "Link riding a llama, in the style of <s0><s1>"
images = pipe(
    prompt,
    cross_attention_kwargs={"scale": 0.8},
).images

# your output image
images[0]
```
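
As a small follow-up to the `diffusers` example above, here is a minimal sketch (assuming the `pipe` and `prompt` variables from the previous snippet are already defined) that saves the outputs to disk while sweeping the LoRA scale passed through `cross_attention_kwargs`; the file names and the 0.5/0.8/1.0 values are illustrative choices, not something prescribed by this model.

```py
# Minimal sketch: assumes `pipe` and `prompt` from the snippet above are defined.
# The scale values are illustrative; lower values weaken the BOTW style,
# higher values apply it more strongly (at some cost to prompt fidelity).
for scale in (0.5, 0.8, 1.0):
    image = pipe(
        prompt,
        cross_attention_kwargs={"scale": scale},
    ).images[0]
    image.save(f"botw_scale_{scale}.png")  # hypothetical output file name
```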