---
license: creativeml-openrail-m
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
tags:
  - sdxl
  - sdxl-diffusers
  - text-to-image
  - diffusers
  - simpletuner
  - not-for-all-audiences
  - lora
  - template:sd-lora
  - lycoris
inference: true
widget:
- text: 'unconditional (blank prompt)'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_0_0.png
- text: 'ggn_style painting of a hipster making a chair'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_1_0.png
- text: 'ggn_style painting of a hamster'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_2_0.png
- text: 'in the style of ggn_style, A painting of a woman stands near the water holding an object. Another woman swims in the water. A tree with twisted branches is at the foreground left. Flowers and vegetation are near the lower center. Hills with vegetation are in the background. Text ''Parau na te Varua ino'' at the bottom left and artist''s signature at the lower right.'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_3_0.png
- text: 'ggn_style, A seated woman with long dark hair is depicted in a front-facing view. She is wearing a dress with a white collar and appears to be in her thirties. Her hands are on her lap. Green leaves and flowers surround her.'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_4_0.png
- text: 'ggm_style, tropical fruits and flowers, bold outlines, non-naturalistic colors, decorative composition'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_5_0.png
- text: 'DaVinciXL, One mechanical device with gears and levers, no human subjects, one item in the image.'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_6_0.png
---

# davinci-sdxl-lora-05

This is a LyCORIS adapter derived from [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co./stabilityai/stable-diffusion-xl-base-1.0).

The main validation prompt used during training was:

```
DaVinciXL, One mechanical device with gears and levers, no human subjects, one item in the image.
```

## Validation settings

- CFG: `4.2`
- CFG Rescale: `0.0`
- Steps: `30`
- Sampler: `None`
- Seed: `42`
- Resolution: `1024x1024`

Note: The validation settings are not necessarily the same as the [training settings](#training-settings).

You can find some example images in the following gallery:

<Gallery />

The text encoder **was not** trained. You may reuse the base model text encoder for inference.
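For reference, the validation settings above map directly onto `diffusers` call arguments. The sketch below is illustrative only: it uses the base model without the adapter (adapter loading is shown in the [Inference](#inference) section further down), and since no sampler is recorded above it leaves the pipeline's default scheduler in place.

```python
# Minimal sketch of the validation settings, base model only (no adapter).
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.bfloat16
).to("cuda")

image = pipeline(
    prompt="DaVinciXL, One mechanical device with gears and levers, no human subjects, one item in the image.",
    negative_prompt="blurry, cropped, ugly",
    num_inference_steps=30,                              # Steps: 30
    guidance_scale=4.2,                                  # CFG: 4.2
    guidance_rescale=0.0,                                # CFG Rescale: 0.0
    width=1024, height=1024,                             # Resolution: 1024x1024
    generator=torch.Generator("cuda").manual_seed(42),   # Seed: 42
).images[0]
```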
## Training settings

- Training epochs: 0
- Training steps: 200
- Learning rate: 8e-05
- Effective batch size: 16
  - Micro-batch size: 8
  - Gradient accumulation steps: 2
  - Number of GPUs: 1
- Prediction type: epsilon
- Rescaled betas zero SNR: False
- Optimizer: optimi-stableadamw (weight_decay=1e-3)
- Precision: Pure BF16
- Quantised: Yes (int8-quanto)
- Xformers: Not used
- LyCORIS Config:

```json
{
    "algo": "lokr",
    "multiplier": 1.0,
    "linear_dim": 10000,
    "linear_alpha": 1,
    "factor": 16,
    "apply_preset": {
        "target_module": [
            "Attention",
            "FeedForward"
        ],
        "module_algo_map": {
            "Attention": {
                "factor": 16
            },
            "FeedForward": {
                "factor": 8
            }
        }
    }
}
```

## Datasets

### davinci-sdxl-512

- Repeats: 10
- Total number of images: 50
- Total number of aspect buckets: 8
- Resolution: 0.262144 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None

### davinci-sdxl-1024

- Repeats: 10
- Total number of images: 50
- Total number of aspect buckets: 16
- Resolution: 1.048576 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None

### davinci-sdxl-512-crop

- Repeats: 10
- Total number of images: 50
- Total number of aspect buckets: 1
- Resolution: 0.262144 megapixels
- Cropped: True
- Crop style: random
- Crop aspect: square

### davinci-sdxl-1024-crop

- Repeats: 10
- Total number of images: 50
- Total number of aspect buckets: 1
- Resolution: 1.048576 megapixels
- Cropped: True
- Crop style: random
- Crop aspect: square

## Inference

```python
import torch
from diffusers import DiffusionPipeline
from lycoris import create_lycoris_from_weights

model_id = 'stabilityai/stable-diffusion-xl-base-1.0'
adapter_id = 'pytorch_lora_weights.safetensors' # you will have to download this manually
lora_scale = 1.0

# Load the base SDXL pipeline; bf16 matches the training precision listed above.
pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# SDXL uses a UNet, so the LyCORIS weights are merged into pipeline.unet.
wrapper, _ = create_lycoris_from_weights(lora_scale, adapter_id, pipeline.unet)
wrapper.merge_to()

prompt = "DaVinciXL, One mechanical device with gears and levers, no human subjects, one item in the image."
negative_prompt = 'blurry, cropped, ugly'

pipeline.to('cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu')
image = pipeline(
    prompt=prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=30,
    generator=torch.Generator(device='cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu').manual_seed(1641421826),
    width=1024,
    height=1024,
    guidance_scale=4.2,
    guidance_rescale=0.0,
).images[0]
image.save("output.png", format="PNG")
```
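Because `merge_to()` folds the adapter into the UNet weights in place, changing the adapter strength afterwards means starting from a freshly loaded pipeline. A minimal sketch for experimenting with a weaker effect; the `0.7` multiplier is an illustrative value, not a recommendation from training:

```python
# Reload the base weights, then merge again with a different multiplier.
pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)
wrapper, _ = create_lycoris_from_weights(0.7, adapter_id, pipeline.unet)  # 0.7 is illustrative
wrapper.merge_to()
```

On GPUs with limited VRAM, `pipeline.enable_model_cpu_offload()` can be used in place of `pipeline.to('cuda')` at the cost of slower generation.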