---
title: XenoEngine SD Webui
emoji: 👀
colorFrom: indigo
colorTo: purple
sdk: docker
app_port: 7860
pinned: true
tags:
  - stable-diffusion
  - stable-diffusion-diffusers
  - text-to-image
duplicated_from: carloscar/stable-diffusion-webui-controlnet-docker
---

This is a modified version of the Space carloscar/stable-diffusion-webui-controlnet-docker.

While it should still work with GPU upgrades (untested), this Space has been optimized and configured for a "no GPU" environment. A few extensions that won't work without a GPU have been removed, and a few that improve performance or output quality in a CPU-only environment have been added.

## Setup on Hugging Face

  1. Duplicate this space to your Hugging Face account or clone this repo to your account.
  2. The on_start.sh file runs when the container starts, right before the WebUI launches. This is where you can install any additional extensions or models you need. Make sure the env value IS_SHARED_UI is set to 0 or is unset for your space; otherwise only the lightweight model installation will run and some features will be disabled.
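
The IS_SHARED_UI gate described above could be sketched as a plain shell check; this is a minimal illustration, not the actual logic in on_start.sh, and the echoed messages are placeholders:

```shell
#!/usr/bin/env sh
# Hypothetical sketch: gate the full setup on the IS_SHARED_UI env value.
# IS_SHARED_UI comes from this README; everything else here is illustrative.
if [ "${IS_SHARED_UI:-0}" = "1" ]; then
  echo "Shared UI: running lightweight model installation only"
else
  echo "Private space: installing full set of extensions and models"
fi
```

Duplicated spaces on Hugging Face typically have IS_SHARED_UI unset, so a check like this would fall through to the full setup branch.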

## Relevant links for more information

### Repo for this builder

This repo, containing the Dockerfile etc. for building the image, can originally be found on both 🤗 Hugging Face ➔ carloscar/stable-diffusion-webui-controlnet-docker and 🐙 GitHub ➔ kalaspuff/stable-diffusion-webui-controlnet-docker.

- Stable Diffusion Web UI
- WebUI extension for ControlNet
- ControlNet models
- Licenses for using Stable Diffusion models and ControlNet models

## Enable additional models (checkpoints, LoRA, VAE, etc.)

Enable the models you want to use at the bottom of the on_start.sh file. This is also the place to add any additional models you want to install when starting your space.

```bash
## Checkpoint · Example:
download-model --checkpoint "FILENAME" "URL"

## LORA (low-rank adaptation) · Example:
download-model --lora "FILENAME" "URL"

## VAE (variational autoencoder) · Example:
download-model --vae "FILENAME" "URL"
```

## Some examples of additional (optional) models

Some models, such as additional checkpoints, VAE, LoRA, etc., may already be listed in the on_start.sh file. Enable one by removing the `#` in front of its line, or disable one by deleting the line or adding a leading `#` before `download-model`.
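
For example, a pair of entries near the bottom of on_start.sh might look like this (the filenames are hypothetical placeholders, and the URL is left as-is since each model has its own):

```shell
# Disabled: the leading "#" keeps this model from being downloaded at startup.
#download-model --lora "example-lora.safetensors" "URL"

# Enabled: no leading "#", so this model is downloaded when the space starts.
download-model --vae "example-vae.safetensors" "URL"
```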

Visit the individual model pages for more information on the models and their licenses.

## Additional acknowledgements

A lot of inspiration for this Docker build comes from GitHub ➔ camenduru. Amazing things! 🙏