Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit)
## Trigger words

**'HST style autochrome photo'**

## Config Parameters

*Dim:16 Alpha:32 Optimizer:Ademamix8bit LR:2e-4* **More info below!** <br>
Fine-tuned using the **Google Colab Notebook** of **ai-toolkit**.<br>
I've used an A100 via Colab Pro.<br>
However, training SD3.5 may potentially work with Free Colab or lower VRAM in general, especially if one were to use a lower rank (try 4 or 8), a smaller dataset (which reduces caching/bucketing/pre-loading overhead), a batch size of 1, the Adamw8bit optimizer, 512 resolution, perhaps the `lowvram, true` argument, and possibly an alternate quantization variant.<br>
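For orientation, the parameters above might be collected into an ai-toolkit-style YAML config. This is a hedged sketch, not the exact config used for this model: the key names (`linear`, `linear_alpha`, `low_vram`, etc.) follow ai-toolkit's published example configs, the run name and dataset path are placeholders, and the exact optimizer string may differ between toolkit versions.

```yaml
# Sketch of an ai-toolkit LoRA training config reflecting the parameters above.
# Key names follow ai-toolkit's example configs; verify against your version.
job: extension
config:
  name: "hst_autochrome_lora"        # hypothetical run name
  process:
    - type: "sd_trainer"
      network:
        type: "lora"
        linear: 16                   # Dim:16
        linear_alpha: 32             # Alpha:32
      train:
        batch_size: 1                # low-VRAM suggestion from the text
        optimizer: "ademamix8bit"    # exact string may vary by toolkit version
        lr: 2e-4
      model:
        name_or_path: "stabilityai/stable-diffusion-3.5-large"
        low_vram: true               # the "lowvram, true" argument mentioned above
      datasets:
        - folder_path: "/path/to/dataset"   # placeholder
          resolution: [512]          # reduced resolution for lower-VRAM runs
```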