---
license: apache-2.0
base_model: Wan-AI/Wan2.1-T2V-1.3B
language:
- en
- zh
pipeline_tag: text-to-video
tags:
- video
- video-generation
- bitsandbytes
- quantization
- nf4
library_name: diffusers
---
An attempt to run Wan2.1-T2V-1.3B with lower VRAM.

Changes made:
- **Diffusion Model:** Converted all Linear layers from float32 to NF4, shrinking the model from roughly 6 GB to about 1 GB.
- **VAE:** Contains no Linear layers, so there is nothing to quantize here.
- **UMT5 Encoder:** This is the component that takes the most VRAM, and it is large enough that I am having difficulty loading it on my poor 4060 (8 GB VRAM). If it can be quantized too, running this model with low VRAM becomes very easy.
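The float32-to-4-bit shrink on the Linear layers can be illustrated with a simplified blockwise quantizer. This is a sketch of the mechanics only: real NF4 (bitsandbytes / QLoRA) places its 16 codebook levels at normal-distribution quantiles and packs two indices per byte, while this toy version uses evenly spaced levels and leaves indices unpacked.

```python
import numpy as np

# 16 evenly spaced levels in [-1, 1]; real NF4 uses Gaussian-quantile levels.
LEVELS = np.linspace(-1.0, 1.0, 16, dtype=np.float32)

def quantize_4bit(w, block_size=64):
    """Blockwise absmax 4-bit quantization of a 1-D float32 array.

    Returns uint8 codebook indices (one per weight) and one float32
    scale per block. Packed storage would be 0.5 byte/weight plus
    4 bytes per 64-weight block, vs. 4 bytes/weight for float32 --
    roughly a 7x reduction, in line with ~6 GB -> ~1 GB.
    """
    assert w.size % block_size == 0
    blocks = w.reshape(-1, block_size)
    scales = np.abs(blocks).max(axis=1, keepdims=True)  # absmax per block
    scales[scales == 0] = 1.0                           # avoid div-by-zero
    normed = blocks / scales                            # values now in [-1, 1]
    idx = np.abs(normed[..., None] - LEVELS).argmin(axis=-1).astype(np.uint8)
    return idx, scales

def dequantize_4bit(idx, scales):
    """Reconstruct approximate float32 weights from indices and scales."""
    return (LEVELS[idx] * scales).astype(np.float32).reshape(-1)
```

The quantization error per weight is bounded by half the level spacing times that block's absmax scale, which is why blockwise scaling matters: one outlier only hurts its own 64-value block.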
I will add the quantized UMT5 encoder later if I can get it working.