Files changed (1)
  1. README.md +55 -0
README.md CHANGED
@@ -16,6 +16,61 @@ zeroscope_v2_XL uses 15.3gb of vram when rendering 30 frames at 1024x576
  2. Replace the respective files in the 'stable-diffusion-webui\models\ModelScope\t2v' directory.
  ### Upscaling recommendations
  For upscaling, it's recommended to use the 1111 extension. It works best at 1024x576 with a denoise strength between 0.66 and 0.85. Remember to use the same prompt that was used to generate the original clip.
+
+ ### Usage in 🧨 Diffusers
+
+ Let's first install the required libraries:
+
+ ```bash
+ $ pip install git+https://github.com/huggingface/diffusers.git
+ $ pip install transformers accelerate torch
+ ```
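+
+ The pipelines below run in fp16 on a GPU, so your PyTorch build needs to match your CUDA setup; if you need a CUDA-specific wheel, something along these lines works (the cu118 tag is just one example, match it to your driver):
+
+ ```bash
+ $ pip install torch --index-url https://download.pytorch.org/whl/cu118
+ ```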
+
+ Now, let's generate a low-resolution video using [cerspense/zeroscope_v2_576w](https://huggingface.co/cerspense/zeroscope_v2_576w).
+
+ ```py
+ import torch
+ from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
+ from diffusers.utils import export_to_video
+
+ pipe = DiffusionPipeline.from_pretrained("cerspense/zeroscope_v2_576w", torch_dtype=torch.float16)
+ pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
+ # Offload model components to CPU when idle and decode the VAE in slices
+ # to keep peak VRAM usage low.
+ pipe.enable_model_cpu_offload()
+ pipe.enable_vae_slicing()
+
+ prompt = "Darth Vader is surfing on waves"
+ video_frames = pipe(prompt, num_inference_steps=40, height=320, width=576, num_frames=36).frames
+ video_path = export_to_video(video_frames)
+ ```
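+
+ If you want the clip to be reproducible across runs, a seeded `torch.Generator` can be passed to the call above (a minimal sketch; the seed value is arbitrary):
+
+ ```py
+ # Fix the initial noise so repeated runs produce the same clip.
+ generator = torch.Generator(device="cpu").manual_seed(42)
+ video_frames = pipe(
+     prompt,
+     num_inference_steps=40,
+     height=320,
+     width=576,
+     num_frames=36,
+     generator=generator,
+ ).frames
+ ```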
+
+ Next, we can upscale it using [cerspense/zeroscope_v2_XL](https://huggingface.co/cerspense/zeroscope_v2_XL).
+
+ ```py
+ from PIL import Image
+
+ pipe = DiffusionPipeline.from_pretrained("cerspense/zeroscope_v2_XL", torch_dtype=torch.float16)
+ pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
+ pipe.enable_model_cpu_offload()
+ pipe.enable_vae_slicing()
+
+ # Resize the low-resolution frames to the XL model's native 1024x576.
+ video = [Image.fromarray(frame).resize((1024, 576)) for frame in video_frames]
+
+ # strength controls how much the upscaler is allowed to change the input clip.
+ video_frames = pipe(prompt, video=video, strength=0.6).frames
+ video_path = export_to_video(video_frames, output_video_path="video_1024_darth_vader_36.mp4")
+ ```
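+
+ The 0.66-0.85 denoise range recommended above applies to the 1111 extension; in diffusers the corresponding knob is `strength`, and it can be worth sweeping a few values (the ones below are illustrative):
+
+ ```py
+ # Higher strength adds more detail but drifts further from the source clip.
+ for strength in (0.55, 0.6, 0.7):
+     frames = pipe(prompt, video=video, strength=strength).frames
+     export_to_video(frames, output_video_path=f"video_1024_strength_{strength}.mp4")
+ ```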
+
+ Here are some results:
+
+ <table>
+     <tr>
+         <td align="center">
+             Darth Vader is surfing on waves.
+             <br>
+             <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/darth_vader_36_1024.gif"
+                  alt="Darth Vader surfing on waves."
+                  style="width: 576px;" />
+         </td>
+     </tr>
+ </table>
+
+
  ### Known issues
  Rendering at lower resolutions or fewer than 24 frames could lead to suboptimal outputs. <br />