What is the Image variable in the codes?

#32
by assa8945 - opened

I am running the demo code for v2_XL and ran into the following problem.

pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()
pipe.enable_vae_slicing()

video = [Image.fromarray(frame).resize((1024, 576)) for frame in video_frames]

video_frames = pipe(prompt, video=video, strength=0.6).frames
video_path = export_to_video(video_frames, output_video_path="/home/patrick/videos/video_1024_darth_vader_36.mp4")

{
    "name": "NameError",
    "message": "name 'Image' is not defined",
    "stack": "---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
Cell In[7], line 6
      3 pipe.enable_model_cpu_offload()
      4 pipe.enable_vae_slicing()
----> 6 video = [Image.fromarray(frame).resize((1024, 576)) for frame in video_frames]
      8 video_frames = pipe(prompt, video=video, strength=0.6).frames
      9 video_path = export_to_video(video_frames, output_video_path=\"/home/patrick/videos/video_1024_darth_vader_36.mp4\")

NameError: name 'Image' is not defined"
}
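I am guessing Image is Pillow's PIL.Image class, which the demo snippet never imports; if that is right, something like this should clear the NameError:

from PIL import Image  # Pillow's Image class, presumably what the demo expects

video = [Image.fromarray(frame).resize((1024, 576)) for frame in video_frames]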

You can try something like this, good luck.

PYTHON SCRIPT ONE:

import torch
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
from diffusers.utils import export_to_video

pipe = DiffusionPipeline.from_pretrained("cerspense/zeroscope_v2_576w", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

pipe.enable_model_cpu_offload()
pipe.enable_vae_slicing()
pipe.unet.enable_forward_chunking(chunk_size=1, dim=1)

prompt = "Darth Vader is surfing on waves"

video_frames = pipe(prompt, num_inference_steps=40, height=320, width=576, num_frames=36).frames[0]

video_path = export_to_video(video_frames, output_video_path="low_res_video.mp4")

print(f"Low-resolution video saved at: {video_path}")

import pickle

# save the low-resolution frames so script two can reload them
with open('video_frames.pkl', 'wb') as f:
    pickle.dump(video_frames, f)

PYTHON SCRIPT TWO:

import torch
from PIL import Image
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
from diffusers.utils import export_to_video
import pickle
import numpy as np

# reload the low-resolution frames produced by script one
with open('video_frames.pkl', 'rb') as f:
    video_frames = pickle.load(f)

pipe = DiffusionPipeline.from_pretrained("cerspense/zeroscope_v2_XL", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

pipe.enable_model_cpu_offload()
pipe.enable_vae_slicing()

# convert each frame to a resized PIL image for the XL upscaling pass
video = []
for frame in video_frames:
    frame = np.squeeze(frame)

    # frames may come back as floats in [0, 1]; PIL needs uint8
    if frame.dtype != np.uint8:
        frame = (255 * frame).astype(np.uint8)

    resized_frame = Image.fromarray(frame).resize((1024, 576))
    video.append(resized_frame)

prompt = "Darth Vader is surfing on waves"

video_frames_upscaled = pipe(prompt, video=video, strength=0.6).frames[0]

video_path_upscaled = export_to_video(video_frames_upscaled, output_video_path="upscaled_video.mp4")

print(f"Upscaled video saved at: {video_path_upscaled}")

You need a video card with at least 16 GB of VRAM to run script two with this code and model.
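If you are not sure how much VRAM your card has, a quick check (assuming a CUDA device at index 0):

import torch
props = torch.cuda.get_device_properties(0)
print(f"{props.name}: {props.total_memory / 1024**3:.1f} GB VRAM")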

@Evados Thank you for providing the code. It kind of worked, but not quite: the generated video I got is 0 seconds long (see below). Could it be because I copied your code with some indentation errors in the loops?

[two screenshots attached: image.png]

Sorry for the delay in replying.
Here you can download my script files; I have added one more for the I2VGenXL model too.
If you would like to gain a bit more speed with the I2VGen model, disable this line: pipeline.unet.enable_forward_chunking(chunk_size=1, dim=1)
https://www.mediafire.com/file/ml7q43vs0lftuka/DG_ModelScopeT2V.zip
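In other words, a rough sketch of the trade-off (the full scripts are in the zip):

# Forward chunking runs the UNet's feed-forward layers in smaller chunks
# (here one slice at a time along dim=1), lowering peak VRAM at the cost of speed.
pipeline.unet.enable_forward_chunking(chunk_size=1, dim=1)  # keep on low-VRAM cards
# comment out the line above if you have enough VRAM and want the extra speed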
