---
license: other
license_link: https://huggingface.co./THUDM/CogVideoX-5b-I2V/blob/main/LICENSE
language:
  - en
tags:
  - cogvideox
  - video-generation
  - thudm
  - image-to-video
inference: false
---

CogVideoX-5B-I2V

📄 Read in English | 🤗 Huggingface Space | 🌐 Github | 📜 arxiv

πŸ“ Visit Qingying and API Platform for the commercial version of the video generation model

Model Introduction

CogVideoX is an open-source video generation model originating from Qingying. The table below lists the video generation models we offer in this version.

| Model Name | CogVideoX-2B | CogVideoX-5B | CogVideoX-5B-I2V (This Repository) |
|---|---|---|---|
| Model Description | Entry-level model, balancing compatibility. Low cost for running and secondary development. | Larger model with higher video generation quality and better visual effects. | CogVideoX-5B image-to-video version. |
| Inference Precision | FP16* (recommended), BF16, FP32, FP8*, INT8; INT4 not supported | BF16 (recommended), FP16, FP32, FP8*, INT8; INT4 not supported | BF16 (recommended), FP16, FP32, FP8*, INT8; INT4 not supported |
| Single GPU Memory Usage | SAT FP16: 18GB<br>diffusers FP16: from 4GB*<br>diffusers INT8 (torchao): from 3.6GB* | SAT BF16: 26GB<br>diffusers BF16: from 5GB*<br>diffusers INT8 (torchao): from 4.4GB* | SAT BF16: 26GB<br>diffusers BF16: from 5GB*<br>diffusers INT8 (torchao): from 4.4GB* |
| Multi-GPU Inference Memory Usage | FP16: 10GB* using diffusers | BF16: 15GB* using diffusers | BF16: 15GB* using diffusers |
| Inference Speed (Step = 50, FP/BF16) | Single A100: ~90 seconds<br>Single H100: ~45 seconds | Single A100: ~180 seconds<br>Single H100: ~90 seconds | Single A100: ~180 seconds<br>Single H100: ~90 seconds |
| Fine-tuning Precision | FP16 | BF16 | BF16 |
| Fine-tuning Memory Usage | 47 GB (bs=1, LORA)<br>61 GB (bs=2, LORA)<br>62 GB (bs=1, SFT) | 63 GB (bs=1, LORA)<br>80 GB (bs=2, LORA)<br>75 GB (bs=1, SFT) | 78 GB (bs=1, LORA)<br>75 GB (bs=1, SFT, 16 GPU) |
| Prompt Language | English* | English* | English* |
| Maximum Prompt Length | 226 Tokens | 226 Tokens | 226 Tokens |
| Video Length | 6 Seconds | 6 Seconds | 6 Seconds |
| Frame Rate | 8 Frames / Second | 8 Frames / Second | 8 Frames / Second |
| Video Resolution | 720 x 480, no support for other resolutions (including fine-tuning) | 720 x 480, no support for other resolutions (including fine-tuning) | 720 x 480, no support for other resolutions (including fine-tuning) |
| Position Embedding | 3d_sincos_pos_embed | 3d_rope_pos_embed | 3d_rope_pos_embed + learnable_pos_embed |
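
For reference, the num_frames=49 default used in the code examples below is consistent with these specifications: 49 sampled frames span 48 frame intervals, and 48 intervals at 8 frames per second correspond to the 6-second clips listed above.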

Data Explanation

  • When testing with the diffusers library, all optimizations offered by the library were enabled. Actual memory usage has not been tested on devices other than NVIDIA A100 / H100; in general, the scheme should work on all devices with NVIDIA Ampere architecture or newer. If the optimizations are disabled, memory consumption multiplies, with peak usage roughly 3 times the values in the table, while speed increases by roughly 3-4 times. You can selectively disable some of these optimizations (see the sketch after this list), including:
pipe.enable_sequential_cpu_offload()
pipe.vae.enable_slicing()
pipe.vae.enable_tiling()
  • For multi-GPU inference, the enable_sequential_cpu_offload() optimization needs to be disabled.
  • Using INT8 models slows down inference; this is done so that GPUs with less memory can run the model while keeping video quality loss to a minimum.
  • The CogVideoX-2B model was trained in FP16 precision, and all CogVideoX-5B models were trained in BF16 precision. We recommend using the precision in which the model was trained for inference.
  • PytorchAO and Optimum-quanto can be used to quantize the text encoder, transformer, and VAE modules to reduce the memory requirements of CogVideoX. This allows the model to run on a free T4 Colab or on GPUs with less memory. Also note that TorchAO quantization is fully compatible with torch.compile, which can significantly improve inference speed. FP8 precision requires an NVIDIA H100 or newer GPU and a source installation of the torch, torchao, diffusers, and accelerate Python packages; CUDA 12.4 is recommended.
  • The inference speed tests also used the above memory optimization scheme. Without memory optimization, inference speed increases by about 10%. Only the diffusers version of the model supports quantization.
  • The model only supports English input; prompts in other languages can be translated into English, for example during prompt refinement with a large language model.
  • The memory usage of model fine-tuning is tested in an 8 * H100 environment, and the program automatically uses Zero 2 optimization. If a specific number of GPUs is marked in the table, that number or more GPUs must be used for fine-tuning.
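
As a minimal sketch of the memory/speed trade-off described above (assuming a GPU with enough memory to hold the full pipeline, such as an A100 or H100), the snippet below keeps the whole pipeline on the GPU instead of using sequential CPU offloading and retains only the VAE slicing/tiling optimizations:

import torch
from diffusers import CogVideoXImageToVideoPipeline

pipe = CogVideoXImageToVideoPipeline.from_pretrained(
    "THUDM/CogVideoX-5b-I2V", torch_dtype=torch.bfloat16
)

# Skip pipe.enable_sequential_cpu_offload() and keep everything on the GPU.
# This is much faster, but peak memory is far above the table values, which
# assume all offloading optimizations are enabled.
pipe.to("cuda")

# VAE slicing/tiling mainly reduce memory during decoding at a small speed cost;
# comment these out as well if memory is not a concern.
pipe.vae.enable_slicing()
pipe.vae.enable_tiling()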

Reminders

  • Use SAT for inference and fine-tuning of SAT-version models. Feel free to visit our GitHub for more details.

Getting Started Quickly 🤗

This model supports deployment using the Hugging Face diffusers library. You can follow the steps below to get started.

We recommend visiting our GitHub to check out prompt optimization and prompt conversion for a better experience.

  1. Install the required dependencies
# diffusers>=0.30.3
# transformers>=4.44.2
# accelerate>=0.34.0
# imageio-ffmpeg>=0.5.1
pip install --upgrade transformers accelerate diffusers imageio-ffmpeg
  2. Run the code
import torch
from diffusers import CogVideoXImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

# Text prompt and conditioning image for image-to-video generation.
prompt = "A little girl is riding a bicycle at high speed. Focused, detailed, realistic."
image = load_image(image="input.jpg")

# Load the pipeline in BF16, the precision the 5B models were trained in.
pipe = CogVideoXImageToVideoPipeline.from_pretrained(
    "THUDM/CogVideoX-5b-I2V",
    torch_dtype=torch.bfloat16
)

# Memory optimizations: offload modules to the CPU between uses and decode the
# VAE in tiles/slices. See the Data Explanation section for the speed trade-off.
pipe.enable_sequential_cpu_offload()
pipe.vae.enable_tiling()
pipe.vae.enable_slicing()

# Generate 49 frames (about 6 seconds at 8 fps) with 50 denoising steps.
video = pipe(
    prompt=prompt,
    image=image,
    num_videos_per_prompt=1,
    num_inference_steps=50,
    num_frames=49,
    guidance_scale=6,
    generator=torch.Generator(device="cuda").manual_seed(42),
).frames[0]

export_to_video(video, "output.mp4", fps=8)

Quantized Inference

PytorchAO and Optimum-quanto can be used to quantize the text encoder, transformer, and VAE modules to reduce CogVideoX's memory requirements. This allows the model to run on a free T4 Colab or on GPUs with lower VRAM. Also note that TorchAO quantization is fully compatible with torch.compile, which can significantly accelerate inference.

# To get started, PytorchAO needs to be installed from the GitHub source and PyTorch Nightly.
# Source and nightly installation is only required until the next release.

import torch
from diffusers import AutoencoderKLCogVideoX, CogVideoXTransformer3DModel, CogVideoXImageToVideoPipeline
from diffusers.utils import export_to_video, load_image
from transformers import T5EncoderModel
from torchao.quantization import quantize_, int8_weight_only

quantization = int8_weight_only

text_encoder = T5EncoderModel.from_pretrained("THUDM/CogVideoX-5b-I2V", subfolder="text_encoder", torch_dtype=torch.bfloat16)
quantize_(text_encoder, quantization())

transformer = CogVideoXTransformer3DModel.from_pretrained("THUDM/CogVideoX-5b-I2V", subfolder="transformer", torch_dtype=torch.bfloat16)
quantize_(transformer, quantization())

vae = AutoencoderKLCogVideoX.from_pretrained("THUDM/CogVideoX-5b-I2V", subfolder="vae", torch_dtype=torch.bfloat16)
quantize_(vae, quantization())

# Create pipeline and run inference
pipe = CogVideoXImageToVideoPipeline.from_pretrained(
    "THUDM/CogVideoX-5b-I2V",
    text_encoder=text_encoder,
    transformer=transformer,
    vae=vae,
    torch_dtype=torch.bfloat16,
)

pipe.enable_model_cpu_offload()
pipe.vae.enable_tiling()
pipe.vae.enable_slicing()

prompt = "A little girl is riding a bicycle at high speed. Focused, detailed, realistic."
image = load_image(image="input.jpg")
video = pipe(
    prompt=prompt,
    image=image,
    num_videos_per_prompt=1,
    num_inference_steps=50,
    num_frames=49,
    guidance_scale=6,
    generator=torch.Generator(device="cuda").manual_seed(42),
).frames[0]

export_to_video(video, "output.mp4", fps=8)
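
As noted above, TorchAO quantization is compatible with torch.compile. As a minimal sketch (the compile settings below are an assumption rather than a benchmarked configuration), the transformer can be compiled after the pipeline is built; the first call is slow while compilation runs, but subsequent calls are faster:

# Optional: compile the quantized transformer for faster repeated inference.
# The first pipeline call triggers compilation and is therefore slow.
pipe.transformer = torch.compile(pipe.transformer, mode="max-autotune", fullgraph=True)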

Additionally, these models can be serialized and stored using PytorchAO in quantized data types to save disk space (a minimal sketch follows below).
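
As a minimal, untested sketch (the output directory name is hypothetical, it assumes the int8-quantized transformer from the example above, and it saves/reloads the weights without safetensors since torchao tensor subclasses may not serialize to that format):

# Save the quantized transformer to disk using non-safetensors serialization.
transformer.save_pretrained("cogvideox-5b-i2v-transformer-int8", safe_serialization=False)

# Later, reload it in its quantized form and pass it to the pipeline via the
# transformer= argument of CogVideoXImageToVideoPipeline.from_pretrained.
transformer = CogVideoXTransformer3DModel.from_pretrained(
    "cogvideox-5b-i2v-transformer-int8",
    torch_dtype=torch.bfloat16,
    use_safetensors=False,
)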

Further Exploration

Feel free to visit our GitHub, where you'll find:

  1. More detailed technical explanations and code.
  2. Optimized prompt examples and conversions.
  3. Detailed code for model inference and fine-tuning.
  4. Project update logs and more interactive opportunities.
  5. CogVideoX toolchain to help you better use the model.
  6. INT8 model inference code.

Model License

This model is released under the CogVideoX LICENSE.

Citation

@article{yang2024cogvideox,
  title={CogVideoX: Text-to-Video Diffusion Models with An Expert Transformer},
  author={Yang, Zhuoyi and Teng, Jiayan and Zheng, Wendi and Ding, Ming and Huang, Shiyu and Xu, Jiazheng and Yang, Yuanming and Hong, Wenyi and Zhang, Xiaohan and Feng, Guanyu and others},
  journal={arXiv preprint arXiv:2408.06072},
  year={2024}
}