PeterL1n committed on
Commit 22daf88
1 Parent(s): e3a0719

Update readme

Files changed (1)
  1. README.md +1 -3
README.md CHANGED

@@ -10,12 +10,12 @@ inference: false
 # AnimateDiff-Lightning
 
 <video src='https://huggingface.co/ByteDance/AnimateDiff-Lightning/resolve/main/animatediff_lightning_samples_t2v.mp4' width="100%" autoplay muted loop></video>
+<video src='https://huggingface.co/ByteDance/AnimateDiff-Lightning/resolve/main/animatediff_lightning_samples_v2v.mp4' width="100%" autoplay muted loop></video>
 
 AnimateDiff-Lightning is a lightning-fast text-to-video generation model. It can generate 16-frame 512px videos in a few steps. For more information, please refer to our research paper: [AnimateDiff-Lightning: Cross-Model Diffusion Distillation](https://huggingface.co/ByteDance/AnimateDiff-Lightning/resolve/main/animatediff_lightning_report.pdf). We release the model as part of the research.
 
 Our models are distilled from [AnimateDiff SD1.5 v2](https://huggingface.co/guoyww/animatediff). This repository contains checkpoints for 1-step, 2-step, 4-step, and 8-step distilled models. The generation quality of our 2-step, 4-step, and 8-step models is great. Our 1-step model is only provided for research purposes.
 
-
 ## Recommendation
 
 AnimateDiff-Lightning produces the best results when used with stylized base models. We recommend using the following base models:

@@ -78,8 +78,6 @@ export_to_gif(output.frames[0], "animation.gif")
 
 ## Video-to-Video Generation
 
-<video src='https://huggingface.co/ByteDance/AnimateDiff-Lightning/resolve/main/animatediff_lightning_samples_v2v.mp4' width="100%" autoplay muted loop></video>
-
 AnimateDiff-Lightning is great for video-to-video generation. We provide the simplest ComfyUI workflow using ControlNet.
 
 1. Download [animatediff_lightning_v2v_openpose_workflow.json](https://huggingface.co/ByteDance/AnimateDiff-Lightning/raw/main/comfyui/animatediff_lightning_v2v_openpose_workflow.json) and import it in ComfyUI.
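For context, the `export_to_gif(output.frames[0], "animation.gif")` line in the second hunk header is the tail of the README's Diffusers text-to-video example. A minimal sketch of that usage pattern follows; the repo id and step counts come from this card, while the checkpoint filename pattern and the `emilianJR/epiCRealism` base model are assumptions used for illustration (any stylized SD1.5 base per the Recommendation section should slot in):

```python
# Sketch of the Diffusers usage the diff's second hunk references.
# Assumed: the checkpoint naming and the base model choice; the repo id
# and the 1/2/4/8-step checkpoints are stated on this model card.
import torch
from diffusers import AnimateDiffPipeline, MotionAdapter, EulerDiscreteScheduler
from diffusers.utils import export_to_gif
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

device = "cuda"
dtype = torch.float16

step = 4  # Checkpoints exist for 1, 2, 4, and 8 steps; 1-step is research-only.
repo = "ByteDance/AnimateDiff-Lightning"
ckpt = f"animatediff_lightning_{step}step_diffusers.safetensors"  # assumed naming
base = "emilianJR/epiCRealism"  # placeholder; pick a stylized SD1.5 base

# Load the distilled motion module into a MotionAdapter, then build the pipeline.
adapter = MotionAdapter().to(device, dtype)
adapter.load_state_dict(load_file(hf_hub_download(repo, ckpt), device=device))
pipe = AnimateDiffPipeline.from_pretrained(
    base, motion_adapter=adapter, torch_dtype=dtype
).to(device)

# Few-step distilled models are typically sampled with trailing timestep
# spacing and little to no classifier-free guidance.
pipe.scheduler = EulerDiscreteScheduler.from_config(
    pipe.scheduler.config, timestep_spacing="trailing", beta_schedule="linear"
)

output = pipe(
    prompt="A girl smiling",
    guidance_scale=1.0,
    num_inference_steps=step,
)
export_to_gif(output.frames[0], "animation.gif")
```

Setting `num_inference_steps` to match the chosen checkpoint's step count is the point of the distillation: the 4-step checkpoint is meant to be sampled in exactly 4 steps.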
 