Update README.md
README.md CHANGED
@@ -13,23 +13,28 @@ pipeline_tag: text-to-video
 ---
 <div align="center">
 
-
+<img src="icon.jpg" width="250"/>
 
 <h2><center>Tora: Trajectory-oriented Diffusion Transformer for Video Generation</h2>
 
 Zhenghao Zhang\*, Junchao Liao\*, Menghao Li, Zuozhuo Dai, Bingxue Qiu, Siyu Zhu, Long Qin, Weizhi Wang
 
 \* equal contribution
+<br>
 
 <a href='https://arxiv.org/abs/2407.21705'><img src='https://img.shields.io/badge/ArXiv-2407.21705-red'></a>
 <a href='https://ali-videoai.github.io/tora_video/'><img src='https://img.shields.io/badge/Project-Page-Blue'></a>
 <a href="https://github.com/alibaba/Tora"><img src='https://img.shields.io/badge/Github-Link-orange'></a>
 <a href='https://www.modelscope.cn/studios/xiaoche/Tora'><img src='https://img.shields.io/badge/🤖_ModelScope-ZH_demo-%23654dfc'></a>
 <a href='https://www.modelscope.cn/studios/Alibaba_Research_Intelligence_Computing/Tora_En'><img src='https://img.shields.io/badge/🤖_ModelScope-EN_demo-%23654dfc'></a>
+<br>
 
-<a href='https://modelscope.cn/models/xiaoche/Tora'><img src='https://img.shields.io/badge/🤖_ModelScope-
-<a href='https://
+<a href='https://modelscope.cn/models/xiaoche/Tora'><img src='https://img.shields.io/badge/🤖_ModelScope-T2V/I2V_weights(SAT)-%23654dfc'></a>
+<a href='https://modelscope.cn/models/Alibaba_Research_Intelligence_Computing/Tora_T2V_diffusers'><img src='https://img.shields.io/badge/🤖_ModelScope-T2V_weights(diffusers)-%23654dfc'></a>
+<br>
 
+<a href='https://huggingface.co/Alibaba-Research-Intelligence-Computing/Tora'><img src='https://img.shields.io/badge/🤗_HuggingFace-T2V/I2V_weights(SAT)-%23ff9e0e'></a>
+<a href='https://huggingface.co/Alibaba-Research-Intelligence-Computing/Tora_T2V_diffusers'><img src='https://img.shields.io/badge/🤗_HuggingFace-T2V_weights(diffusers)-%23ff9e0e'></a>
 </div>
 
 ## Please visit our [Github repo](https://github.com/alibaba/Tora) for more details.

@@ -40,6 +45,8 @@ Recent advancements in Diffusion Transformer (DiT) have demonstrated remarkable
 
 ## 📣 Updates
 
+- `2025/01/06` 🔥🔥We released Tora Image-to-Video, including inference code and model weights.
+- `2024/12/13` SageAttention2 and model compilation are supported in diffusers version. Tested on the A10, these approaches speed up every inference step by approximately 52%, except for the first step.
 - `2024/12/09` 🔥🔥Diffusers version of Tora and the corresponding model weights are released. Inference VRAM requirements are reduced to around 5 GiB. Please refer to [this](diffusers-version/README.md) for details.
 - `2024/11/25` 🔥Text-to-Video training code released.
 - `2024/10/31` Model weights uploaded to [HuggingFace](https://huggingface.co/Le0jc/Tora). We also provided an English demo on [ModelScope](https://www.modelscope.cn/studios/Alibaba_Research_Intelligence_Computing/Tora_En).
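The `2024/12/09` and `2024/12/13` notes in the diff above describe standard diffusers-style optimizations: CPU offload to keep VRAM low, and model compilation that pays its cost on the first denoising step. The sketch below is illustrative only, not the official Tora inference script; the generic `DiffusionPipeline` loading call, the `.transformer` attribute, and the prompt are assumptions, and the documented procedure (including SageAttention2, which is omitted here) lives in diffusers-version/README.md of the GitHub repo.

```python
# Illustrative sketch only -- NOT the official Tora inference script.
# Assumptions: the diffusers-format weights load through the generic
# DiffusionPipeline entry point and the pipeline exposes a `.transformer`
# module; the prompt is a placeholder.
import torch
from diffusers import DiffusionPipeline

# Repo id taken from the weight badges added in this commit.
pipe = DiffusionPipeline.from_pretrained(
    "Alibaba-Research-Intelligence-Computing/Tora_T2V_diffusers",
    torch_dtype=torch.bfloat16,
)

# CPU offload keeps only the active submodule on the GPU; this is the usual
# diffusers mechanism behind low-VRAM figures such as the ~5 GiB noted above.
pipe.enable_model_cpu_offload()

# torch.compile optimizes the denoiser on its first call, so step 1 absorbs
# the compilation cost and later steps run faster -- consistent with the
# "every inference step ... except for the first step" note in the updates.
pipe.transformer = torch.compile(pipe.transformer)

result = pipe(prompt="A boat sailing along a winding river")  # placeholder prompt
```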