kyujinpy committed
Commit e63b9dc • 1 Parent(s): 003b5fb

Upload README.md

Files changed (1)
  1. README.md +13 -10
README.md CHANGED
@@ -1,7 +1,7 @@
 ---
 license: creativeml-openrail-m
 base_model: Bingsu/my-korean-stable-diffusion-v1-5
-training_prompt: A rabbit is eating a watermelon on the table
+training_prompt: A man is surfing
 tags:
 - tune-a-video
 - text-to-video
@@ -11,20 +11,23 @@ inference: false
 ---

 # Tune-A-VideKO - Korean Stable Diffusion v1-5
-Github: [Kyujinpy/Tune-A-VideKO](https://github.com/KyujinHan/Tune-A-VideKO/tree/master)
+Github: [Kyujinpy/Tune-A-VideKO](https://github.com/KyujinHan/Tune-A-VideKO)

 ## Model Description
 - Base model: [Bingsu/my-korean-stable-diffusion-v1-5](https://huggingface.co/Bingsu/my-korean-stable-diffusion-v1-5)
-- Training prompt: A rabbit is eating a watermelon on the table
-![sample-train](sample/rabbit.gif)
+- Training prompt: A man is surfing
+![sample-train](sample/surfing.gif)

 ## Samples

-![sample-500](sample/video4.gif)
-Test prompt: 고양이가 해변에서 수박을 먹고 있습니다 (A cat is eating a watermelon at the beach)
+![sample-500](sample/video10.gif)
+Test prompt: 미키마우스가 서핑을 타고 있습니다 (Mickey Mouse is surfing)

-![sample-500](sample/video5.gif)
-Test prompt: 강아지가 오렌지를 먹고 있습니다 (A puppy is eating an orange)
+![sample-500](sample/video11.gif)
+Test prompt: 한 여자가 서핑을 타고 있습니다 (A woman is surfing)
+
+![sample-500](sample/video12.gif)
+Test prompt: 흰색 옷을 입은 남자가 바다를 걷고 있습니다 (A man in white clothes is walking on the sea)

 ## Usage
 Clone the github repo
@@ -46,8 +49,8 @@ unet = UNet3DConditionModel.from_pretrained(unet_model_path, subfolder='unet', t
 pipe = TuneAVideoPipeline.from_pretrained(pretrained_model_path, unet=unet, torch_dtype=torch.float16).to("cuda")
 pipe.enable_xformers_memory_efficient_attention()

-prompt = "강아지가 만화 스타일로 상자를 먹고 있습니다"  # "A puppy is eating a box, cartoon style"
-video = pipe(prompt, video_length=8, height=512, width=512, num_inference_steps=50, guidance_scale=12.5).videos
+prompt = "흰색 옷을 입은 남자가 바다를 걷고 있습니다"  # "A man in white clothes is walking on the sea"
+video = pipe(prompt, video_length=24, height=512, width=512, num_inference_steps=50, guidance_scale=12.5).videos

 save_videos_grid(video, f"./{prompt}.gif")
 ```
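For anyone trying the updated card, the hunk above shows only part of the usage snippet, so here is a self-contained sketch of the full flow it implies. This is a minimal, hedged example rather than the README's authoritative code: the `tuneavideo.*` import paths follow the upstream Tune-A-Video repository layout, and `unet_model_path` is an assumed checkpoint id for this model; neither appears verbatim in the diff.

```python
# Minimal sketch of the card's usage flow (see assumptions above).
import torch

from tuneavideo.models.unet import UNet3DConditionModel           # assumed upstream layout
from tuneavideo.pipelines.pipeline_tuneavideo import TuneAVideoPipeline
from tuneavideo.util import save_videos_grid

pretrained_model_path = "Bingsu/my-korean-stable-diffusion-v1-5"  # base model named in the card
unet_model_path = "kyujinpy/Tune-A-VideKO-v1-5"                   # assumed id of this fine-tuned UNet

# Load the fine-tuned 3D UNet in fp16, then build the video pipeline
# around the Korean Stable Diffusion base weights.
unet = UNet3DConditionModel.from_pretrained(
    unet_model_path, subfolder="unet", torch_dtype=torch.float16
).to("cuda")
pipe = TuneAVideoPipeline.from_pretrained(
    pretrained_model_path, unet=unet, torch_dtype=torch.float16
).to("cuda")
pipe.enable_xformers_memory_efficient_attention()  # optional; requires xformers

# "A man in white clothes is walking on the sea" -- the card's new example prompt.
prompt = "흰색 옷을 입은 남자가 바다를 걷고 있습니다"
video = pipe(
    prompt,
    video_length=24,          # number of frames, as in the updated snippet
    height=512,
    width=512,
    num_inference_steps=50,
    guidance_scale=12.5,
).videos

save_videos_grid(video, f"./{prompt}.gif")  # write the frames out as an animated GIF
```

Loading both the UNet and the pipeline in fp16 and enabling xformers attention is meant to keep 24-frame 512×512 generation within a single consumer-GPU memory budget.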