# HunyuanVideo XiangLing t2v LoRA

A HunyuanVideo text-to-video (t2v) LoRA tuned on
https://huggingface.co./datasets/svjack/Genshin-Impact-XiangLing-animatediff-with-score-organized-Low-Resolution
(early training-step checkpoint).
## Installation

### Prerequisites

Before you begin, ensure you have the following installed:

- `git-lfs`
- `cbm`
- `ffmpeg`

You can install these prerequisites using the following command:

```bash
sudo apt-get update && sudo apt-get install git-lfs cbm ffmpeg
```
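As a quick sanity check before continuing, you can confirm the tools are on your `PATH` from Python (`check_tools` is a hypothetical helper, not part of any package used below):

```python
import shutil

def check_tools(tools):
    """Map each tool name to whether its binary is found on PATH."""
    return {tool: shutil.which(tool) is not None for tool in tools}

if __name__ == "__main__":
    for tool, found in check_tools(["git-lfs", "ffmpeg", "cbm"]).items():
        print(f"{tool}: {'ok' if found else 'MISSING'}")
```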
### Installation Steps

1. Install `comfy-cli`:

```bash
pip install comfy-cli
```

2. Initialize ComfyUI:

```bash
comfy --here install
```

3. Clone and install ComfyScript:

```bash
cd ComfyUI/custom_nodes
git clone https://github.com/Chaoses-Ib/ComfyScript.git
cd ComfyScript
pip install -e ".[default,cli]"
pip uninstall aiohttp
pip install -U aiohttp
```

4. Clone and install ComfyUI-HunyuanVideoWrapper:

```bash
cd ../
git clone https://github.com/svjack/ComfyUI-HunyuanVideoWrapper
cd ComfyUI-HunyuanVideoWrapper
pip install -r requirements.txt
```

5. Load the ComfyScript runtime (in Python):

```python
from comfy_script.runtime import *
load()
from comfy_script.runtime.nodes import *
```

6. Install example dependencies:

```bash
cd examples
comfy node install-deps --workflow='hunyuanvideo lora Walking Animation Share.json'
```

7. Update ComfyUI dependencies:

```bash
cd ../../ComfyUI
pip install --upgrade torch torchvision torchaudio -r requirements.txt
```

8. Transpile the example workflow:

```bash
python -m comfy_script.transpile hyvideo_t2v_example_01.json
```

9. Download and place the model files.

Download the required model files from Hugging Face:

```bash
huggingface-cli download Kijai/HunyuanVideo_comfy --local-dir ./HunyuanVideo_comfy
```

Copy the downloaded files to the appropriate directories:

```bash
cp -r HunyuanVideo_comfy/ .
cp HunyuanVideo_comfy/hunyuan_video_720_cfgdistill_fp8_e4m3fn.safetensors ComfyUI/models/diffusion_models
cp HunyuanVideo_comfy/hunyuan_video_vae_bf16.safetensors ComfyUI/models/vae
```
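Before running any workflow it can help to confirm that the copies above landed where ComfyUI expects them. The sketch below mirrors the `cp` destinations; `missing_models` is a hypothetical helper, not part of the wrapper:

```python
from pathlib import Path

# Files the workflows below load, keyed by the ComfyUI/models subdirectory
# they must live in (matching the cp commands above).
REQUIRED_MODELS = {
    "diffusion_models": "hunyuan_video_720_cfgdistill_fp8_e4m3fn.safetensors",
    "vae": "hunyuan_video_vae_bf16.safetensors",
}

def missing_models(comfy_root="ComfyUI"):
    """Return the expected model paths under <comfy_root>/models that are absent."""
    root = Path(comfy_root) / "models"
    return [root / sub / name
            for sub, name in REQUIRED_MODELS.items()
            if not (root / sub / name).is_file()]

if __name__ == "__main__":
    for path in missing_models():
        print(f"missing: {path}")
```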
## Genshin Impact Character XiangLing LoRA Example (early tuned version)

- Download the XiangLing LoRA model:

Download the XiangLing LoRA model from Hugging Face:
`xiangling_ep2_lora.safetensors`

Copy the model to the `loras` directory:

```bash
cp xiangling_ep2_lora.safetensors ComfyUI/models/loras
```

- Run the workflow:

Create a Python script `run_t2v_xiangling_lora.py`:
#### cook rice

```python
from comfy_script.runtime import *
load()
from comfy_script.runtime.nodes import *

with Workflow():
    vae = HyVideoVAELoader(r'hunyuan_video_vae_bf16.safetensors', 'bf16', None)
    lora = HyVideoLoraSelect('xiangling_ep2_lora.safetensors', 2.0, None, None)
    model = HyVideoModelLoader(r'hunyuan_video_720_cfgdistill_fp8_e4m3fn.safetensors', 'bf16', 'fp8_e4m3fn', 'offload_device', 'sdpa', None, None, lora)
    hyvid_text_encoder = DownloadAndLoadHyVideoTextEncoder('Kijai/llava-llama-3-8b-text-encoder-tokenizer', 'openai/clip-vit-large-patch14', 'fp16', False, 2, 'disabled')
    hyvid_embeds = HyVideoTextEncode(hyvid_text_encoder, "solo,Xiangling, cook rice in a pot, (genshin impact) ,1girl,highres, dynamic", 'bad quality video', 'video', None, None, None)
    samples = HyVideoSampler(model, hyvid_embeds, 478, 512, 49, 25, 8, 9, 42, 1, None, 1, None)
    images = HyVideoDecode(vae, samples, True, 64, 256, True)
    #_ = VHSVideoCombine(images, 24, 0, 'HunyuanVideo', 'video/h264-mp4', False, True, None, None, None)
    _ = VHSVideoCombine(images, 24, 0, 'HunyuanVideo', 'video/h264-mp4', False, True, None, None, None,
                        pix_fmt='yuv420p', crf=19, save_metadata=True, trim_to_audio=False)
```

Run the script:

```bash
python run_t2v_xiangling_lora.py
```

- prompt = "solo,Xiangling, cook rice in a pot, (genshin impact) ,1girl,highres, dynamic"
#### drink water

```python
from comfy_script.runtime import *
load()
from comfy_script.runtime.nodes import *

with Workflow():
    vae = HyVideoVAELoader(r'hunyuan_video_vae_bf16.safetensors', 'bf16', None)
    lora = HyVideoLoraSelect('xiangling_ep2_lora.safetensors', 2.0, None, None)
    model = HyVideoModelLoader(r'hunyuan_video_720_cfgdistill_fp8_e4m3fn.safetensors', 'bf16', 'fp8_e4m3fn', 'offload_device', 'sdpa', None, None, lora)
    hyvid_text_encoder = DownloadAndLoadHyVideoTextEncoder('Kijai/llava-llama-3-8b-text-encoder-tokenizer', 'openai/clip-vit-large-patch14', 'fp16', False, 2, 'disabled')
    hyvid_embeds = HyVideoTextEncode(hyvid_text_encoder,
                                     "solo,Xiangling, drink water, (genshin impact) ,1girl,highres, dynamic",
                                     'bad quality video', 'video', None, None, None)
    samples = HyVideoSampler(model, hyvid_embeds, 512, 512, 49, 30, 8, 9, 42, 1, None, 1, None)
    images = HyVideoDecode(vae, samples, True, 64, 256, True)
    #_ = VHSVideoCombine(images, 24, 0, 'HunyuanVideo', 'video/h264-mp4', False, True, None, None, None)
    _ = VHSVideoCombine(images, 24, 0, 'HunyuanVideo', 'video/h264-mp4', False, True, None, None, None,
                        pix_fmt='yuv420p', crf=19, save_metadata=True, trim_to_audio=False)
```

Run the script:

```bash
python run_t2v_xiangling_lora.py
```

- prompt = "solo,Xiangling, drink water, (genshin impact) ,1girl,highres, dynamic"
#### eat bread

```python
from comfy_script.runtime import *
load()
from comfy_script.runtime.nodes import *

with Workflow():
    vae = HyVideoVAELoader(r'hunyuan_video_vae_bf16.safetensors', 'bf16', None)
    lora = HyVideoLoraSelect('xiangling_ep2_lora.safetensors', 2.0, None, None)
    model = HyVideoModelLoader(r'hunyuan_video_720_cfgdistill_fp8_e4m3fn.safetensors', 'bf16', 'fp8_e4m3fn', 'offload_device', 'sdpa', None, None, lora)
    hyvid_text_encoder = DownloadAndLoadHyVideoTextEncoder('Kijai/llava-llama-3-8b-text-encoder-tokenizer', 'openai/clip-vit-large-patch14', 'fp16', False, 2, 'disabled')
    hyvid_embeds = HyVideoTextEncode(hyvid_text_encoder,
                                     "solo,Xiangling, eat bread, (genshin impact) ,1girl,highres, dynamic",
                                     'bad quality video', 'video', None, None, None)
    samples = HyVideoSampler(model, hyvid_embeds, 512, 512, 49, 30, 10, 20, 42, 1, None, 1, None)
    images = HyVideoDecode(vae, samples, True, 64, 256, True)
    #_ = VHSVideoCombine(images, 24, 0, 'HunyuanVideo', 'video/h264-mp4', False, True, None, None, None)
    _ = VHSVideoCombine(images, 24, 0, 'HunyuanVideo', 'video/h264-mp4', False, True, None, None, None,
                        pix_fmt='yuv420p', crf=19, save_metadata=True, trim_to_audio=False)
```

Run the script:

```bash
python run_t2v_xiangling_lora.py
```

- prompt = "solo,Xiangling, eat bread, (genshin impact) ,1girl,highres, dynamic"
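The three example prompts differ only in the action phrase, so batching more actions only requires swapping that phrase. A small sketch (`xiangling_prompt` is a hypothetical helper name; the template is copied verbatim from the prompts above):

```python
def xiangling_prompt(action: str) -> str:
    """Build a prompt in the exact format used by the examples above."""
    return f"solo,Xiangling, {action}, (genshin impact) ,1girl,highres, dynamic"

for action in ["cook rice in a pot", "drink water", "eat bread"]:
    print(xiangling_prompt(action))
```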