---
license: apache-2.0
language:
- en
base_model:
- genmo/mochi-1-preview
pipeline_tag: text-to-video
tags:
- mochi
- t5
- gguf-comfy
- gguf-node
widget:
- text: >-
    a fox moving quickly in a beautiful winter scenery nature trees sunset
    tracking camera
  output:
    url: samples\ComfyUI_00001_.webp
- text: same prompt as 1st one <metadata inside>
  output:
    url: samples\ComfyUI_00002_.webp
---
## gguf quantized version of t5xxl encoder with mochi (test pack)
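the ~2.9GB size of the q4_0 encoder follows from the q4_0 block layout, which packs each group of 32 weights into 18 bytes (one fp16 scale plus 32 four-bit values); a rough back-of-envelope check in python (the ~9.8GB fp16 size of t5xxl used here is an approximation):

```python
# back-of-envelope: q4_0 packs each block of 32 weights into 18 bytes
# (one fp16 scale + 32 four-bit values), vs 64 bytes for the same
# 32 weights at fp16 -- roughly a 3.55x shrink
def q4_0_bytes(n_weights: int) -> int:
    return (n_weights // 32) * 18  # whole blocks only, remainder ignored

fp16_gb = 9.8                        # approx fp16 size of t5xxl (assumption)
n_weights = int(fp16_gb * 1e9 / 2)   # 2 bytes per fp16 weight
print(q4_0_bytes(n_weights) / 1e9)   # prints 2.75625 -- near the 2.9GB file
```

the small gap to the actual 2.9GB file is expected: some tensors (e.g. norms and embeddings) usually stay at higher precision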
### setup (once)
- drag mochi_fp8.safetensors (10GB) to > ./ComfyUI/models/diffusion_models
- drag t5xxl_fp16-q4_0.gguf (2.9GB) to > ./ComfyUI/models/text_encoders
- drag mochi_vae_scaled.safetensors (725MB) to > ./ComfyUI/models/vae
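the three drag-and-drop steps above can also be scripted; a minimal sketch, assuming the files were already downloaded next to the ComfyUI folder (adjust paths and names to your setup):

```python
# sketch: script the three drag-and-drop steps above
# (run from the directory that contains ComfyUI/; files you have not
# downloaded yet are simply skipped)
import os
import shutil

targets = {
    "mochi_fp8.safetensors": "ComfyUI/models/diffusion_models",
    "t5xxl_fp16-q4_0.gguf": "ComfyUI/models/text_encoders",
    "mochi_vae_scaled.safetensors": "ComfyUI/models/vae",
}
for name, folder in targets.items():
    os.makedirs(folder, exist_ok=True)   # create the model folders if missing
    if os.path.isfile(name):
        shutil.move(name, os.path.join(folder, name))
```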
### run it straight (no installation needed)
- run the .bat file in the main directory (assuming you are using the gguf-node pack below)
- drag the workflow json file (below) to > your browser
### workflow
- example workflow (with gguf encoder)
- example workflow (safetensors)
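besides dragging the json into the browser, ComfyUI also accepts workflows over its HTTP API (POST to /prompt, default port 8188); a minimal sketch, assuming the json is an API-format export (not the UI-format one) and ComfyUI is running locally:

```python
# sketch: queue a workflow via ComfyUI's HTTP API instead of the browser
# (host/port are the ComfyUI defaults; the json must be an API-format export)
import json
import urllib.request

def build_payload(prompt: dict) -> bytes:
    # ComfyUI expects {"prompt": {node_id: {...}, ...}} as the POST body
    return json.dumps({"prompt": prompt}).encode()

def queue_prompt(path: str, host: str = "http://127.0.0.1:8188") -> bytes:
    with open(path) as f:
        prompt = json.load(f)
    req = urllib.request.Request(
        f"{host}/prompt",
        data=build_payload(prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()  # json response containing the queued prompt_id
```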
### review
- t5xxl gguf works fine as text encoder
- mochi gguf file might not work; if so, please wait for the code update
### reference
- base model from genmo
- comfyui from comfyanonymous
- comfyui-gguf from city96
- gguf-comfy pack
- gguf-node (pypi|repo|pack)
### prompt test
prompt: "a fox moving quickly in a beautiful winter scenery nature trees sunset tracking camera"
![](https://huggingface.co./calcuis/mochi/resolve/main/samples%5CComfyUI_00001_.webp)
![](https://huggingface.co./calcuis/mochi/resolve/main/samples%5CComfyUI_00002_.webp)
prompt: "same prompt as 1st one <metadata inside>"
![](https://huggingface.co./calcuis/mochi/resolve/main/samples%5CComfyUI_00003_.webp)
prompt: "same prompt as 1st one; but with new workflow to bypass oom <metadata inside>"