---
pipeline_tag: image-text-to-text
inference: false
license: apache-2.0
---

# LLaVA-Hound Model Card

## Model details

**Model type:**
LLaVA-Hound is an open-source video large multimodal model, fine-tuned on video instruction-following data on top of a large language model. This model is the **SFT** version, trained on an **image and video instruction dataset** starting from **ShareGPTVideo/LLaVA-Hound-Pretrain**.

Base LLM: [lmsys/vicuna-7b-v1.5](https://huggingface.co./lmsys/vicuna-7b-v1.5)

**Model date:**
Trained on March 15, 2024.

**Paper or resources for more information:**

Paper: https://huggingface.co./papers/2404.01258

Code: https://github.com/RifleZhang/LLaVA-Hound-DPO

## License

[lmsys/vicuna-7b-v1.5](https://huggingface.co./lmsys/vicuna-7b-v1.5) license.

**Where to send questions or comments about the model:**
https://github.com/RifleZhang/LLaVA-Hound-DPO/issues

## Intended use

**Primary intended uses:**
Video (and image) instruction following.

**Primary intended users:**
Researchers in artificial intelligence, large multimodal models, and related fields.

## Training dataset

ShareGPTVideo dataset.

## Evaluation

Follow https://github.com/RifleZhang/LLaVA-Hound-DPO/blob/main/README.md

## Paper

https://huggingface.co./papers/2404.01258

Citation:
```
@article{zhang2024direct,
  title={Direct Preference Optimization of Video Large Multimodal Models from Language Model Reward},
  author={Zhang, Ruohong and Gui, Liangke and Sun, Zhiqing and Feng, Yihao and Xu, Keyang and Zhang, Yuanhan and Fu, Di and Li, Chunyuan and Hauptmann, Alexander and Bisk, Yonatan and others},
  journal={arXiv preprint arXiv:2404.01258},
  year={2024}
}
```
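For reference, below is a minimal sketch of fetching the checkpoint from the Hugging Face Hub with `huggingface_hub`. The repo id `ShareGPTVideo/LLaVA-Hound-SFT` is an assumption (this card does not state its own repo id); inference itself is run through the LLaVA-Hound codebase linked above, not plain `transformers`.

```python
# Minimal sketch: download the SFT checkpoint locally.
# Assumption: the repo id below matches this model card; adjust if it differs.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="ShareGPTVideo/LLaVA-Hound-SFT")
print(f"Checkpoint downloaded to: {local_dir}")

# Video inference entry points are provided in the GitHub repository:
# https://github.com/RifleZhang/LLaVA-Hound-DPO
```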