---
license: mit
library_name: transformers
pipeline_tag: image-text-to-text
---

![header](./assets/header.png)

📃 Paper • 🌐 Demo • 📃 LongLLaVA

![efficiency](./assets/singleGPU.png)

## 🌈 Update

* **[2024.09.05]** LongLLaVA repo is published! 🎉

## Architecture
<details>
<summary>Click to view the architecture image</summary>

![Architecture Image](./assets/arch.png)

</details>
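The hybrid architecture is motivated by the cost of feeding many images into a multimodal LLM: each image consumes a fixed budget of visual tokens, so the total grows linearly with the number of images. The sketch below illustrates that arithmetic; the 144-tokens-per-image figure and the 200k-token context length are illustrative assumptions, not numbers taken from the paper.

```python
# Back-of-the-envelope visual-token budget for multi-image inputs.
# NOTE: 144 tokens per image is an ASSUMED figure for illustration only
# (LLaVA-style models typically compress each image to a few hundred tokens);
# the actual per-image token count for LongLLaVA may differ.
TOKENS_PER_IMAGE = 144


def visual_token_count(num_images: int, tokens_per_image: int = TOKENS_PER_IMAGE) -> int:
    """Total visual tokens consumed by `num_images` images."""
    return num_images * tokens_per_image


def max_images(context_length: int, tokens_per_image: int = TOKENS_PER_IMAGE) -> int:
    """How many images fit in a given context window (ignoring text tokens)."""
    return context_length // tokens_per_image


if __name__ == "__main__":
    # Tokens needed to encode 1000 images at the assumed rate.
    print(visual_token_count(1000))
    # Images that fit in an assumed 200k-token context window.
    print(max_images(200_000))
```

At the assumed rate, 1000 images alone occupy 144,000 tokens, which is why per-image compression and an efficient long-context backbone are both needed to reach the 1000-image regime on a single GPU.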
## Results
<details>
<summary>Click to view the Results</summary>

- Main Results
  ![Main Results](./assets/result1.png)
- Diagnostic Results
  ![Diagnostic Results](./assets/diaresult.png)
- Video-NIAH
  ![Video-NIAH](./assets/NIAH.png)

</details>
## Evaluation and demo

> Coming Soon~

## To do

- [ ] Release inference code

## Citation

```
@misc{wang2024longllavascalingmultimodalllms,
      title={LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture},
      author={Xidong Wang and Dingjie Song and Shunian Chen and Chen Zhang and Benyou Wang},
      year={2024},
      eprint={2409.02889},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2409.02889},
}
```