---
base_model:
- meta-llama/Meta-Llama-3-8B-Instruct
library_name: transformers
license: cc-by-nc-sa-4.0
pipeline_tag: image-text-to-text
---

# Model Card for Eagle-X2-Llama3-8B-ConsecutiveTableReadout-Mix-160k

This model follows the adapter-based VLM architecture of [LLaVA](https://github.com/haotian-liu/LLaVA) and [Eagle](https://github.com/NVlabs/EAGLE). It uses [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co./meta-llama/Meta-Llama-3-8B-Instruct) as the base LLM, and CLIP-448 (based on [CLIP-336](https://huggingface.co./openai/clip-vit-large-patch14-336)) and [ConvNeXt](https://github.com/facebookresearch/ConvNeXt) as the visual encoders.

## Training Details

We trained on the [595k pretraining dataset](https://huggingface.co./datasets/liuhaotian/LLaVA-CC3M-Pretrain-595K) and the [1.8M visual instruction tuning dataset](https://huggingface.co./datasets/shi-labs/Eagle-1.8M).

## Citation

Paper: [Generalizing from SIMPLE to HARD Visual Reasoning](https://arxiv.org/abs/2501.02669)

```
@misc{park2025generalizingsimplehardvisual,
      title={Generalizing from SIMPLE to HARD Visual Reasoning: Can We Mitigate Modality Imbalance in VLMs?},
      author={Simon Park and Abhishek Panigrahi and Yun Cheng and Dingli Yu and Anirudh Goyal and Sanjeev Arora},
      year={2025},
      eprint={2501.02669},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2501.02669},
}
```

## Contact

Simon Park, Princeton University
Abhishek Panigrahi, Princeton University
Yun Cheng, Princeton University

{juhyunp, ap34, yc6206} 'at' princeton 'dot' edu
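
## Usage

Below is a minimal loading sketch. It assumes the checkpoint is used through the Eagle codebase linked above, which follows LLaVA's model-builder interface; the module and function names (`eagle.model.builder.load_pretrained_model`, `eagle.mm_utils.get_model_name_from_path`) and the placeholder model path are assumptions based on that codebase and may differ between releases.

```python
# Sketch only: assumes this checkpoint is loaded through the Eagle codebase
# (https://github.com/NVlabs/EAGLE), which mirrors LLaVA's builder interface.
# Module/function names may differ between releases of that codebase.
from eagle.model.builder import load_pretrained_model
from eagle.mm_utils import get_model_name_from_path

# Hypothetical placeholder: a local clone or Hub ID of this repository.
model_path = "path/to/Eagle-X2-Llama3-8B-ConsecutiveTableReadout-Mix-160k"

# Returns the tokenizer, the adapter-based VLM (Llama-3-8B-Instruct base with
# CLIP-448 + ConvNeXt visual encoders), the image processor, and the context length.
tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path,
    model_base=None,  # full checkpoint; no separate base LLM to merge
    model_name=get_model_name_from_path(model_path),
)
```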