---
license: mit
---

## MLCD-ViT-bigG Model Card

### 🙌 **[LLaVA-NeXT](https://github.com/LLaVA-VL/LLaVA-NeXT) now supports MLCD-ViT-bigG-14-448px.**

MLCD-ViT-bigG is a state-of-the-art vision transformer enhanced with 2D Rotary Position Embedding (RoPE2D), achieving superior performance on document understanding and visual question answering tasks. Developed by DeepGlint AI, the model demonstrates exceptional capabilities in processing complex visual-language interactions.

To evaluate these vision foundation models, we adopted the official [LLaVA-NeXT](https://github.com/LLaVA-VL/LLaVA-NeXT) framework and its official training dataset, [LLaVA-NeXT-Data](https://huggingface.co./datasets/lmms-lab/LLaVA-NeXT-Data). The language model is Qwen2.5-7B.

| Vision Tower | RoPE2D | ChartQA | DocVQA | InfoVQA | OCRBench | MMMU |
| :--- | :---: | :--- | :--- | :--- | :--- | :--- |
| CLIP (ViT-L-14-336px) | × | 66.52 | 75.21 | 38.88 | 525.00 | 44.20 |
| SigLIP (ViT-SO400M-384px) | × | 69.28 | 76.71 | 41.38 | 554.00 | 46.78 |
| DFN5B (ViT-H-14-378px) | × | 64.36 | 70.87 | 38.59 | 473.00 | **48.00** |
| **[MLCD (ViT-L-14-336px)](https://huggingface.co./DeepGlint-AI/mlcd-vit-large-patch14-336)** | × | 67.84 | 76.46 | 43.48 | 531.00 | 44.30 |
| **[MLCD (ViT-bigG-14-336px)](https://huggingface.co./DeepGlint-AI/mlcd-vit-bigG-patch14-336)** | √ | 71.07 | 79.63 | 44.38 | 572.00 | 46.78 |
| **[MLCD (ViT-bigG-14-448px)](https://huggingface.co./DeepGlint-AI/mlcd-vit-bigG-patch14-448)** | √ | **73.80** | **83.34** | **46.59** | **582.00** | 46.00 |

## Installation

```shell
pip install torch transformers
git clone https://github.com/deepglint/unicom
cd unicom/mlcd
```

## Usage

```python
from vit_rope2d_hf import MLCDVisionModel
from transformers import CLIPImageProcessor
from PIL import Image
import requests
import torch

# Load model and processor
model = MLCDVisionModel.from_pretrained("DeepGlint-AI/mlcd-vit-bigG-patch14-448")
processor = CLIPImageProcessor.from_pretrained("DeepGlint-AI/mlcd-vit-bigG-patch14-448")

# Process a single image
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")

# Get visual features
with torch.no_grad():
    outputs = model(**inputs)
features = outputs.last_hidden_state

print(f"Extracted features shape: {features.shape}")
```

## Citation

```latex
@inproceedings{anxiang_2024_mlcd,
  title={Multi-label Cluster Discrimination for Visual Representation Learning},
  author={An, Xiang and Yang, Kaicheng and Dai, Xiangzi and Feng, Ziyong and Deng, Jiankang},
  booktitle={ECCV},
  year={2024}
}
```
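As a follow-up to the Usage example above, the sketch below shows one possible way to reduce `outputs.last_hidden_state` to a single image embedding, e.g. for retrieval or linear probing. It is only an illustration, not part of the official API: it assumes a CLIP-style token layout (one class token followed by the patch tokens), and the tensor shape used as a stand-in (1 class token + 32×32 patches, hidden size 1664 for ViT-bigG-14 at 448px) should be checked against the shape printed in the Usage example.

```python
import torch
import torch.nn.functional as F

# Stand-in for `outputs.last_hidden_state` from the Usage example above.
# Assumed shape for ViT-bigG-14 at 448px: (batch, 1 + 32*32, 1664);
# verify against the `features.shape` printed in the Usage example.
features = torch.randn(1, 1 + 32 * 32, 1664)

# Option 1: take the class token as the image embedding
# (assumes token 0 is the class token, as in CLIP-style ViTs).
cls_embedding = features[:, 0]

# Option 2: mean-pool the patch tokens instead.
patch_embedding = features[:, 1:].mean(dim=1)

# L2-normalize before cosine-similarity retrieval or linear probing.
image_embedding = F.normalize(cls_embedding, dim=-1)
print(image_embedding.shape)  # torch.Size([1, 1664])
```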