# MLCD-ViT-bigG Model Card

🙌 LLaVA-NeXT now supports MLCD-ViT-bigG-14-448px.

MLCD-ViT-bigG is a state-of-the-art vision transformer enhanced with 2D Rotary Position Embedding (RoPE2D), achieving superior performance on document understanding and visual question answering tasks. Developed by DeepGlint AI, the model excels at processing complex vision-language interactions.
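
For intuition, RoPE2D extends standard rotary position embedding to the 2D patch grid: half of each attention head's channels are rotated by angles derived from the patch's row index and the other half by its column index, so relative 2D offsets are encoded directly in the attention dot products. Below is a minimal, generic sketch of that idea; the function name, the half/half axis split, and the base frequency of 10000 are illustrative assumptions, not the exact MLCD implementation:

```python
# Illustrative sketch of 2D rotary position embedding (RoPE2D) for a grid of
# ViT patches. Generic formulation for intuition only, not MLCD's exact code.
import torch

def rope_2d(q: torch.Tensor, grid_h: int, grid_w: int) -> torch.Tensor:
    """Rotate per-patch features by their 2D coordinates.

    q: (num_patches, head_dim) with num_patches == grid_h * grid_w
       and head_dim divisible by 4 (half the dim per axis, pairs per rotation).
    """
    head_dim = q.shape[-1]
    dim_per_axis = head_dim // 2  # assumption: first half encodes y, second half x

    # Integer (y, x) coordinate for every patch in row-major order
    ys = torch.arange(grid_h).repeat_interleave(grid_w).float()  # (P,)
    xs = torch.arange(grid_w).repeat(grid_h).float()             # (P,)

    # Standard RoPE frequency schedule for one axis (base 10000 assumed)
    freqs = 1.0 / (10000 ** (torch.arange(0, dim_per_axis, 2).float() / dim_per_axis))

    def rotate(x_half, pos):
        # x_half: (P, dim_per_axis); rotate consecutive channel pairs by pos * freq
        angles = pos[:, None] * freqs[None, :]  # (P, dim_per_axis / 2)
        cos, sin = angles.cos(), angles.sin()
        x1, x2 = x_half[..., 0::2], x_half[..., 1::2]
        out = torch.empty_like(x_half)
        out[..., 0::2] = x1 * cos - x2 * sin
        out[..., 1::2] = x1 * sin + x2 * cos
        return out

    y_half, x_half = q[..., :dim_per_axis], q[..., dim_per_axis:]
    return torch.cat([rotate(y_half, ys), rotate(x_half, xs)], dim=-1)

if __name__ == "__main__":
    q = torch.randn(32 * 32, 64)     # 32x32 patch grid, head_dim = 64
    print(rope_2d(q, 32, 32).shape)  # torch.Size([1024, 64])
```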

We adopted the official LLaVA-NeXT codebase and its official training dataset, LLaVA-NeXT-Data, to evaluate the vision foundation models.
The language model is Qwen2.5-7B.

| Vision Tower | RoPE2D | ChartQA | DocVQA | InfoVQA | OCRBench | MMMU |
|:-------------|:------:|:-------:|:------:|:-------:|:--------:|:----:|
| CLIP (ViT-L-14-336px)     | × | 66.52 | 75.21 | 38.88 | 525.00 | 44.20 |
| SigLIP (ViT-SO400M-384px) | × | 69.28 | 76.71 | 41.38 | 554.00 | 46.78 |
| DFN5B (ViT-H-14-378px)    | × | 64.36 | 70.87 | 38.59 | 473.00 | 48.00 |
| MLCD (ViT-L-14-336px)     | × | 67.84 | 76.46 | 43.48 | 531.00 | 44.30 |
| MLCD (ViT-bigG-14-336px)  | √ | 71.07 | 79.63 | 44.38 | 572.00 | 46.78 |
| MLCD (ViT-bigG-14-448px)  | √ | 73.80 | 83.34 | 46.59 | 582.00 | 46.00 |

## Installation

```bash
pip install torch transformers
git clone https://github.com/deepglint/unicom
cd unicom/mlcd
```

## Usage

```python
import torch
import requests
from PIL import Image
from transformers import CLIPImageProcessor

# MLCDVisionModel is provided by the unicom/mlcd repository cloned above
from vit_rope2d_hf import MLCDVisionModel

# Load model and processor
model = MLCDVisionModel.from_pretrained("DeepGlint-AI/mlcd-vit-bigG-patch14-448")
processor = CLIPImageProcessor.from_pretrained("DeepGlint-AI/mlcd-vit-bigG-patch14-448")

# Process a single image
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")

# Get visual features
with torch.no_grad():
    outputs = model(**inputs)
features = outputs.last_hidden_state

print(f"Extracted features shape: {features.shape}")
```

## Citation

```bibtex
@inproceedings{anxiang_2024_mlcd,
  title={Multi-label Cluster Discrimination for Visual Representation Learning},
  author={An, Xiang and Yang, Kaicheng and Dai, Xiangzi and Feng, Ziyong and Deng, Jiankang},
  booktitle={ECCV},
  year={2024}
}
```