Adapting Multimodal Large Language Models to Domains via Post-Training

This repo contains the food MLLM developed from Llama-3.2-11B-Vision-Instruct in our paper: On Domain-Specific Post-Training for Multimodal Large Language Models.

The main project page is: Adapt-MLLM-to-Domains

We investigate domain adaptation of MLLMs through post-training, focusing on data synthesis, training pipelines, and task evaluation.

(1) Data Synthesis: Using open-source models, we develop a visual instruction synthesizer that effectively generates diverse visual instruction tasks from domain-specific image-caption pairs. Our synthetic tasks surpass those generated by manual rules, GPT-4, and GPT-4V in enhancing the domain-specific performance of MLLMs.

(2) Training Pipeline: While two-stage training (first on image-caption pairs, then on visual instruction tasks) is commonly adopted for developing general MLLMs, we apply a single-stage training pipeline to enhance task diversity for domain-specific post-training.

(3) Task Evaluation: We conduct experiments in two domains, biomedicine and food, by post-training MLLMs of different sources and scales (e.g., Qwen2-VL-2B, LLaVA-v1.6-8B, Llama-3.2-11B) and then evaluating their performance on various domain-specific tasks.

Resources

🤗 We share our data and models with example usages; feel free to open issues or discussions! 🤗

| Model | Repo ID in HF 🤗 | Domain | Base Model | Training Data | Evaluation Benchmark |
| --- | --- | --- | --- | --- | --- |
| Visual Instruction Synthesizer | AdaptLLM/visual-instruction-synthesizer | - | open-llava-next-llama3-8b | VisionFLAN and ALLaVA | - |
| AdaMLLM-med-2B | AdaptLLM/biomed-Qwen2-VL-2B-Instruct | Biomedicine | Qwen2-VL-2B-Instruct | biomed-visual-instructions | biomed-VQA-benchmark |
| AdaMLLM-food-2B | AdaptLLM/food-Qwen2-VL-2B-Instruct | Food | Qwen2-VL-2B-Instruct | food-visual-instructions | food-VQA-benchmark |
| AdaMLLM-med-8B | AdaptLLM/biomed-LLaVA-NeXT-Llama3-8B | Biomedicine | open-llava-next-llama3-8b | biomed-visual-instructions | biomed-VQA-benchmark |
| AdaMLLM-food-8B | AdaptLLM/food-LLaVA-NeXT-Llama3-8B | Food | open-llava-next-llama3-8b | food-visual-instructions | food-VQA-benchmark |
| AdaMLLM-med-11B | AdaptLLM/biomed-Llama-3.2-11B-Vision-Instruct | Biomedicine | Llama-3.2-11B-Vision-Instruct | biomed-visual-instructions | biomed-VQA-benchmark |
| AdaMLLM-food-11B | AdaptLLM/food-Llama-3.2-11B-Vision-Instruct | Food | Llama-3.2-11B-Vision-Instruct | food-visual-instructions | food-VQA-benchmark |

Code: https://github.com/bigai-ai/QA-Synthesizer
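
To fetch any of the models or datasets above for local use, here is a minimal sketch with huggingface_hub. The repo id below is this model's; swap in any repo id from the table. The dataset repo id in the commented line is an assumption (we assume the datasets also live under the AdaptLLM org), so check the table links for the exact ids.

from huggingface_hub import snapshot_download

# Download this model's weights into the local Hugging Face cache and print the path.
local_dir = snapshot_download(repo_id="AdaptLLM/food-Llama-3.2-11B-Vision-Instruct")
print(local_dir)

# For a dataset repo (assumed id shown), pass repo_type="dataset":
# data_dir = snapshot_download(repo_id="AdaptLLM/food-visual-instructions", repo_type="dataset")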

1. To Chat with AdaMLLM

Our model architecture aligns with that of the base model, Llama-3.2-11B-Vision-Instruct. We provide a usage example below; refer to the official Llama-3.2-11B-Vision-Instruct repository for more advanced usage instructions.

Note: For AdaMLLM, always place the image at the beginning of the input instruction in the messages.


Starting with transformers >= 4.45.0, you can run inference using conversational messages that may include an image you can query about.

Make sure to update your transformers installation via pip install --upgrade transformers.

import requests
import torch
from PIL import Image
from transformers import MllamaForConditionalGeneration, AutoProcessor

model_id = "AdaptLLM/food-Llama-3.2-11B-Vision-Instruct"

model = MllamaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_id)

url = "https://huggingface.co./datasets/huggingface/documentation-images/resolve/0052a70beed5bf71b92610a43a52df6d286cd5f3/diffusers/rabbit.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# NOTE: For AdaMLLM, always place the image at the beginning of the input instruction in the messages.  
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "If I had to write a haiku for this one, it would be: "}
    ]}
]
input_text = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(
    image,
    input_text,
    add_special_tokens=False,
    return_tensors="pt"
).to(model.device)

output = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(output[0]))
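
To print only the model's reply without the echoed prompt, you can slice off the prompt tokens before decoding. A minimal sketch reusing the inputs and output variables from the example above:

# Keep only the tokens generated after the prompt, and drop special tokens when decoding.
prompt_len = inputs["input_ids"].shape[-1]
print(processor.decode(output[0][prompt_len:], skip_special_tokens=True))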

2. To Evaluate Any MLLM on Domain-Specific Benchmarks

Refer to the food-VQA-benchmark to reproduce our results and to evaluate other MLLMs on domain-specific benchmarks.
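
As a rough illustration (not the official evaluation script), the sketch below iterates over such a benchmark with the model and processor loaded in the chat example above. It assumes the benchmark is hosted as a Hugging Face dataset named AdaptLLM/food-VQA-benchmark with image, question, and answer fields; the repo id, split, and field names are assumptions, so check the benchmark page for the actual layout and metrics.

from datasets import load_dataset

# Hypothetical dataset id, split, and field names; consult the food-VQA-benchmark page for the real ones.
dataset = load_dataset("AdaptLLM/food-VQA-benchmark", split="test")

predictions = []
for example in dataset:
    # Keep the image first in the message, as noted in the chat example.
    messages = [
        {"role": "user", "content": [
            {"type": "image"},
            {"type": "text", "text": example["question"]},
        ]}
    ]
    prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
    inputs = processor(example["image"], prompt, add_special_tokens=False, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=64)
    # Decode only the newly generated tokens as the predicted answer.
    answer = processor.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
    predictions.append({"prediction": answer, "reference": example["answer"]})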

Citation

If you find our work helpful, please cite us.

AdaMLLM

@article{adamllm,
  title={On Domain-Specific Post-Training for Multimodal Large Language Models},
  author={Cheng, Daixuan and Huang, Shaohan and Zhu, Ziyu and Zhang, Xintong and Zhao, Wayne Xin and Luan, Zhongzhi and Dai, Bo and Zhang, Zhenliang},
  journal={arXiv preprint arXiv:2411.19930},
  year={2024}
}

Instruction Pre-Training (EMNLP 2024)

@article{cheng2024instruction,
  title={Instruction Pre-Training: Language Models are Supervised Multitask Learners},
  author={Cheng, Daixuan and Gu, Yuxian and Huang, Shaohan and Bi, Junyu and Huang, Minlie and Wei, Furu},
  journal={arXiv preprint arXiv:2406.14491},
  year={2024}
}

Adapt LLM to Domains (ICLR 2024)

@inproceedings{cheng2024adapting,
  title={Adapting Large Language Models via Reading Comprehension},
  author={Daixuan Cheng and Shaohan Huang and Furu Wei},
  booktitle={The Twelfth International Conference on Learning Representations},
  year={2024},
  url={https://openreview.net/forum?id=y886UXPEZ0}
}