# Pangea-7B Model Card
Pangea: A Fully Open Multilingual Multimodal LLM for 39 Languages
Homepage | Pangea-7B | PangeaIns | PangeaBench | Github | Arxiv | PDF | Demo
## Model details
- Model: Pangea is a fully open-source Multilingual Multimodal Multicultural LLM.
- Date: Pangea-7B was trained in 2024.
- Training Dataset: PangeaIns, a 6M-sample multilingual multimodal instruction dataset.
- Architecture: Pangea-7B follows the architecture of LLaVA-NeXT, with a Qwen2-7B-Instruct backbone.
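As a rough orientation only (not the released implementation), a LLaVA-NeXT-style model encodes the image, projects the patch features into the language model's embedding space with a small MLP connector, and splices them into the token sequence that the Qwen2 backbone consumes. The toy sketch below illustrates just that connector; the class name and dimensions are hypothetical.

```python
import torch
import torch.nn as nn

class ToyLlavaNextStyleConnector(nn.Module):
    """Hypothetical toy, for illustration only: the projector that bridges the
    vision encoder and the Qwen2 language model in a LLaVA-NeXT-style design."""

    def __init__(self, vision_dim: int = 1152, lm_hidden_dim: int = 3584):
        super().__init__()
        # Two-layer MLP mapping vision features into the LM embedding space
        # (3584 is Qwen2-7B's hidden size; vision_dim depends on the image encoder).
        self.projector = nn.Sequential(
            nn.Linear(vision_dim, lm_hidden_dim),
            nn.GELU(),
            nn.Linear(lm_hidden_dim, lm_hidden_dim),
        )

    def forward(self, text_embeds: torch.Tensor, image_features: torch.Tensor) -> torch.Tensor:
        # Project image patch features and splice them into the token-embedding
        # sequence (prepended here for simplicity; the real implementation inserts
        # them at the <image> placeholder position).
        image_embeds = self.projector(image_features)
        return torch.cat([image_embeds, text_embeds], dim=1)

# Shape check with dummy tensors.
connector = ToyLlavaNextStyleConnector()
text_embeds = torch.randn(1, 16, 3584)      # (batch, text_tokens, lm_hidden)
image_features = torch.randn(1, 576, 1152)  # (batch, image_patches, vision_dim)
print(connector(text_embeds, image_features).shape)  # torch.Size([1, 592, 3584])
```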
## Uses
Pangea-7B follows the architecture of LLaVA-NeXT.
You could either (1) follow the same model loading procedure as LLaVA-NeXT (an example of loading Pangea-7B directly is shown in the Python code below), or (2) use our HF version of the model: [Pangea-7B-hf](https://huggingface.co./neulab/Pangea-7B-hf).
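If you go with option (2), the sketch below shows one way to load the HF-format checkpoint. It assumes Pangea-7B-hf exposes the standard transformers LLaVA-NeXT classes (`LlavaNextProcessor`, `LlavaNextForConditionalGeneration`) and the Qwen2 chat format used throughout this card, so treat it as a starting point and check the Pangea-7B-hf model card for the exact usage.

```python
# Minimal sketch for option (2), assuming neulab/Pangea-7B-hf follows the standard
# transformers LLaVA-NeXT interface (see the Pangea-7B-hf card for exact usage).
import torch
from PIL import Image
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration

processor = LlavaNextProcessor.from_pretrained("neulab/Pangea-7B-hf")
model = LlavaNextForConditionalGeneration.from_pretrained(
    "neulab/Pangea-7B-hf", torch_dtype=torch.float16
).to("cuda")

# Qwen2-style chat prompt with an <image> placeholder (assumed template).
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\n<image>\nWhat did you see in the image?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
image = Image.open("image.png")
inputs = processor(images=image, text=prompt, return_tensors="pt").to("cuda", torch.float16)
output = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(output[0], skip_special_tokens=True))
```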
### Direct Use
First, clone and install LLaVA-NeXT:

```bash
git clone https://github.com/LLaVA-VL/LLaVA-NeXT
cd LLaVA-NeXT
pip install -e ".[train]"
```
Then, you could load Pangea-7B using the following code:
```python
from llava.model.builder import load_pretrained_model

model_path = 'neulab/Pangea-7B'
model_name = 'Pangea-7B-qwen'
args = {"multimodal": True}
tokenizer, model, image_processor, context_len = load_pretrained_model(model_path, None, model_name, **args)
```
Defining some helper functions for using the model:
```python
import re
from typing import Dict

import torch
import transformers
from PIL import Image

from llava.constants import (
    IGNORE_INDEX,
    IMAGE_TOKEN_INDEX,
    DEFAULT_IMAGE_TOKEN,
    DEFAULT_IM_START_TOKEN,
    DEFAULT_IM_END_TOKEN,
)
from llava.utils import disable_torch_init


def preprocess_qwen(sources, tokenizer: transformers.PreTrainedTokenizer, has_image: bool = False, max_len=2048, system_message: str = "You are a helpful assistant.") -> Dict:
    # Build Qwen-style chat token ids; each <image> placeholder becomes IMAGE_TOKEN_INDEX.
    roles = {"human": "<|im_start|>user", "gpt": "<|im_start|>assistant"}
    im_start, im_end = tokenizer.additional_special_tokens_ids
    nl_tokens = tokenizer("\n").input_ids
    _system = tokenizer("system").input_ids + nl_tokens
    _user = tokenizer("user").input_ids + nl_tokens
    _assistant = tokenizer("assistant").input_ids + nl_tokens
    input_ids = []
    source = sources
    if roles[source[0]["from"]] != roles["human"]:
        source = source[1:]
    input_id, target = [], []
    system = [im_start] + _system + tokenizer(system_message).input_ids + [im_end] + nl_tokens
    input_id += system
    target += [im_start] + [IGNORE_INDEX] * (len(system) - 3) + [im_end] + nl_tokens
    assert len(input_id) == len(target)
    for j, sentence in enumerate(source):
        role = roles[sentence["from"]]
        if has_image and sentence["value"] is not None and "<image>" in sentence["value"]:
            num_image = len(re.findall(DEFAULT_IMAGE_TOKEN, sentence["value"]))
            texts = sentence["value"].split("<image>")
            _input_id = tokenizer(role).input_ids + nl_tokens
            for i, text in enumerate(texts):
                _input_id += tokenizer(text).input_ids
                if i < len(texts) - 1:
                    _input_id += [IMAGE_TOKEN_INDEX] + nl_tokens
            _input_id += [im_end] + nl_tokens
            assert sum([i == IMAGE_TOKEN_INDEX for i in _input_id]) == num_image
        else:
            if sentence["value"] is None:
                _input_id = tokenizer(role).input_ids + nl_tokens
            else:
                _input_id = tokenizer(role).input_ids + nl_tokens + tokenizer(sentence["value"]).input_ids + [im_end] + nl_tokens
        input_id += _input_id
    input_ids.append(input_id)
    return torch.tensor(input_ids, dtype=torch.long)
```
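As a quick illustration, `preprocess_qwen` turns a human/gpt message list into a single batch of Qwen-style chat token ids, with `IMAGE_TOKEN_INDEX` standing in for each `<image>` placeholder:

```python
# Illustrative check: build token ids for one image-grounded question.
example = [
    {"from": "human", "value": "<image>\nDescribe the image."},
    {"from": "gpt", "value": None},  # None leaves the assistant turn open for generation
]
ids = preprocess_qwen(example, tokenizer, has_image=True)
print(ids.shape)  # torch.Size([1, sequence_length])
```

The `generate_output` helper below combines this preprocessing with image preprocessing and `model.generate`: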
```python
def generate_output(prompt, image=None, do_sample=False, temperature=0, top_p=0.5, num_beams=1, max_new_tokens=1024):
    # Prepend the <image> placeholder, preprocess the image, and generate a response.
    image_tensors = []
    prompt = "<image>\n" + prompt
    image = Image.open(image)
    image_tensor = image_processor.preprocess(image, return_tensors="pt")["pixel_values"]
    image_tensors.append(image_tensor.half().cuda())
    input_ids = preprocess_qwen([{"from": "human", "value": prompt}, {"from": "gpt", "value": None}], tokenizer, has_image=True).cuda()
    with torch.inference_mode():
        output_ids = model.generate(
            input_ids,
            images=image_tensors,
            do_sample=do_sample,
            temperature=temperature,
            top_p=top_p,
            num_beams=num_beams,
            max_new_tokens=max_new_tokens,
            use_cache=True,
        )
    outputs = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0]
    outputs = outputs.strip()
    return outputs
```
Now, an example of using the model:
prompt = "What did you see in the image?"
image = "image.png"
print(generate_output(prompt, image=image))
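Since `generate_output` passes the decoding arguments straight to `model.generate`, you can also switch from greedy decoding to sampling; the hyperparameter values below are illustrative.

```python
# Sampling-based decoding (illustrative hyperparameters).
print(generate_output(prompt, image=image, do_sample=True, temperature=0.7, top_p=0.9))
```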
Note that the example above demonstrates multimodal usage. To use the model with text-only inputs, you would need to reload the model with:
args = {"multimodal": True}
tokenizer, model, _, context_len = load_pretrained_model(model_path, None, model_name, **args)
```python
def generate_output_text_only(prompt, do_sample=False, temperature=0, top_p=0.5, num_beams=1, max_new_tokens=1024):
    # Text-only generation: no image tensors are passed to model.generate.
    input_ids = preprocess_qwen([{"from": "human", "value": prompt}, {"from": "gpt", "value": None}], tokenizer, has_image=False).cuda()
    with torch.inference_mode():
        generated_ids = model.generate(
            input_ids,
            do_sample=do_sample,
            temperature=temperature,
            top_p=top_p,
            num_beams=num_beams,
            max_new_tokens=max_new_tokens,
            use_cache=True,
        )
    generated_ids = [output_ids[len(input_ids):] for input_ids, output_ids in zip(input_ids, generated_ids)]
    outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
    outputs = outputs.strip()
    return outputs
```
prompt = "Write me a python function that could sort a input integer list by descending order"
print(generate_output_text_only(prompt))
## Citing the Model
BibTeX Citation:
```bibtex
@article{yue2024pangeafullyopenmultilingual,
  title={Pangea: A Fully Open Multilingual Multimodal LLM for 39 Languages},
  author={Xiang Yue and Yueqi Song and Akari Asai and Seungone Kim and Jean de Dieu Nyandwi and Simran Khanuja and Anjali Kantharuban and Lintang Sutawika and Sathyanarayanan Ramamoorthy and Graham Neubig},
  year={2024},
  journal={arXiv preprint arXiv:2410.16153},
  url={https://arxiv.org/abs/2410.16153}
}
```