|
--- |
|
license: apache-2.0 |
|
datasets: |
|
- kaist-ai/Perception-Collection |
|
- kaist-ai/Perception-Bench |
|
language: |
|
- en |
|
metrics: |
|
- pearsonr |
|
- spearmanr |
|
library_name: transformers |
|
pipeline_tag: image-to-text |
|
tags: |
|
- Image-to-Text |
|
- Visual Question Answering |
|
- Text2Text Generation |
|
--- |
|
## Links for Reference |
|
- **Homepage:** |
|
- **Repository:** https://github.com/kaistAI/prometheus-vision

- **Paper:** https://arxiv.org/abs/2401.06591

- **Point of Contact:** [email protected]
|
# TL;DR |
|
Prometheus-Vision is the first open-source VLM specialized for evaluation. It shows a high correlation with both GPT-4V and human evaluators, indicating its potential as a cost-effective alternative to GPT-4V-based evaluation.
|
![image/png](./prometheus_vision.png) |
|
# Model Details |
|
|
|
## Model Description |
|
- **Model type:** Vision-Language Model |
|
- **Language(s) (NLP):** English |
|
- **License:** Apache 2.0 |
|
- **Related Models:** [All Prometheus Checkpoints](https://huggingface.co./models?search=kaist-ai/Prometheus-Vision) |
|
- **Resources for more information:** |
|
- [Research paper](https://arxiv.org/abs/2401.06591) |
|
- [GitHub Repo](https://github.com/kaistAI/prometheus-vision) |
|
|
|
Prometheus-Vision is trained in two different sizes (7B and 13B).

You can find the 7B-sized VLM on [this page](https://huggingface.co./kaist-ai/prometheus-vision-7b-v1.0).

Also, check out our dataset on [this page](https://huggingface.co./datasets/kaist-ai/Perception-Collection).
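
For a quick look at the training data, the Perception-Collection dataset can be loaded with the Hugging Face `datasets` library. The snippet below is a minimal sketch, not part of the official codebase; the split name is an assumption, so inspect the loaded dataset (or the dataset card) for the actual splits and fields.

```python
# Minimal sketch (assumption, not from the official repo): load Perception-Collection.
# The split name "train" is an assumption; check the dataset card for the real splits.
from datasets import load_dataset

perception_collection = load_dataset("kaist-ai/Perception-Collection", split="train")
print(perception_collection)      # column names and number of rows
print(perception_collection[0])   # one example (field names may differ)
```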
|
## Prompt Format |
|
Prometheus-Vision requires five components in the input: an image, an instruction, a response to evaluate, a score rubric, and a reference answer. Refer to the prompt format below.

You should fill in the instruction, response, reference answer, criteria description, and a score description for each score from 1 to 5.
|
``` |
|
###Task Description: |
|
An instruction (might include an Input inside it), a response to evaluate, a reference answer that gets a score of 5, an image and a score rubric representing an evaluation criterion is given. |
|
1. Write a detailed feedback that assess the quality of the response strictly based on the given score rubric, not evaluating in general. |
|
2. After writing a feedback, write a score that is an integer between 1 and 5. You should refer to the score rubric. |
|
3. The output format should look as follows: \"Feedback: (write a feedback for criteria) [RESULT] (an integer number between 1 and 5)\" |
|
4. Please do not generate any other opening, closing, and explanations. |
|
|
|
###The instruction to evaluate: |
|
{instruction} |
|
|
|
###Response to evaluate: |
|
{response} |
|
|
|
###Reference Answer (Score 5): |
|
{reference_answer} |
|
|
|
###Score Rubrics: |
|
[{criteria_description}] |
|
Score 1: {score1_description} |
|
Score 2: {score2_description} |
|
Score 3: {score3_description} |
|
Score 4: {score4_description} |
|
Score 5: {score5_description} |
|
|
|
###Feedback: |
|
``` |
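
The placeholders above can also be filled programmatically. The helper below is a minimal sketch (not part of the official codebase) that assumes the template text is stored in a Python string with the `{placeholder}` fields intact:

```python
# Minimal sketch (assumption, not from the official repo): fill the template above.
def build_prompt(template: str,
                 instruction: str,
                 response: str,
                 reference_answer: str,
                 criteria_description: str,
                 score_descriptions: list[str]) -> str:
    """Substitute the evaluation components into the Prometheus-Vision template."""
    assert len(score_descriptions) == 5, "one description per score from 1 to 5"
    return template.format(
        instruction=instruction,
        response=response,
        reference_answer=reference_answer,
        criteria_description=criteria_description,
        **{f"score{i + 1}_description": d for i, d in enumerate(score_descriptions)},
    )
```

In the evaluation script in the Usage section below, this filled template is what the `text` field of each question is expected to contain.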
|
## License |
|
Perception-Collection and Prometheus-Vision are subject to OpenAI's Terms of Use for the generated data. If you suspect any violations, please reach out to us.
|
# Usage |
|
Find below an example script for running the model. It relies on the `llava` utilities from the [GitHub repository](https://github.com/kaistAI/prometheus-vision) on top of `transformers`:
|
## Using the PyTorch model
|
### Running the model on a GPU |
|
<details> |
|
<summary> Click to expand </summary> |
|
|
|
```python |
|
import argparse |
|
import torch |
|
import os |
|
import json |
|
from tqdm import tqdm |
|
import shortuuid |
|
|
|
from llava.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN, DEFAULT_IM_START_TOKEN, DEFAULT_IM_END_TOKEN |
|
from llava.conversation import conv_templates, SeparatorStyle |
|
from llava.model.builder import load_pretrained_model |
|
from llava.utils import disable_torch_init |
|
from llava.mm_utils import tokenizer_image_token, get_model_name_from_path, KeywordsStoppingCriteria |
|
|
|
from PIL import Image |
|
import math |
|
|
|
|
|
def split_list(lst, n): |
|
"""Split a list into n (roughly) equal-sized chunks""" |
|
    chunk_size = math.ceil(len(lst) / n)  # ceiling division so no items are dropped
|
return [lst[i:i+chunk_size] for i in range(0, len(lst), chunk_size)] |
|
|
|
|
|
def get_chunk(lst, n, k): |
|
chunks = split_list(lst, n) |
|
return chunks[k] |
|
|
|
|
|
def eval_model(args): |
|
# Model |
|
disable_torch_init() |
|
    # Load the Prometheus-Vision checkpoint through the LLaVA model loader.
    model_path = os.path.expanduser(args.model_path)
|
model_name = 'llava-v1.5' |
|
tokenizer, model, image_processor, context_len = load_pretrained_model(model_path, args.model_base, model_name) |
|
|
|
    # Read the evaluation prompts (JSON Lines), then keep only this worker's chunk.
    questions = [json.loads(q) for q in open(os.path.expanduser(args.question_file), "r")]
|
questions = get_chunk(questions, args.num_chunks, args.chunk_idx) |
|
answers_file = os.path.expanduser(args.answers_file) |
|
os.makedirs(os.path.dirname(answers_file), exist_ok=True) |
|
ans_file = open(answers_file, "w") |
|
for line in tqdm(questions): |
|
idx = line["question_id"] |
|
image_file = line["image"] |
|
qs = line["text"] |
|
cur_prompt = qs |
|
        # Prepend the image placeholder token(s) so the visual features are inserted correctly.
        if model.config.mm_use_im_start_end:
|
qs = DEFAULT_IM_START_TOKEN + DEFAULT_IMAGE_TOKEN + DEFAULT_IM_END_TOKEN + '\n' + qs |
|
else: |
|
qs = DEFAULT_IMAGE_TOKEN + '\n' + qs |
|
|
|
conv = conv_templates[args.conv_mode].copy() |
|
conv.append_message(conv.roles[0], qs) |
|
conv.append_message(conv.roles[1], None) |
|
prompt = conv.get_prompt() |
|
|
|
        # Tokenize the prompt; the image placeholder is mapped to IMAGE_TOKEN_INDEX.
        input_ids = tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors='pt').unsqueeze(0).cuda()
|
|
|
        # Load the image and convert it to pixel values with the model's image processor.
        image = Image.open(os.path.join(args.image_folder, image_file))
|
image_tensor = image_processor.preprocess(image, return_tensors='pt')['pixel_values'][0] |
|
|
|
stop_str = conv.sep if conv.sep_style != SeparatorStyle.TWO else conv.sep2 |
|
keywords = [stop_str] |
|
stopping_criteria = KeywordsStoppingCriteria(keywords, tokenizer, input_ids) |
|
|
|
        # Generate the evaluation output: feedback text followed by "[RESULT] <score>".
        with torch.inference_mode():
|
output_ids = model.generate( |
|
input_ids, |
|
images=image_tensor.unsqueeze(0).half().cuda(), |
|
do_sample=True if args.temperature > 0 else False, |
|
temperature=args.temperature, |
|
top_p=args.top_p, |
|
num_beams=args.num_beams, |
|
# no_repeat_ngram_size=3, |
|
max_new_tokens=1024, |
|
use_cache=True) |
|
|
|
        # Decode only the newly generated tokens and strip the stop string.
        input_token_len = input_ids.shape[1]
|
n_diff_input_output = (input_ids != output_ids[:, :input_token_len]).sum().item() |
|
if n_diff_input_output > 0: |
|
print(f'[Warning] {n_diff_input_output} output_ids are not the same as the input_ids') |
|
outputs = tokenizer.batch_decode(output_ids[:, input_token_len:], skip_special_tokens=True)[0] |
|
outputs = outputs.strip() |
|
if outputs.endswith(stop_str): |
|
outputs = outputs[:-len(stop_str)] |
|
outputs = outputs.strip() |
|
|
|
ans_id = shortuuid.uuid() |
|
ans_file.write(json.dumps({"question_id": idx, |
|
"prompt": cur_prompt, |
|
"text": outputs, |
|
"answer_id": ans_id, |
|
"model_id": model_name, |
|
"metadata": {}}) + "\n") |
|
ans_file.flush() |
|
ans_file.close() |
|
|
|
if __name__ == "__main__": |
|
parser = argparse.ArgumentParser() |
|
parser.add_argument("--model-path", type=str, default="facebook/opt-350m") |
|
parser.add_argument("--model-base", type=str, default=None) |
|
parser.add_argument("--image-folder", type=str, default="") |
|
parser.add_argument("--question-file", type=str, default="tables/question.jsonl") |
|
parser.add_argument("--answers-file", type=str, default="answer.jsonl") |
|
parser.add_argument("--conv-mode", type=str, default="llava_v1") |
|
parser.add_argument("--num-chunks", type=int, default=1) |
|
parser.add_argument("--chunk-idx", type=int, default=0) |
|
parser.add_argument("--temperature", type=float, default=0.2) |
|
parser.add_argument("--top_p", type=float, default=None) |
|
parser.add_argument("--num_beams", type=int, default=1) |
|
args = parser.parse_args() |
|
|
|
eval_model(args) |
|
|
|
``` |
|
</details> |
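
Each line of the resulting answers file stores the generated evaluation in its `text` field. The snippet below is a minimal sketch, not part of the official codebase, assuming the output follows the `Feedback: ... [RESULT] N` format requested in the prompt:

```python
# Minimal sketch (assumption, not from the official repo): extract the integer
# score from each generated answer, relying on the "[RESULT] N" convention.
import json
import re

def parse_score(feedback: str):
    """Return the integer after the last [RESULT] tag, or None if it is missing."""
    matches = re.findall(r"\[RESULT\]\s*([1-5])", feedback)
    return int(matches[-1]) if matches else None

with open("answer.jsonl") as f:
    for line in f:
        answer = json.loads(line)
        print(answer["question_id"], parse_score(answer["text"]))
```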
|
|
|
# Citation |
|
|
|
If you find this model helpful, please consider citing our paper!
|
|
|
**BibTeX:** |
|
|
|
```bibtex |
|
@misc{lee2024prometheusvision, |
|
title={Prometheus-Vision: Vision-Language Model as a Judge for Fine-Grained Evaluation}, |
|
author={Seongyun Lee and Seungone Kim and Sue Hyun Park and Geewook Kim and Minjoon Seo}, |
|
year={2024}, |
|
eprint={2401.06591}, |
|
archivePrefix={arXiv}, |
|
primaryClass={cs.CL} |
|
} |
|
``` |