|
--- |
|
tags: |
|
- image-to-text |
|
- visual-question-answering |
|
- image-captioning |
|
datasets: |
|
- kaist-ai/volcano-train |
|
language: |
|
- en |
|
pipeline_tag: image-to-text |
|
library_name: transformers |
|
--- |
|
## Links for Reference |
|
|
|
- **Repository:** https://github.com/kaistAI/Volcano

- **Paper:** https://arxiv.org/abs/2311.07362
|
|
|
# Overview |
|
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6550c4f27bbfce1878f5f280/AnqbCNf6pRiQ_5uNX0r4d.png) |
|
Volcano employs a single LMM to generate initial responses, feedback, and revisions, as well as decisions on whether to accept each revision. It follows a sequential procedure: an iterative critique-revision-decide loop.
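Concretely, the loop can be sketched as below. This is an illustrative sketch, not the released implementation: `lmm` stands in for one multimodal model invoked under four hypothetical role-specific prompts ("answer", "critique", "revise", "decide"), and the stopping rule is simplified.

```python
from typing import Callable

def volcano_loop(lmm: Callable[..., str], image, question: str, max_iters: int = 3) -> str:
    """Iterative critique-revision-decide loop (illustrative sketch only)."""
    # Initial response from the single LMM.
    answer = lmm("answer", image, question)
    for _ in range(max_iters):
        # Critique: the same LMM writes natural-language self-feedback.
        feedback = lmm("critique", image, question, answer)
        # Revise: the LMM rewrites its answer conditioned on that feedback.
        revised = lmm("revise", image, question, answer, feedback)
        # Decide: the LMM judges whether the revision improves on the answer.
        decision = lmm("decide", image, question, answer, revised)
        if decision.strip() == "revised":
            answer = revised  # accept the revision and iterate again
        else:
            break  # revision rejected; keep the current answer
    return answer
```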
|
|
|
# Model details |
|
|
|
**Model type:** |
|
Volcano-7b is a multimodal self-feedback guided revision model. It was built by fine-tuning [vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5) on a mixture of the visual instruction tuning dataset used in [LLaVA-v1.5](https://llava-vl.github.io/) and multimodal feedback and revision data collected with [gpt-3.5-turbo](https://platform.openai.com/docs/models/gpt-3-5).
|
|
|
**Model date:** |
|
Volcano-7b was trained in October 2023. |
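Given the `transformers` library tag and `image-to-text` pipeline tag above, a minimal loading sketch might look like the following. The model id `kaist-ai/volcano-7b` is an assumption based on this card's naming, and since Volcano builds on LLaVA-v1.5, the checkpoint may require the loading code from the linked repository rather than the generic pipeline.

```python
from transformers import pipeline

# Minimal sketch, not a verified recipe: the model id is assumed from this
# card's naming, and a LLaVA-v1.5-based checkpoint may need the loading
# code from https://github.com/kaistAI/Volcano instead of this pipeline.
captioner = pipeline("image-to-text", model="kaist-ai/volcano-7b")

# The pipeline accepts an image URL, a local path, or a PIL.Image.
outputs = captioner("path/to/your/image.png")
print(outputs[0]["generated_text"])
```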
|
|
|
# Training dataset |
|
- **274K multimodal feedback and revision data** |
|
- 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP. |
|
- 158K GPT-generated multimodal instruction-following data. |
|
- 450K academic-task-oriented VQA data mixture. |
|
- 40K ShareGPT data.
|
|
|
The dataset used to train Volcano, which includes all of the datasets above, is available [here](https://huggingface.co/datasets/kaist-ai/volcano-train).
|
|
|
# Evaluation dataset |
|
Volcano is evaluated on a collection of three multimodal hallucination benchmarks ([MMHal-Bench](https://huggingface.co/datasets/Shengcao1006/MMHal-Bench), [POPE](https://github.com/RUCAIBox/POPE), [GAVIE](https://github.com/FuxiaoLiu/LRV-Instruction)) and two multimodal understanding benchmarks ([MM-Vet](https://github.com/yuweihao/MM-Vet), [MMBench](https://github.com/open-compass/MMBench)).