---
inference: false
pipeline_tag: image-text-to-text
---


# CCA-LLaVA Model Card

## Model details

**Model type:** CCA-LLaVA ([arXiv:2410.15926](https://arxiv.org/abs/2410.15926)) follows the LLaVA design: an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data. It is an auto-regressive language model based on the transformer architecture.

**Model date:** CCA-LLaVA-v1.5-7B was trained in April 2024.

**Paper or resources for more information:** https://github.com/xing0047/cca-llava.git
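
The card sets `inference: false`, so the checkpoint is meant to be loaded through the cca-llava codebase rather than the transformers-native `llava-hf` classes. Below is a minimal, unofficial loading-and-inference sketch. It assumes the repo mirrors the upstream LLaVA API (`llava.model.builder.load_pretrained_model`, the `llava.mm_utils` helpers, and the `vicuna_v1` conversation template); verify the exact entry points against the repository.

```python
# Minimal sketch, assuming cca-llava mirrors the upstream LLaVA codebase.
# Run from a checkout of https://github.com/xing0047/cca-llava.git with its
# dependencies installed; helper names below are assumed, not guaranteed.
import torch
from PIL import Image

from llava.constants import DEFAULT_IMAGE_TOKEN, IMAGE_TOKEN_INDEX
from llava.conversation import conv_templates
from llava.mm_utils import (
    get_model_name_from_path,
    process_images,
    tokenizer_image_token,
)
from llava.model.builder import load_pretrained_model

model_path = "xing0047/cca-llava-1.5-7b"
tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path=model_path,
    model_base=None,
    model_name=get_model_name_from_path(model_path),
)

# Build a single-turn prompt in the vicuna_v1 conversation template.
conv = conv_templates["vicuna_v1"].copy()
conv.append_message(conv.roles[0], DEFAULT_IMAGE_TOKEN + "\nWhat is shown in this image?")
conv.append_message(conv.roles[1], None)
prompt = conv.get_prompt()

# Preprocess the image and tokenize the prompt (with the image placeholder token).
image = Image.open("example.jpg").convert("RGB")
image_tensor = process_images([image], image_processor, model.config).to(
    model.device, dtype=torch.float16
)
input_ids = (
    tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors="pt")
    .unsqueeze(0)
    .to(model.device)
)

with torch.inference_mode():
    output_ids = model.generate(input_ids, images=image_tensor, max_new_tokens=256)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True).strip())
```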

## License

Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.

**Where to send questions or comments about the model:** https://github.com/xing0047/cca-llava/issues

## Intended use

**Primary intended uses:** The primary use of CCA-LLaVA is research on large multimodal models and chatbots.

**Primary intended users:** The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

## Training dataset

- 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
- 158K GPT-generated multimodal instruction-following data.
- 450K academic-task-oriented VQA data mixture.
- 40K ShareGPT data.

## Evaluation dataset

A collection of 8 benchmarks: 3 visual hallucination benchmarks and 5 recent benchmarks proposed specifically for instruction-following LMMs.