---
language:
- en
pipeline_tag: image-to-text
inference: false
arxiv: 2304.08485
datasets:
- HuggingFaceH4/llava-instruct-mix-vsft
---
# Model Card

HuggingFaceH4/vsft-llava-1.5-7b-hf-trl is a Vision Language Model, created by performing VSFT on the [llava-hf/llava-1.5-7b-hf](https://huggingface.co./llava-hf/llava-1.5-7b-hf) model with 260k image and conversation pairs from the [HuggingFaceH4/llava-instruct-mix-vsft](https://huggingface.co./datasets/HuggingFaceH4/llava-instruct-mix-vsft) dataset.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6200d0a443eb0913fa2df7cc/q5GXv6Om4Hf2n6IB3e7DQ.png) 

Or check out our Spaces demo! [![Open in Spaces](https://huggingface.co./datasets/huggingface/badges/resolve/main/open-in-hf-spaces-md-dark.svg)](https://huggingface.co./spaces/HuggingFaceH4/vlm-playground)


## Model details

**Model type:**
LLaVA is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data.
It is an auto-regressive language model, based on the transformer architecture.

**Model date:**
The model was trained on April 11th, 2024.

**Example training script:**
[Train a VLM yourself with our TRL example](https://github.com/huggingface/trl/blob/main/examples/scripts/vsft_llava.py)
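
If you just want to look at the training data before running the script above, the dataset can be loaded directly with `datasets`. This is a minimal sketch: the `train` split name is an assumption, so verify the exact layout on the dataset page.

```python
# Minimal sketch: inspect the VSFT training data used for this model.
# The split name is an assumption -- check the dataset page for the exact layout.
from datasets import load_dataset

dataset = load_dataset("HuggingFaceH4/llava-instruct-mix-vsft", split="train")

example = dataset[0]
print(example.keys())  # column names, e.g. the chat turns and associated image(s)
print(example)         # one image/conversation pair
```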

## How to use the model

The model supports multi-image and multi-prompt generation, meaning you can pass multiple images in your prompt. Make sure to follow the correct prompt template (`USER: xxx\nASSISTANT:`) and to add the token `<image>` at each location where you want to query an image.
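
For example, a prompt that queries two images under this template could look like the following minimal sketch (the question text is purely illustrative):

```python
# Minimal sketch of a multi-image prompt following the USER/ASSISTANT template.
# Place one <image> token per image, in the same order as the images you pass
# to the processor or pipeline. The questions are illustrative placeholders.
prompt = (
    "USER: <image>\nWhat is shown in the first image?\n"
    "<image>\nAnd how does the second image differ?\nASSISTANT:"
)
```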

### Using `pipeline`:

```python
from transformers import pipeline
from PIL import Image    
import requests

model_id = "HuggingFaceH4/vsft-llava-1.5-7b-hf-trl"
pipe = pipeline("image-to-text", model=model_id)
url = "https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg"

image = Image.open(requests.get(url, stream=True).raw)
prompt = "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: <image>\nWhat does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud\nASSISTANT:"

outputs = pipe(image, prompt=prompt, generate_kwargs={"max_new_tokens": 200})
print(outputs)
>>> {"generated_text": "\nUSER: What does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud\nASSISTANT: Lava"}
```

### Using pure `transformers`:

Below is an example script to run generation in `float16` precision on a GPU device:

```python
import requests
from PIL import Image

import torch
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "HuggingFaceH4/vsft-llava-1.5-7b-hf-trl"

prompt = "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: <image>\nWhat are these?\nASSISTANT:"
image_file = "http://images.cocodataset.org/val2017/000000039769.jpg"

model = LlavaForConditionalGeneration.from_pretrained(
    model_id, 
    torch_dtype=torch.float16, 
    low_cpu_mem_usage=True, 
).to(0)

processor = AutoProcessor.from_pretrained(model_id)


raw_image = Image.open(requests.get(image_file, stream=True).raw)
inputs = processor(text=prompt, images=raw_image, return_tensors='pt').to(0, torch.float16)

output = model.generate(**inputs, max_new_tokens=200, do_sample=False)
print(processor.decode(output[0][2:], skip_special_tokens=True))
```
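
Since the model supports multi-prompt generation (see above), you can also batch several prompt/image pairs in a single call. The sketch below reuses `model` and `processor` from the script above; the second URL and the questions are illustrative placeholders, and left padding is only a usual recommendation for batched decoder-only generation:

```python
# Minimal sketch of batched multi-prompt generation, reusing `model` and
# `processor` from the script above. The second image URL and the questions
# are illustrative placeholders.
prompts = [
    "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: <image>\nWhat are these?\nASSISTANT:",
    "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: <image>\nDescribe the scene.\nASSISTANT:",
]
image_urls = [
    "http://images.cocodataset.org/val2017/000000039769.jpg",
    "http://images.cocodataset.org/val2017/000000039769.jpg",  # placeholder: reuses the same image
]
images = [Image.open(requests.get(u, stream=True).raw) for u in image_urls]

# Left padding usually works better for batched generation.
processor.tokenizer.padding_side = "left"

inputs = processor(text=prompts, images=images, padding=True, return_tensors="pt").to(0, torch.float16)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=False)
print(processor.batch_decode(outputs, skip_special_tokens=True))
```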

### Model optimization

#### 4-bit quantization through `bitsandbytes` library

First make sure to install `bitsandbytes` (`pip install bitsandbytes`) and that you have access to a CUDA-compatible GPU device. Then simply change the snippet above as follows:

```diff
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, 
    torch_dtype=torch.float16, 
    low_cpu_mem_usage=True,
+   load_in_4bit=True
)
```
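
Depending on your `transformers` version, passing `load_in_4bit` directly may emit a deprecation warning in favour of an explicit `BitsAndBytesConfig`. The sketch below shows an equivalent call; `bnb_4bit_compute_dtype=torch.float16` is an illustrative choice rather than part of the original recipe, and `model_id` is the checkpoint name from the script above:

```python
# Equivalent 4-bit loading via an explicit quantization config.
# bnb_4bit_compute_dtype=torch.float16 is an illustrative choice.
import torch
from transformers import BitsAndBytesConfig, LlavaForConditionalGeneration

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

model = LlavaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
    quantization_config=quantization_config,
)
```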

#### Use Flash-Attention 2 to further speed-up generation

First make sure to install `flash-attn`; refer to the [original repository of Flash Attention](https://github.com/Dao-AILab/flash-attention) for installation instructions. Then simply change the snippet above as follows:

```diff
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, 
    torch_dtype=torch.float16, 
    low_cpu_mem_usage=True,
+   use_flash_attention_2=True
).to(0)
```
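
On newer `transformers` releases, the `use_flash_attention_2` flag is deprecated in favour of the `attn_implementation` argument. If you see a deprecation warning with the diff above, the following sketch is the equivalent call (reusing `model_id` from the script above):

```python
# Newer transformers versions select the attention backend via attn_implementation.
model = LlavaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
    attn_implementation="flash_attention_2",
).to(0)
```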

## License
Llama 2 is licensed under the LLAMA 2 Community License, 
Copyright (c) Meta Platforms, Inc. All Rights Reserved.

## Citation
```
@misc{vonwerra2022trl,
  author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang},
  title = {TRL: Transformer Reinforcement Learning},
  year = {2020},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/huggingface/trl}}
}
```