Update README.md

#1
by AI-for-Lawyers - opened
Files changed (1)
  1. README.md +228 -8
README.md CHANGED
@@ -10,22 +10,220 @@ tags:
  - chat
  - multimodal
  ---
- Quantization of the [original Molmo-7B-D-0924](https://huggingface.co/allenai/Molmo-7B-D-0924) model using ```bitsandbytes```.
 
- This model differs from the [one located here](https://huggingface.co/cyan2k/molmo-7B-D-bnb-4bit) in that it includes modified source code to reduce dependencies to achieve the same results and works out of the box.
 
- # NOTE:
 
- The example script below requires an Nvidia GPU and that you `pip install` the CUDA libraries into your virtual environment. This is NOT NECESSARY if you plan to install CUDA on a systemwide basis (as most people do). If you install CUDA systemwide, simply remove the ```set_cuda_paths``` function from the example script, but make sure that you've installed a proper version of CUDA and a compatible version of the Pytorch libraries.
 
- <details><summary>COMPATIBLE CUDA AND PYTORCH 2.2.2 VERSIONS</summary>
 
  Pytorch is only tested with specific versions of CUDA. When using pytorch 2.2.2, the following CUDA versions are required:
 
  - ```pip install nvidia-cublas-cu12==12.1.3.1```
  - ```pip install nvidia-cuda-runtime-cu12==12.1.105```
  - ```pip install nvidia-cuda-nvrtc-cu12==12.1.105```
- - ```pip install nvidia-cufft-cu12==11.0.2.54```
  - ```pip install nvidia-cudnn-cu12==8.9.2.26```
  - Then install [`torch==2.2.2`](https://download.pytorch.org/whl/cu121/torch/), [`torchvision==0.17`](https://download.pytorch.org/whl/cu121/torchvision/), and [`torchaudio==2.2.2`](https://download.pytorch.org/whl/cu121/torchaudio/) by visiting each of these three links and creating a `pip install` command based on the link for your Python version and platform.
 
@@ -42,14 +240,13 @@ pip install https://download.pytorch.org/whl/cu121/torchaudio-2.2.2%2Bcu121-cp31
  ```
  </details>
 
- <details><summary>COMPATIBLE CUDA AND PYTORCH 2.5.1 VERSIONS</summary>
 
  Pytorch is only tested with specific versions of CUDA. When using pytorch 2.5.1, the following CUDA versions are required:
 
  - ```pip install nvidia-cublas-cu12==12.4.5.8```
  - ```pip install nvidia-cuda-runtime-cu12==12.4.127```
  - ```pip install nvidia-cuda-nvrtc-cu12==12.4.127```
- - ```pip install nvidia-cufft-cu12==11.2.1.3```
  - ```pip install nvidia-cudnn-cu12==9.1.0.70```
  - Then install [`torch==2.5.1`](https://download.pytorch.org/whl/cu124/torch/), [`torchvision==0.20.1`](https://download.pytorch.org/whl/cu124/torchvision/), and [`torchaudio==2.5.1`](https://download.pytorch.org/whl/cu124/torchaudio/) by visiting each of these three links and creating a `pip install` command based on the link for your Python version and platform.
 
@@ -66,6 +263,29 @@ pip install https://download.pytorch.org/whl/cu124/torchaudio-2.5.1%2Bcu124-cp31
  ```
  </details>
 
  Example script (process single image):
 
 
  - chat
  - multimodal
  ---
+ 4-bit quantization of the original [Molmo-7B-D-0924](https://huggingface.co/allenai/Molmo-7B-D-0924) model using ```bitsandbytes```.
 
+ NOTE: This quantization differs from [the one here](https://huggingface.co/cyan2k/molmo-7B-D-bnb-4bit) by slightly modifying the source code to remove unnecessary dependencies and make it work out of the box.
 
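+ As a point of reference, the sketch below shows how a 4-bit ```bitsandbytes``` load is typically configured through ```transformers```. The specific settings are common defaults and an assumption for illustration, not a statement of how this checkpoint was produced.
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, BitsAndBytesConfig
+
+ # Hypothetical 4-bit configuration; these are common bitsandbytes
+ # defaults, not necessarily the settings used for this repository.
+ bnb_config = BitsAndBytesConfig(
+     load_in_4bit=True,
+     bnb_4bit_compute_dtype=torch.bfloat16,
+     bnb_4bit_quant_type="nf4",
+ )
+
+ # Load the original checkpoint with on-the-fly 4-bit quantization
+ model = AutoModelForCausalLM.from_pretrained(
+     'allenai/Molmo-7B-D-0924',
+     trust_remote_code=True,
+     quantization_config=bnb_config,
+     device_map='auto',
+ )
+ ```
+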
+ <details><summary>Original Model Card</summary>
 
+ # Molmo 7B-D
 
+ Molmo is a family of open vision-language models developed by the Allen Institute for AI. Molmo models are trained on PixMo, a dataset of 1 million highly curated image-text pairs. It has state-of-the-art performance among multimodal models of similar size while being fully open-source. You can find all models in the Molmo family [here](https://huggingface.co/collections/allenai/molmo-66f379e6fe3b8ef090a8ca19).
+ **Learn more** about the Molmo family [in our announcement blog post](https://molmo.allenai.org/blog) or the [paper](https://huggingface.co/papers/2409.17146).
+
+ Molmo 7B-D is based on [Qwen2-7B](https://huggingface.co/Qwen/Qwen2-7B) and uses [OpenAI CLIP](https://huggingface.co/openai/clip-vit-large-patch14-336) as its vision backbone.
+ It performs comfortably between GPT-4V and GPT-4o on both academic benchmarks and human evaluation.
+ It powers the **Molmo demo at** [**molmo.allenai.org**](https://molmo.allenai.org).
+
+ This checkpoint is a **preview** of the Molmo release. All artifacts used in creating Molmo (PixMo dataset, training code, evaluations, intermediate checkpoints) will be made available at a later date, furthering our commitment to open-source AI development and reproducibility.
+
+ [**Sign up here**](https://docs.google.com/forms/d/e/1FAIpQLSdML1MhNNBDsCHpgWG65Oydg2SjZzVasyqlP08nBrWjZp_c7A/viewform) to be the first to know when artifacts are released.
+
+ Quick links:
+ - 💬 [Demo](https://molmo.allenai.org/)
+ - 📂 [All Models](https://huggingface.co/collections/allenai/molmo-66f379e6fe3b8ef090a8ca19)
+ - 📃 [Paper](https://molmo.allenai.org/paper.pdf)
+ - 🎥 [Blog with Videos](https://molmo.allenai.org/blog)
+
+
+ ## Quick Start
+
+ To run Molmo, first install dependencies:
+
+ ```bash
+ pip install einops torchvision
+ ```
+
+ Then, follow these steps:
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoProcessor, GenerationConfig
+ from PIL import Image
+ import requests
+
+ # load the processor
+ processor = AutoProcessor.from_pretrained(
+     'allenai/Molmo-7B-D-0924',
+     trust_remote_code=True,
+     torch_dtype='auto',
+     device_map='auto'
+ )
+
+ # load the model
+ model = AutoModelForCausalLM.from_pretrained(
+     'allenai/Molmo-7B-D-0924',
+     trust_remote_code=True,
+     torch_dtype='auto',
+     device_map='auto'
+ )
+
+ # process the image and text
+ inputs = processor.process(
+     images=[Image.open(requests.get("https://picsum.photos/id/237/536/354", stream=True).raw)],
+     text="Describe this image."
+ )
+
+ # move inputs to the correct device and make a batch of size 1
+ inputs = {k: v.to(model.device).unsqueeze(0) for k, v in inputs.items()}
+
+ # generate output; maximum 200 new tokens; stop generation when <|endoftext|> is generated
+ output = model.generate_from_batch(
+     inputs,
+     GenerationConfig(max_new_tokens=200, stop_strings="<|endoftext|>"),
+     tokenizer=processor.tokenizer
+ )
+
+ # only get generated tokens; decode them to text
+ generated_tokens = output[0, inputs['input_ids'].size(1):]
+ generated_text = processor.tokenizer.decode(generated_tokens, skip_special_tokens=True)
+
+ # print the generated text
+ print(generated_text)
+
+ # >>> This image features an adorable black Labrador puppy, captured from a top-down
+ # perspective. The puppy is sitting on a wooden deck, which is composed ...
+ ```
+
+ To make inference more efficient, run with autocast:
+
+ ```python
+ with torch.autocast(device_type="cuda", enabled=True, dtype=torch.bfloat16):
+     output = model.generate_from_batch(
+         inputs,
+         GenerationConfig(max_new_tokens=200, stop_strings="<|endoftext|>"),
+         tokenizer=processor.tokenizer
+     )
+ ```
+
+ We did most of our evaluation in this setting (autocast on, but float32 weights).
+
+ To further reduce the memory requirements, the model can be run with bfloat16 weights:
+
+ ```python
+ model.to(dtype=torch.bfloat16)
+ inputs["images"] = inputs["images"].to(torch.bfloat16)
+ output = model.generate_from_batch(
+     inputs,
+     GenerationConfig(max_new_tokens=200, stop_strings="<|endoftext|>"),
+     tokenizer=processor.tokenizer
+ )
+ ```
+
+ Note that we have observed that this can change the output of the model compared to running with float32 weights.
+
+ ## Evaluations
+
+ | Model | Average Score on 11 Academic Benchmarks | Human Preference Elo Rating |
+ |-----------------------------|-----------------------------------------|-----------------------------|
+ | Molmo 72B | 81.2 | 1077 |
+ | **Molmo 7B-D (this model)** | **77.3** | **1056** |
+ | Molmo 7B-O | 74.6 | 1051 |
+ | MolmoE 1B | 68.6 | 1032 |
+ | GPT-4o | 78.5 | 1079 |
+ | GPT-4V | 71.1 | 1041 |
+ | Gemini 1.5 Pro | 78.3 | 1074 |
+ | Gemini 1.5 Flash | 75.1 | 1054 |
+ | Claude 3.5 Sonnet | 76.7 | 1069 |
+ | Claude 3 Opus | 66.4 | 971 |
+ | Claude 3 Haiku | 65.3 | 999 |
+ | Qwen VL2 72B | 79.4 | 1037 |
+ | Qwen VL2 7B | 73.7 | 1025 |
+ | Intern VL2 LLAMA 76B | 77.1 | 1018 |
+ | Intern VL2 8B | 69.4 | 953 |
+ | Pixtral 12B | 69.5 | 1016 |
+ | Phi3.5-Vision 4B | 59.7 | 982 |
+ | PaliGemma 3B | 50.0 | 937 |
+ | LLAVA OneVision 72B | 76.6 | 1051 |
+ | LLAVA OneVision 7B | 72.0 | 1024 |
+ | Cambrian-1 34B | 66.8 | 953 |
+ | Cambrian-1 8B | 63.4 | 952 |
+ | xGen-MM-Interleave 4B | 59.5 | 979 |
+ | LLAVA-1.5 13B | 43.9 | 960 |
+ | LLAVA-1.5 7B | 40.7 | 951 |
+
+ *Benchmarks: AI2D test, ChartQA test, VQA v2.0 test, DocQA test, InfographicVQA test, TextVQA val, RealWorldQA, MMMU val, MathVista testmini, CountBenchQA, Flickr Count (we collected this new dataset that is significantly harder than CountBenchQA).*
+
+ ## FAQs
+
+ ### I'm getting a broadcast error when processing images!
+
+ Your image might not be in RGB format. You can convert it using the following code snippet:
+
+ ```python
+ from PIL import Image
+
+ image = Image.open(...)
+
+ if image.mode != "RGB":
+     image = image.convert("RGB")
+ ```
+
+ ### Molmo doesn't work great with transparent images!
+
+ We received reports that Molmo models might struggle with transparent images.
+ For the time being, we recommend adding a white or dark background to your images before passing them to the model. The code snippet below shows how to do this using the Python Imaging Library (PIL):
+
+ ```python
+ import requests
+ from PIL import Image, ImageStat
+
+ # Load the image
+ url = "..."
+ image = Image.open(requests.get(url, stream=True).raw)
+
+ # Convert the image to grayscale to calculate brightness
+ gray_image = image.convert('L')  # Convert to grayscale
+
+ # Calculate the average brightness
+ stat = ImageStat.Stat(gray_image)
+ average_brightness = stat.mean[0]  # Get the average value
+
+ # Define background color based on brightness (threshold can be adjusted)
+ bg_color = (0, 0, 0) if average_brightness > 127 else (255, 255, 255)
+
+ # Create a new image with the same size as the original, filled with the background color
+ new_image = Image.new('RGB', image.size, bg_color)
+
+ # Paste the original image on top of the background (use image as a mask if needed)
+ new_image.paste(image, (0, 0), image if image.mode == 'RGBA' else None)
+
+ # Now you can pass the new_image to Molmo
+ processor = AutoProcessor.from_pretrained(
+     'allenai/Molmo-7B-D-0924',
+     trust_remote_code=True,
+     torch_dtype='auto',
+     device_map='auto'
+ )
+ ```
+
+ ## License and Use
+
+ This model is licensed under Apache 2.0. It is intended for research and educational use.
+ For more information, please see our [Responsible Use Guidelines](https://allenai.org/responsible-use).
+
+ </details>
+
+ <br>
+
+ # Usage:
+
+ The example script below requires an Nvidia GPU and that you `pip install` the CUDA libraries into your virtual environment. Pip installing the CUDA libraries is NOT required if you install them systemwide (as most people do), in which case simply remove the ```set_cuda_paths``` function. However, make sure that you've installed compatible CUDA and Pytorch versions.
+
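+ For reference, here is a minimal sketch of what a ```set_cuda_paths```-style helper typically does. The function shipped with the example script may differ, so treat the directory layout and environment-variable names below as assumptions (the layout shown is the Windows virtual-environment convention).
+
+ ```python
+ import os
+ import sys
+ from pathlib import Path
+
+ def set_cuda_paths():
+     # Hypothetical sketch: point the process at the pip-installed
+     # nvidia.* wheels inside the active virtual environment so that
+     # torch/bitsandbytes can find cuBLAS, cuDNN, etc. at runtime.
+     venv_base = Path(sys.executable).parent.parent
+     nvidia_base = venv_base / 'Lib' / 'site-packages' / 'nvidia'  # Windows layout
+     bin_dirs = [str(p / 'bin') for p in nvidia_base.iterdir() if (p / 'bin').exists()]
+     os.environ['PATH'] = os.pathsep.join(bin_dirs + [os.environ.get('PATH', '')])
+     os.environ['CUDA_PATH'] = str(nvidia_base / 'cuda_runtime')
+
+ set_cuda_paths()  # call before importing torch
+ ```
+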
+ <details><summary>COMPATIBLE CUDA AND PYTORCH 2.2.2 COMBINATIONS</summary>
 
  Pytorch is only tested with specific versions of CUDA. When using pytorch 2.2.2, the following CUDA versions are required:
 
  - ```pip install nvidia-cublas-cu12==12.1.3.1```
  - ```pip install nvidia-cuda-runtime-cu12==12.1.105```
  - ```pip install nvidia-cuda-nvrtc-cu12==12.1.105```
  - ```pip install nvidia-cudnn-cu12==8.9.2.26```
  - Then install [`torch==2.2.2`](https://download.pytorch.org/whl/cu121/torch/), [`torchvision==0.17`](https://download.pytorch.org/whl/cu121/torchvision/), and [`torchaudio==2.2.2`](https://download.pytorch.org/whl/cu121/torchaudio/) by visiting each of these three links and creating a `pip install` command based on the link for your Python version and platform.
 
  ```
  </details>
 
+ <details><summary>COMPATIBLE CUDA AND PYTORCH 2.5.1 COMBINATIONS</summary>
 
  Pytorch is only tested with specific versions of CUDA. When using pytorch 2.5.1, the following CUDA versions are required:
 
  - ```pip install nvidia-cublas-cu12==12.4.5.8```
  - ```pip install nvidia-cuda-runtime-cu12==12.4.127```
  - ```pip install nvidia-cuda-nvrtc-cu12==12.4.127```
  - ```pip install nvidia-cudnn-cu12==9.1.0.70```
  - Then install [`torch==2.5.1`](https://download.pytorch.org/whl/cu124/torch/), [`torchvision==0.20.1`](https://download.pytorch.org/whl/cu124/torchvision/), and [`torchaudio==2.5.1`](https://download.pytorch.org/whl/cu124/torchaudio/) by visiting each of these three links and creating a `pip install` command based on the link for your Python version and platform.
 
  ```
  </details>
 
+ <details><summary>COMPATIBLE CUDA AND PYTORCH 2.6.0 COMBINATIONS</summary>
+
+ Pytorch is only tested with specific versions of CUDA. When using pytorch 2.6.0, the following CUDA versions are required:
+
+ - ```pip install nvidia-cublas-cu12==12.6.4.1```
+ - ```pip install nvidia-cuda-runtime-cu12==12.6.77```
+ - ```pip install nvidia-cuda-nvrtc-cu12==12.6.77```
+ - ```pip install nvidia-cudnn-cu12==9.5.1.17```
+ - Then install [`torch==2.6.0`](https://download.pytorch.org/whl/cu126/torch/), [`torchvision==0.21.0`](https://download.pytorch.org/whl/cu126/torchvision/), and [`torchaudio==2.6.0`](https://download.pytorch.org/whl/cu126/torchaudio/) by visiting each of these three links and creating a `pip install` command based on the link for your Python version and platform.
+
+ For example, for Windows using Python 3.11 you would use the following:
+
+ ```
+ pip install https://download.pytorch.org/whl/cu126/torch-2.6.0%2Bcu126-cp311-cp311-win_amd64.whl#sha256=5ddca43b81c64df8ce0c59260566e648ee46b2622ab6a718e38dea3c0ca059a1
+ ```
+ ```
+ pip install https://download.pytorch.org/whl/cu126/torchvision-0.21.0%2Bcu126-cp311-cp311-win_amd64.whl#sha256=ddbf4516fbb7624ac42934b877dcf6a3b295d9914ab89643b55dedb9c9773ce4
+ ```
+ ```
+ pip install https://download.pytorch.org/whl/cu126/torchaudio-2.6.0%2Bcu126-cp311-cp311-win_amd64.whl#sha256=833b8e350c77021400fed2271df10ecd02b88f684bbc9d57132faa0efc9a0a57
+ ```
+ </details>
+
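+ Once CUDA and Pytorch are installed, a quick sanity check can confirm that the pairing works. This is a generic snippet, not part of the original example script:
+
+ ```python
+ import torch
+
+ # Verify that torch was built against the CUDA version you installed
+ # and that it can actually see the GPU.
+ print(torch.__version__)          # e.g. 2.6.0+cu126
+ print(torch.version.cuda)         # e.g. 12.6
+ print(torch.cuda.is_available())  # should print True on a working setup
+ ```
+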
 
  Example script (process single image):
291