4-bit quantization of the original Molmo-7B-D-0924 model using bitsandbytes.
NOTE: This quantization differs from the one here in that the source code has been slightly modified to remove unnecessary dependencies and otherwise make the model work out of the box.
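For reference, below is a minimal sketch of how a 4-bit bitsandbytes quantization of this model can be produced through transformers; the concrete settings (NF4, bfloat16 compute) are assumptions, not necessarily the ones used for this checkpoint:
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
# 4-bit quantization config; NF4 with bfloat16 compute is a common choice,
# but these settings are assumptions rather than the exact ones used here
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
# load the original checkpoint directly in 4-bit
model = AutoModelForCausalLM.from_pretrained(
    'allenai/Molmo-7B-D-0924',
    quantization_config=bnb_config,
    trust_remote_code=True,
    device_map='auto'
)
# save the quantized weights so they can be re-loaded without re-quantizing
# (requires a recent transformers/bitsandbytes with 4-bit serialization support)
model.save_pretrained('Molmo-7B-D-0924-bnb-4bit')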
Original Model Card
Molmo 7B-D
Molmo is a family of open vision-language models developed by the Allen Institute for AI. Molmo models are trained on PixMo, a dataset of 1 million, highly-curated image-text pairs. It has state-of-the-art performance among multimodal models with a similar size while being fully open-source. You can find all models in the Molmo family here. Learn more about the Molmo family in our announcement blog post or the paper.
Molmo 7B-D is based on Qwen2-7B and uses OpenAI CLIP as vision backbone. It performs comfortably between GPT-4V and GPT-4o on both academic benchmarks and human evaluation. It powers the Molmo demo at molmo.allenai.org.
This checkpoint is a preview of the Molmo release. All artifacts used in creating Molmo (PixMo dataset, training code, evaluations, intermediate checkpoints) will be made available at a later date, furthering our commitment to open-source AI development and reproducibility.
Sign up here to be the first to know when artifacts are released.
Quick links:
- Demo
- All Models
- Paper
- Blog with Videos
Quick Start
To run Molmo, first install dependencies:
pip install einops torchvision
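If you are running this 4-bit bitsandbytes quantization rather than the original float checkpoint, you will most likely also need bitsandbytes and accelerate installed (an assumption based on the quantization method, not part of the original card):
pip install bitsandbytes accelerate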
Then, follow these steps:
from transformers import AutoModelForCausalLM, AutoProcessor, GenerationConfig
from PIL import Image
import requests
# load the processor
processor = AutoProcessor.from_pretrained(
'allenai/Molmo-7B-D-0924',
trust_remote_code=True,
torch_dtype='auto',
device_map='auto'
)
# load the model
model = AutoModelForCausalLM.from_pretrained(
'allenai/Molmo-7B-D-0924',
trust_remote_code=True,
torch_dtype='auto',
device_map='auto'
)
# process the image and text
inputs = processor.process(
images=[Image.open(requests.get("https://picsum.photos/id/237/536/354", stream=True).raw)],
text="Describe this image."
)
# move inputs to the correct device and make a batch of size 1
inputs = {k: v.to(model.device).unsqueeze(0) for k, v in inputs.items()}
# generate output; maximum 200 new tokens; stop generation when <|endoftext|> is generated
output = model.generate_from_batch(
inputs,
GenerationConfig(max_new_tokens=200, stop_strings="<|endoftext|>"),
tokenizer=processor.tokenizer
)
# only get generated tokens; decode them to text
generated_tokens = output[0,inputs['input_ids'].size(1):]
generated_text = processor.tokenizer.decode(generated_tokens, skip_special_tokens=True)
# print the generated text
print(generated_text)
# >>> This image features an adorable black Labrador puppy, captured from a top-down
# perspective. The puppy is sitting on a wooden deck, which is composed ...
To make inference more efficient, run with autocast:
with torch.autocast(device_type="cuda", enabled=True, dtype=torch.bfloat16):
output = model.generate_from_batch(
inputs,
GenerationConfig(max_new_tokens=200, stop_strings="<|endoftext|>"),
tokenizer=processor.tokenizer
)
We did most of our evaluation in this setting (autocast on, but float32 weights).
To even further reduce the memory requirements, the model can be run with bfloat16 weights:
model.to(dtype=torch.bfloat16)
inputs["images"] = inputs["images"].to(torch.bfloat16)
output = model.generate_from_batch(
inputs,
GenerationConfig(max_new_tokens=200, stop_strings="<|endoftext|>"),
tokenizer=processor.tokenizer
)
Note that we have observed that this can change the output of the model compared to running with float32 weights.
Evaluations
| Model | Average Score on 11 Academic Benchmarks | Human Preference Elo Rating |
|---|---|---|
| Molmo 72B | 81.2 | 1077 |
| Molmo 7B-D (this model) | 77.3 | 1056 |
| Molmo 7B-O | 74.6 | 1051 |
| MolmoE 1B | 68.6 | 1032 |
| GPT-4o | 78.5 | 1079 |
| GPT-4V | 71.1 | 1041 |
| Gemini 1.5 Pro | 78.3 | 1074 |
| Gemini 1.5 Flash | 75.1 | 1054 |
| Claude 3.5 Sonnet | 76.7 | 1069 |
| Claude 3 Opus | 66.4 | 971 |
| Claude 3 Haiku | 65.3 | 999 |
| Qwen VL2 72B | 79.4 | 1037 |
| Qwen VL2 7B | 73.7 | 1025 |
| Intern VL2 LLAMA 76B | 77.1 | 1018 |
| Intern VL2 8B | 69.4 | 953 |
| Pixtral 12B | 69.5 | 1016 |
| Phi3.5-Vision 4B | 59.7 | 982 |
| PaliGemma 3B | 50.0 | 937 |
| LLAVA OneVision 72B | 76.6 | 1051 |
| LLAVA OneVision 7B | 72.0 | 1024 |
| Cambrian-1 34B | 66.8 | 953 |
| Cambrian-1 8B | 63.4 | 952 |
| xGen-MM-Interleave 4B | 59.5 | 979 |
| LLAVA-1.5 13B | 43.9 | 960 |
| LLAVA-1.5 7B | 40.7 | 951 |
Benchmarks: AI2D test, ChartQA test, VQA v2.0 test, DocQA test, InfographicVQA test, TextVQA val, RealWorldQA, MMMU val, MathVista testmini, CountBenchQA, Flickr Count (we collected this new dataset that is significantly harder than CountBenchQA).
FAQs
I'm getting a broadcast error when processing images!
Your image might not be in RGB format. You can convert it using the following code snippet:
from PIL import Image
image = Image.open(...)
if image.mode != "RGB":
image = image.convert("RGB")
Molmo doesn't work great with transparent images!
We received reports that Molmo models might struggle with transparent images. For the time being, we recommend adding a white or dark background to your images before passing them to the model. The code snippet below shows how to do this using the Python Imaging Library (PIL):
import requests
from PIL import Image, ImageStat
# Load the image
url = "..."
image = Image.open(requests.get(url, stream=True).raw)
# Convert the image to grayscale to calculate brightness
gray_image = image.convert('L') # Convert to grayscale
# Calculate the average brightness
stat = ImageStat.Stat(gray_image)
average_brightness = stat.mean[0] # Get the average value
# Define background color based on brightness (threshold can be adjusted)
bg_color = (0, 0, 0) if average_brightness > 127 else (255, 255, 255)
# Create a new image with the same size as the original, filled with the background color
new_image = Image.new('RGB', image.size, bg_color)
# Paste the original image on top of the background (use image as a mask if needed)
new_image.paste(image, (0, 0), image if image.mode == 'RGBA' else None)
# Now you can pass the new_image to Molmo
processor = AutoProcessor.from_pretrained(
'allenai/Molmo-7B-D-0924',
trust_remote_code=True,
torch_dtype='auto',
device_map='auto'
)
License and Use
This model is licensed under Apache 2.0. It is intended for research and educational use. For more information, please see our Responsible Use Guidelines.
Usage:
The example script below requires an Nvidia GPU and that you pip install the CUDA libraries into your virtual environment. Pip installing the CUDA libraries is NOT required if you install them systemwide (as most people do), in which case simply remove the set_cuda_paths function. Either way, make sure that you've installed compatible CUDA and Pytorch versions.
COMPATIBLE CUDA AND PYTORCH 2.2.2 COMBINATIONS
Pytorch is only tested with specific versions of CUDA. When using pytorch 2.2.2, the following CUDA versions are required:
pip install nvidia-cublas-cu12==12.1.3.1
pip install nvidia-cuda-runtime-cu12==12.1.105
pip install nvidia-cuda-nvrtc-cu12==12.1.105
pip install nvidia-cudnn-cu12==8.9.2.26
- Then install torch==2.2.2, torchvision==0.17.2, and torchaudio==2.2.2 by visiting each of these three links and creating a pip install command based on the link for your Python version and platform.
For example, for Windows using Python 3.11 you would use the following:
pip install https://download.pytorch.org/whl/cu121/torch-2.2.2%2Bcu121-cp311-cp311-win_amd64.whl#sha256=efbcfdd4399197d06b32f7c0e1711c615188cdd65427b933648c7478fb880b3f
pip install https://download.pytorch.org/whl/cu121/torchvision-0.17.2%2Bcu121-cp311-cp311-win_amd64.whl#sha256=10ad542aab6b47dbe73c441381986d50a7ed5021cbe01d593a14477ec1f067a0
pip install https://download.pytorch.org/whl/cu121/torchaudio-2.2.2%2Bcu121-cp311-cp311-win_amd64.whl#sha256=c7dee68cd3d2b889bab71d4a0c345bdc3ea2fe79a62b921a6b49292c605b6071
COMPATIBLE CUDA AND PYTORCH 2.5.1 COMBINATIONS
Pytorch is only tested with specific versions of CUDA. When using pytorch 2.5.1, the following CUDA versions are required:
pip install nvidia-cublas-cu12==12.4.5.8
pip install nvidia-cuda-runtime-cu12==12.4.127
pip install nvidia-cuda-nvrtc-cu12==12.4.127
pip install nvidia-cudnn-cu12==9.1.0.70
- Then install torch==2.5.1, torchvision==0.20.1, and torchaudio==2.5.1 by visiting each of these three links and creating a pip install command based on the link for your Python version and platform.
For example, for Windows using Python 3.11 you would use the following:
pip install https://download.pytorch.org/whl/cu124/torch-2.5.1%2Bcu124-cp311-cp311-win_amd64.whl#sha256=6c8a7003ef1327479ede284b6e5ab3527d3900c2b2d401af15bcc50f2245a59f
pip install https://download.pytorch.org/whl/cu124/torchvision-0.20.1%2Bcu124-cp311-cp311-win_amd64.whl#sha256=15796b453a99ed0f0cbc249d129685ddc88157310135fb3addaf738a15db5306
pip install https://download.pytorch.org/whl/cu124/torchaudio-2.5.1%2Bcu124-cp311-cp311-win_amd64.whl#sha256=b3d75f4e6efc5412fe78c7f2787ee4f39cea1317652e1a47785879cde109f5c4
COMPATIBLE CUDA AND PYTORCH 2.6.0 COMBINATIONS
Pytorch is only tested with specific versions of CUDA. When using pytorch 2.6.0, the following CUDA versions are required:
pip install nvidia-cublas-cu12==12.6.4.1
pip install nvidia-cuda-runtime-cu12==12.6.77
pip install nvidia-cuda-nvrtc-cu12==12.6.77
pip install nvidia-cudnn-cu12==9.5.1.17
- Then install torch==2.6.0, torchvision==0.21.0, and torchaudio==2.6.0 by visiting each of these three links and creating a pip install command based on the link for your Python version and platform.
For example, for Windows using Python 3.11 you would use the following:
pip install https://download.pytorch.org/whl/cu126/torch-2.6.0%2Bcu126-cp311-cp311-win_amd64.whl#sha256=5ddca43b81c64df8ce0c59260566e648ee46b2622ab6a718e38dea3c0ca059a1
pip install https://download.pytorch.org/whl/cu126/torchvision-0.21.0%2Bcu126-cp311-cp311-win_amd64.whl#sha256=ddbf4516fbb7624ac42934b877dcf6a3b295d9914ab89643b55dedb9c9773ce4
pip install https://download.pytorch.org/whl/cu126/torchaudio-2.6.0%2Bcu126-cp311-cp311-win_amd64.whl#sha256=833b8e350c77021400fed2271df10ecd02b88f684bbc9d57132faa0efc9a0a57
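After installing one of the combinations above, a quick sanity check (not part of the original instructions) is to confirm that PyTorch reports the versions you expect and can see a CUDA device:
import torch
# print the installed PyTorch version, the CUDA version it was built against,
# and whether a CUDA-capable GPU is actually visible
print(torch.__version__)
print(torch.version.cuda)
print(torch.cuda.is_available())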
Example script (process single image):
import sys
import os
from pathlib import Path
def set_cuda_paths():
venv_base = Path(sys.executable).parent.parent
nvidia_base_path = venv_base / 'Lib' / 'site-packages' / 'nvidia'
cuda_path = nvidia_base_path / 'cuda_runtime' / 'bin'
cublas_path = nvidia_base_path / 'cublas' / 'bin'
cudnn_path = nvidia_base_path / 'cudnn' / 'bin'
nvrtc_path = nvidia_base_path / 'cuda_nvrtc' / 'bin'
paths_to_add = [
str(cuda_path),
str(cublas_path),
str(cudnn_path),
str(nvrtc_path),
]
env_vars = ['CUDA_PATH', 'PATH']
for env_var in env_vars:
current_value = os.environ.get(env_var, '')
new_value = os.pathsep.join(paths_to_add + [current_value] if current_value else paths_to_add)
os.environ[env_var] = new_value
set_cuda_paths()
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor, GenerationConfig
model_path = r"[INSERT THE PATH TO THE FOLDER HOLDING THE MODEL FILES HERE]"
class VisionModel:
def __init__(self):
self.model = None
self.processor = None
self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
def initialize_model_and_processor(self):
self.processor = AutoProcessor.from_pretrained(
model_path,
trust_remote_code=True,
torch_dtype='auto',
device_map='auto'
)
self.model = AutoModelForCausalLM.from_pretrained(
model_path,
trust_remote_code=True,
torch_dtype='auto',
device_map='auto'
)
def process_single_image(self, image_path):
image = Image.open(image_path)
if image.mode != "RGB":
image = image.convert("RGB")
text = "Describe this image in detail as possible but be succinct and don't repeat yourself."
inputs = self.processor.process(images=[image], text=text)
inputs = {k: v.to(self.device).unsqueeze(0) for k, v in inputs.items()}
output = self.model.generate_from_batch(
inputs,
GenerationConfig(max_new_tokens=500, stop_strings=["<|endoftext|>"]),
tokenizer=self.processor.tokenizer
)
generated_text = self.processor.tokenizer.decode(output[0, inputs['input_ids'].size(1):], skip_special_tokens=True)
print(f"\nGenerated Text:\n{generated_text}\n")
if __name__ == "__main__":
image_path = r"[INSERT THE PATH TO THE IMAGE YOU WANT TO PROCESS HERE]"
vision_model = VisionModel()
vision_model.initialize_model_and_processor()
vision_model.process_single_image(image_path)
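To run the same model over a folder of images instead of a single file, the VisionModel class above can be reused along these lines; the folder path and the set of file extensions are placeholders/assumptions:
from pathlib import Path
def process_image_folder(folder_path):
    # hypothetical helper: applies process_single_image to every image in a folder
    vision_model = VisionModel()
    vision_model.initialize_model_and_processor()
    extensions = {'.jpg', '.jpeg', '.png', '.bmp', '.webp'}
    for image_file in sorted(Path(folder_path).iterdir()):
        if image_file.suffix.lower() in extensions:
            print(f"Processing {image_file.name}")
            vision_model.process_single_image(str(image_file))
# process_image_folder(r"[INSERT THE PATH TO A FOLDER OF IMAGES HERE]")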