
Molmo-7B-D BnB 4-bit quant (30 GB -> 7 GB)

Approx. 12 GB of VRAM required.
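The size reduction quoted above can be sanity-checked with back-of-envelope arithmetic, a rough sketch assuming ~7B parameters; real checkpoints carry extra overhead (layers kept in higher precision, quantization constants, the vision tower, etc.), which is why the repo is ~7 GB rather than ~3.5 GB:

```python
# Rough size estimate for a ~7B-parameter model: fp32 vs. 4-bit weights.
PARAMS = 7e9

fp32_gb = PARAMS * 4 / 1e9   # fp32 = 4 bytes per parameter
nf4_gb = PARAMS * 0.5 / 1e9  # 4 bits = 0.5 bytes per parameter

print(f"fp32 weights: ~{fp32_gb:.0f} GB")   # ~28 GB, in line with the ~30 GB original
print(f"4-bit weights: ~{nf4_gb:.1f} GB")   # ~3.5 GB; unquantized parts push the repo to ~7 GB
```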

See the base model for more information:

https://huggingface.co./allenai/Molmo-7B-D-0924

Example code:

https://github.com/cyan2k/molmo-7b-bnb-4bit
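For reference, loading the quantized repo can be sketched as below. This is a minimal, hypothetical example (not the official snippet from the repo above), assuming `transformers`, `bitsandbytes`, and `torch` are installed and a GPU with ~12 GB of free VRAM is available; Molmo repos ship custom modeling code, so `trust_remote_code=True` is required:

```python
# Hypothetical loader sketch for the 4-bit quantized checkpoint.
MODEL_ID = "cyan2k/molmo-7B-D-bnb-4bit"

def load_molmo(model_id=MODEL_ID):
    # Heavy imports kept inside the helper so the module can be
    # inspected without the GPU dependencies installed.
    from transformers import AutoModelForCausalLM, AutoProcessor

    processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        trust_remote_code=True,  # Molmo ships custom modeling code
        torch_dtype="auto",      # weights are already stored 4-bit quantized
        device_map="auto",       # place layers on the available GPU(s)
    )
    return processor, model

if __name__ == "__main__":
    processor, model = load_molmo()
    print(model.config.model_type)
```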

Performance metrics and benchmarks comparing against the base model will follow over the next week.

Safetensors · Model size: 4.67B params · Tensor types: F32, U8
Note: the Inference API (serverless) does not yet support model repos that contain custom code.

Model tree for cyan2k/molmo-7B-D-bnb-4bit: base model Qwen/Qwen2-7B → quantized (this model)