---
pipeline_tag: visual-question-answering
---
## MiniCPM-Llama3-V 2.5 int4
This is the int4 quantized version of [MiniCPM-Llama3-V 2.5](https://huggingface.co./openbmb/MiniCPM-Llama3-V-2_5).
Running the int4 version requires less GPU memory (about 9 GB).
## Usage
Inference using Hugging Face Transformers on NVIDIA GPUs. Requirements tested on Python 3.10:
```
Pillow==10.1.0
torch==2.1.2
torchvision==0.16.2
transformers==4.40.0
sentencepiece==0.1.99
accelerate==0.30.1
bitsandbytes==0.43.1
```
```python
# test.py
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer

# Load the int4-quantized model and its tokenizer.
# trust_remote_code is required because the model ships custom modeling code.
model = AutoModel.from_pretrained('openbmb/MiniCPM-Llama3-V-2_5-int4', trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-Llama3-V-2_5-int4', trust_remote_code=True)
model.eval()

# Prepare the image and the question as a single-turn chat message.
image = Image.open('xx.jpg').convert('RGB')
question = 'What is in the image?'
msgs = [{'role': 'user', 'content': question}]

# Run inference; sampling=True enables temperature-based sampling.
res = model.chat(
    image=image,
    msgs=msgs,
    tokenizer=tokenizer,
    sampling=True,
    temperature=0.7
)
print(res)
```
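To verify the memory footprint on your own hardware, you can query PyTorch's allocator after inference finishes. This is a minimal sketch for illustration only; the ~9 GB figure above is approximate, and the exact peak depends on the input image size and generation length.

```python
import torch

# Peak GPU memory allocated by PyTorch in this process, in GiB.
# Run this right after model.chat(...) in test.py above.
peak_gib = torch.cuda.max_memory_allocated() / (1024 ** 3)
print(f'Peak GPU memory allocated: {peak_gib:.2f} GiB')
```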