How can I deploy idefics2-8b with TensorRT + Triton?

#31 opened by catworld1212

How can I deploy idefics2-8b with TensorRT + Triton? It would be cool if you guys wrote a blog about deploying VLMs with TensorRT + Triton.

Hi @marksuccsmfewercoc
I am not familiar with TensorRT and Triton.
@mfuntowicz or @regisss, do we have resources on how someone would do that?

@mfuntowicz or @regisss, any idea about this?

You have two routes:
1. (Most preferred) Export the HF model to ONNX, use TensorRT to generate an optimized engine file, and deploy on Triton with the required preprocessing. (Current challenge: you cannot directly export to ONNX, since Optimum hasn't added export support for this model yet.)
2. (Less preferred due to lower performance) Create a Python backend on Triton using the HF libraries and run it with Triton. There will be no acceleration, just better inference serving; see the sketch below.
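Here is a minimal sketch of what the model.py for route 2 (Triton Python backend) could look like. The tensor names (PROMPT, IMAGE_URL, GENERATED_TEXT), the single-image/single-turn setup, and the matching config.pbtxt with TYPE_STRING inputs and outputs are assumptions for illustration, not an official recipe:

# model.py sketch for a Triton Python backend serving idefics2-8b (no TensorRT acceleration).
# Tensor names below are illustrative assumptions and must match the config.pbtxt you write.
import numpy as np
import torch
import triton_python_backend_utils as pb_utils
from transformers import AutoProcessor, AutoModelForVision2Seq
from transformers.image_utils import load_image


class TritonPythonModel:
    def initialize(self, args):
        # Load idefics2 once when the Triton model instance starts.
        self.device = "cuda:0" if torch.cuda.is_available() else "cpu"
        self.processor = AutoProcessor.from_pretrained("HuggingFaceM4/idefics2-8b")
        self.model = AutoModelForVision2Seq.from_pretrained("HuggingFaceM4/idefics2-8b").to(self.device)

    def execute(self, requests):
        responses = []
        for request in requests:
            # TYPE_STRING tensors arrive as numpy arrays of bytes objects.
            prompt = pb_utils.get_input_tensor_by_name(request, "PROMPT").as_numpy()[0].decode("utf-8")
            image_url = pb_utils.get_input_tensor_by_name(request, "IMAGE_URL").as_numpy()[0].decode("utf-8")

            # Same preprocessing as plain transformers inference: chat template + processor.
            image = load_image(image_url)
            messages = [{
                "role": "user",
                "content": [{"type": "image"}, {"type": "text", "text": prompt}],
            }]
            text = self.processor.apply_chat_template(messages, add_generation_prompt=True)
            inputs = self.processor(text=text, images=[image], return_tensors="pt")
            inputs = {k: v.to(self.device) for k, v in inputs.items()}

            generated_ids = self.model.generate(**inputs, max_new_tokens=256)
            generated_text = self.processor.batch_decode(generated_ids, skip_special_tokens=True)[0]

            out = pb_utils.Tensor(
                "GENERATED_TEXT",
                np.array([generated_text.encode("utf-8")], dtype=np.object_),
            )
            responses.append(pb_utils.InferenceResponse(output_tensors=[out]))
        return responses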

Hey @VictorSanh, how do I give idefics2-8b previous chat context with images?

Hi @marksuccsmfewercoc,
Is https://huggingface.co./HuggingFaceM4/idefics2-8b#how-to-get-started (and more specifically the messages list for idefics2-8b) useful?

@VictorSanh I saw that, but I don't think it's working properly. Here is my code; it responded "I'm not sure what you mean by that. Can you please clarify?" when I asked whether all of its answers in the previous conversation were correct.

import requests
import torch
from PIL import Image
from io import BytesIO

from transformers import AutoProcessor, AutoModelForVision2Seq
from transformers.image_utils import load_image

DEVICE = "cuda:0"

# Note that passing the image URLs (instead of the actual PIL images) to the processor is also possible
image1 = load_image("https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg")
image2 = load_image("https://cdn.britannica.com/59/94459-050-DBA42467/Skyline-Chicago.jpg")
image3 = load_image("https://cdn.britannica.com/68/170868-050-8DDE8263/Golden-Gate-Bridge-San-Francisco.jpg")

processor = AutoProcessor.from_pretrained("HuggingFaceM4/idefics2-8b")
model = AutoModelForVision2Seq.from_pretrained(
    "HuggingFaceM4/idefics2-8b",
).to(DEVICE)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Where is this place?"},
        ],
    },
    {
        "role": "assistant",
        "content": [
            {"type": "text", "text": "London"},
        ],
    },
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Where is this place?"},
        ],
    },
    {
        "role": "assistant",
        "content": [
            {"type": "text", "text": "San Francisco"},
        ],
    },
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Do you think all the previous conversations we had all your answers were correct? what were the images in our previous conversations "},
        ],
    },
]

prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image1, image2, image3], return_tensors="pt")
inputs = {k: v.to(DEVICE) for k, v in inputs.items()}

generated_ids = model.generate(**inputs, max_new_tokens=500)
generated_texts = processor.batch_decode(generated_ids, skip_special_tokens=True)

print(generated_texts)

Hi @marksuccsmfewercoc,
I think it would be worth reformulating your last query into a more grammatical sentence; I think it's confusing the model.
For instance, I tried "Do you think that in all the previous conversations we had, your answers were correct? If not, where were these images taken?"
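Concretely, that would just mean changing the text of the final user turn in the messages list above, e.g.:

# Reword only the last user turn; the rest of the messages list stays the same.
messages[-1] = {
    "role": "user",
    "content": [
        {"type": "image"},
        {"type": "text", "text": "Do you think that in all the previous conversations we had, "
                                 "your answers were correct? If not, where were these images taken?"},
    ],
}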

Hey @VictorSanh, can you please ask the TGI team to add an example of deploying idefics2 on TGI? I can't find any example of how to do that.

It gives "Unsupported model type" with TGI using the ghcr.io/huggingface/text-generation-inference:1.4 image.
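In case it helps, here is a rough client-side sketch. It assumes an idefics2-capable TGI image newer than 1.4 is already running locally on port 8080, and that TGI accepts images embedded in the prompt as markdown-style links; both the prompt format and the server setup are assumptions, not a verified recipe.

from huggingface_hub import InferenceClient

# Assumes a TGI server that supports idefics2 is already serving at this address
# (the 1.4 image above does not), and that images can be passed inline as
# markdown-style links in the prompt; both points are assumptions, not verified.
client = InferenceClient("http://localhost:8080")

image_url = "https://cdn.britannica.com/59/94459-050-DBA42467/Skyline-Chicago.jpg"
prompt = f"![]({image_url})Where is this place?"

print(client.text_generation(prompt, max_new_tokens=100))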
