Hello, can you give me some advice to do inference in using exported onnx model?

#1 by lieding1994

Hi, thank you for your clever way to export the onnx model. I searched the whole internet but found no way to export a fine-tuned Florence2 model. I have exported my own fine-tuned Florence2 model following the approach introduced in this repo, but now I have no idea how to use it. I succeeded in getting an onnxruntime result, but what do I do next? Use the "processor" or something else? I look forward to your response.

Hey, this repo only contains the vision tower part of Florence (Step 1); I haven't finished the rest yet (Steps 2 and 3).

To completely export the Florence model to onnx I need to refactor the language model part of Florence (Step 2, another repo maybe...) and then refactor the Florence model as a whole (Step 3).

Meaning the overall goal requires three parts:

  1. Refactor & Export Vision Tower. (Done)
  2. Refactor & Export Language Model. (Not Done)
  3. Refactor & Export the Overall Model. (Not Done)

Note: Personally, I'm happy to run the rest of the code with the AutoProcessor already implemented by the creators (see the code below), as the Processor isn't computationally heavy.

Hope this answers your question.

```python
import requests

import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM 


device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

# Load Florence-2 and its processor (tokenization, image preprocessing, output parsing)
model = AutoModelForCausalLM.from_pretrained("microsoft/Florence-2-large", torch_dtype=torch_dtype, trust_remote_code=True).to(device)
processor = AutoProcessor.from_pretrained("microsoft/Florence-2-large", trust_remote_code=True)

prompt = "<OD>"

url = "https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(text=prompt, images=image, return_tensors="pt").to(device, torch_dtype)

# This is the part I want to run with onnx
generated_ids = model.generate(
    input_ids=inputs["input_ids"],
    pixel_values=inputs["pixel_values"],
    max_new_tokens=1024,
    num_beams=3,
    do_sample=False
)

generated_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]

# post_process_generation parses the raw generated text into structured output
# (bounding boxes and labels for the <OD> task)
parsed_answer = processor.post_process_generation(generated_text, task="<OD>", image_size=(image.width, image.height))

print(parsed_answer)
```
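
For the part that is already exported (Step 1), a minimal sketch of running the vision tower with onnxruntime could look like the snippet below. It continues from the code above (it reuses `inputs` and `torch`); the file name `vision_tower.onnx` and the single input/output tensor are assumptions, so check them against your own export. This only gives you the image features; the `model.generate` call still needs the language model from Steps 2 and 3.

```python
import onnxruntime as ort

# Assumed file name -- point this at your own exported vision tower.
session = ort.InferenceSession("vision_tower.onnx", providers=["CPUExecutionProvider"])

# Reuse the processor output from the snippet above; ONNX Runtime expects numpy arrays.
pixel_values = inputs["pixel_values"].to(torch.float32).cpu().numpy()

input_name = session.get_inputs()[0].name    # e.g. "pixel_values" -- check your export
output_name = session.get_outputs()[0].name  # the image features

(image_features,) = session.run([output_name], {input_name: pixel_values})
print(image_features.shape)
```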
