Dedicated Inference Endpoints for Idefics2-8b

by zesquirrelnator

Hey team,

I'm running into the following error when attempting to deploy a dedicated inference endpoint:

Warning: deploying this model may fail because a "handler.py" file was not found in the repository. Try selecting a different model or creating a custom handler.

What would be the appropriate approach to creating or obtaining a handler.py file that can handle text+image input?

@Leyo @m-ric Do you guys have any thoughts about this?

Same issue here.

I also got this error: ValueError: The checkpoint you are trying to load has model type idefics2 but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.

Have you upgraded Transformers to the latest version?
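If you are running the model yourself rather than on the managed service, here is a quick sanity check (as far as I know, idefics2 support landed in Transformers v4.40.0, so anything older raises that error):

  # Check the installed version; idefics2 support was added around
  # Transformers v4.40.0, so older installs raise the error above.
  import transformers

  print(transformers.__version__)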

The thing is, I am trying to run it on the Hugging Face inference service. According to the model description it supports that service, so I can't change the Transformers version myself.

It is true that I get the notification that the model repository does not have a handler.py file.

I honestly don't know if there is a good tutorial on how to create your own handler.py file and put a model into production. I have no idea how to do it.

Yes, you need to create a custom handler, @damianGil.
https://huggingface.co/docs/inference-endpoints/guides/custom_handler would be a good place to start!
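For reference, here is a minimal sketch of what such a handler could look like for text+image input. The payload shape ("text" plus a base64-encoded "image"), the generation settings, and the use of AutoModelForVision2Seq are assumptions to adapt, not a fixed API:

  # handler.py -- a minimal sketch, not an official implementation.
  # Assumes a payload like {"inputs": {"text": ..., "image": <base64>}};
  # adapt the keys to whatever your client actually sends.
  import base64
  from io import BytesIO

  import torch
  from PIL import Image
  from transformers import AutoModelForVision2Seq, AutoProcessor

  class EndpointHandler:
      def __init__(self, path: str = ""):
          # `path` points at the local copy of the model repository.
          self.processor = AutoProcessor.from_pretrained(path)
          self.model = AutoModelForVision2Seq.from_pretrained(
              path, torch_dtype=torch.float16, device_map="auto"
          )

      def __call__(self, data: dict) -> list:
          payload = data["inputs"]
          image = Image.open(BytesIO(base64.b64decode(payload["image"])))
          # Build an idefics2-style chat prompt interleaving image and text.
          messages = [{
              "role": "user",
              "content": [
                  {"type": "image"},
                  {"type": "text", "text": payload["text"]},
              ],
          }]
          prompt = self.processor.apply_chat_template(
              messages, add_generation_prompt=True
          )
          inputs = self.processor(
              text=prompt, images=[image], return_tensors="pt"
          ).to(self.model.device)
          generated = self.model.generate(**inputs, max_new_tokens=256)
          decoded = self.processor.batch_decode(generated, skip_special_tokens=True)
          return [{"generated_text": decoded[0]}]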

I managed to deploy a dedicated inference endpoint for idefics2 by opening up the Advanced section and setting the task to Text Generation. You don’t need a custom handler for this.

As of today, the default Docker image offered for TGI is too old; you will get an error like:

Endpoint failed to start
  KeyError: 'idefics2'

To solve that, you can provide a custom Docker image so that you get the latest TGI.

  1. Change the Container type from Default to Custom
  2. Set the image to: ghcr.io/huggingface/text-generation-inference:latest
  3. Set these environment variables:
     "MAX_BATCH_PREFILL_TOKENS": "4096",
     "MAX_INPUT_LENGTH": "3072",
     "MAX_TOTAL_TOKENS": "3584",
     "MODEL_ID": "/repository"

Awesome! Thanks!
