Update README.md
README.md
tags:
- vision
base_model:
- microsoft/Phi-3.5-vision-instruct
---

# Phi-3.5-vision-instruct-ov-fp16

* Model creator: [Microsoft](https://huggingface.co/microsoft)
* Original model: [Phi-3.5-vision-instruct](https://huggingface.co/microsoft/Phi-3.5-vision-instruct)

## Description

This is the [microsoft/Phi-3.5-vision-instruct](https://huggingface.co/microsoft/Phi-3.5-vision-instruct) model converted to the [OpenVINO™ IR](https://docs.openvino.ai/2024/documentation/openvino-ir-format.html) (Intermediate Representation) format.
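
If you want to reproduce a conversion like this one, Optimum Intel can export the original checkpoint to OpenVINO IR. The snippet below is a minimal sketch; the output directory name and the export flags are illustrative assumptions, not the exact settings used to produce this repository:

```
from optimum.intel.openvino import OVModelForVisualCausalLM

# Convert the original PyTorch checkpoint to OpenVINO IR on the fly.
# load_in_8bit=False skips the default weight quantization; the exported IR
# typically stores weights in FP16 (assumption based on the default export behavior).
model = OVModelForVisualCausalLM.from_pretrained(
    "microsoft/Phi-3.5-vision-instruct",
    export=True,
    load_in_8bit=False,
    trust_remote_code=True,
)
model.save_pretrained("Phi-3.5-vision-instruct-ov-fp16")  # writes the .xml/.bin IR files
```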

## Compatibility

The provided OpenVINO™ IR model is compatible with:

* OpenVINO version 2025.0.0 and higher
* Optimum Intel 1.21.0 and higher
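
To confirm that an environment meets these requirements, the installed package versions can be checked with Python's standard `importlib.metadata` (a minimal sketch; it assumes the packages were installed with pip under their usual distribution names):

```
from importlib.metadata import version

# Print the installed versions of the OpenVINO runtime and Optimum Intel packages.
print("openvino:", version("openvino"))
print("optimum-intel:", version("optimum-intel"))
```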

## Running Model Inference with [Optimum Intel](https://huggingface.co/docs/optimum/intel/index)

1. Install packages required for using [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) integration with the OpenVINO backend:

```
pip install --pre -U --extra-index-url https://storage.openvinotoolkit.org/simple/wheels/pre-release openvino_tokenizers openvino

pip install git+https://github.com/huggingface/optimum-intel.git
```

2. Run model inference:

```
from PIL import Image
import requests
from optimum.intel.openvino import OVModelForVisualCausalLM
from transformers import AutoProcessor, TextStreamer

model_id = "OpenVINO/Phi-3.5-vision-instruct-fp16-ov"

# Load the processor and the OpenVINO model from the Hugging Face Hub.
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
ov_model = OVModelForVisualCausalLM.from_pretrained(model_id, trust_remote_code=True)

prompt = "<|image_1|>\nWhat is unusual about this picture?"

# Download an example image.
url = "https://github.com/openvinotoolkit/openvino_notebooks/assets/29454499/d5fbbd1a-d484-415c-88cb-9986625b7b11"
image = Image.open(requests.get(url, stream=True).raw)

# Prepare the multimodal inputs for the model.
inputs = ov_model.preprocess_inputs(text=prompt, image=image, processor=processor)

generation_args = {
    "max_new_tokens": 50,
    "temperature": 0.0,
    "do_sample": False,
    "streamer": TextStreamer(processor.tokenizer, skip_prompt=True, skip_special_tokens=True),
}

generate_ids = ov_model.generate(
    **inputs,
    eos_token_id=processor.tokenizer.eos_token_id,
    **generation_args,
)

# Strip the prompt tokens and decode only the newly generated part.
generate_ids = generate_ids[:, inputs['input_ids'].shape[1]:]
response = processor.batch_decode(
    generate_ids,
    skip_special_tokens=True,
    clean_up_tokenization_spaces=False,
)[0]
```
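
The decoded `response` string contains the generated answer; `print(response)` shows it (the streamer above already prints tokens as they are generated). Inference runs on CPU by default. If an Intel GPU is visible to OpenVINO, the model can be loaded on it instead; the snippet below is a sketch assuming the usual Optimum Intel device selection via the `device` argument, so keep the default if no GPU is present:

```
# Sketch: load the same model on an Intel GPU instead of the default CPU device.
# Assumes a GPU device is available to OpenVINO; otherwise this will fail at compile time.
ov_model_gpu = OVModelForVisualCausalLM.from_pretrained(
    model_id,
    device="GPU",
    trust_remote_code=True,
)
```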

## Limitations

Check the original [model card](https://huggingface.co/microsoft/Phi-3.5-vision-instruct) for limitations.

## Legal information

The original model is distributed under the [MIT](https://huggingface.co/microsoft/Phi-3.5-vision-instruct/blob/main/LICENSE) license. More details can be found in the [original model card](https://huggingface.co/microsoft/Phi-3.5-vision-instruct).