---
library_name: transformers
pipeline_tag: image-text-to-text
---
Ferret-UI is the first UI-centric multimodal large language model (MLLM) designed for referring, grounding, and reasoning tasks.
Built on Gemma-2B and Llama-3-8B, it is capable of executing complex UI tasks.
This is the **Llama-3-8B** version of Ferret-UI, introduced in [this paper](https://arxiv.org/pdf/2404.05719) by Apple.
## How to Use 🤗📱
You will first need to download `builder.py`, `conversation.py`, `inference.py`, `model_UI.py`, and `mm_utils.py` locally:
```bash
wget https://huggingface.co./jadechoghari/Ferret-UI-Gemma2b/raw/main/conversation.py
wget https://huggingface.co./jadechoghari/Ferret-UI-Gemma2b/raw/main/builder.py
wget https://huggingface.co./jadechoghari/Ferret-UI-Gemma2b/raw/main/inference.py
wget https://huggingface.co./jadechoghari/Ferret-UI-Gemma2b/raw/main/model_UI.py
wget https://huggingface.co./jadechoghari/Ferret-UI-Gemma2b/raw/main/mm_utils.py
```
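These scripts assume a working `transformers` + PyTorch environment (the model card's `library_name` is `transformers`). The dependency list below is an assumption, not pinned by the repository:

```bash
pip install transformers torch pillow
```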
### Usage:
```python
from inference import inference_and_run

image_path = "appstore_reminders.png"
prompt = "Describe the image in detail"

# Call the function without a box; pass the Llama-8b weights explicitly
processed_image, inference_text = inference_and_run(
    image_path=image_path,
    prompt=prompt,
    conv_mode="ferret_llama_3",
    model_path="jadechoghari/Ferret-UI-Llama8b"
)

# Output the model's answer
print("Inference Text:", inference_text)
```
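`inference_and_run` returns both the processed image and the model's answer. Assuming the returned image is a PIL image (an assumption; check `inference.py` for the actual return type), it can be saved directly:

```python
# Assumption: processed_image is a PIL.Image.Image (verify in inference.py)
if processed_image is not None:
    processed_image.save("appstore_reminders_processed.png")
```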
```python
# Task with a bounding box
image_path = "appstore_reminders.png"
prompt = "What's inside the selected region?"
box = [189, 906, 404, 970]  # region of interest in [x1, y1, x2, y2] coordinates
processed_image, inference_text = inference_and_run(
image_path=image_path,
prompt=prompt,
conv_mode="ferret_llama_3",
model_path="jadechoghari/Ferret-UI-Llama8b",
box=box
)
# Output the inference text and optionally save the processed image
print("Inference Text:", inference_text)
```
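For grounding tasks (locating UI elements), append one of the following templates to your query so the model answers with bounding boxes: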
```python
# GROUNDING PROMPTS
GROUNDING_TEMPLATES = [
'\nProvide the bounding boxes of the mentioned objects.',
'\nInclude the coordinates for each mentioned object.',
'\nLocate the objects with their coordinates.',
'\nAnswer in [x1, y1, x2, y2] format.',
'\nMention the objects and their locations using the format [x1, y1, x2, y2].',
'\nDraw boxes around the mentioned objects.',
'\nUse boxes to show where each thing is.',
'\nTell me where the objects are with coordinates.',
'\nList where each object is with boxes.',
'\nShow me the regions with boxes.'
]
```
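A minimal sketch of a grounding query, reusing `inference_and_run` from above; the referring expression and image path are placeholders:

```python
# Hypothetical grounding query: the element name is illustrative
prompt = "Find the Reminders app icon" + GROUNDING_TEMPLATES[0]
processed_image, inference_text = inference_and_run(
    image_path="appstore_reminders.png",
    prompt=prompt,
    conv_mode="ferret_llama_3",
    model_path="jadechoghari/Ferret-UI-Llama8b"
)

# The answer should include coordinates in [x1, y1, x2, y2] format
print("Inference Text:", inference_text)
```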