---
license: apache-2.0
datasets:
- jxu124/invig
language:
- en
---
## TiO - An Interactive Visual Grounding Model for Disambiguation
TiO is an interactive visual grounding model for disambiguation. (WIP)
## Mini-Example
```python
from transformers import AutoModel, AutoTokenizer, AutoImageProcessor
from PIL import Image
from io import BytesIO
import torch
import requests
# Load model, tokenizer, image_processor
tokenizer = AutoTokenizer.from_pretrained("jxu124/TiO", use_fast=False)
image_processor = AutoImageProcessor.from_pretrained("jxu124/TiO")
model = AutoModel.from_pretrained("jxu124/TiO", trust_remote_code=True)
model = model.to(torch.float16).cuda()  # inference is faster in float16
# Prepare example
image = Image.open(BytesIO(requests.get("http://images.cocodataset.org/val2014/COCO_val2014_000000429913.jpg").content))
text = " #instruction: guess what i want? \n #context: \"human: look that man in white! \""
# Inference
with torch.no_grad():
    pt_txt = tokenizer([text], return_tensors="pt").input_ids.cuda()
    pt_img = image_processor([image], return_tensors="pt").pixel_values.to(torch.float16).cuda()
    gen = model.generate(pt_txt, patch_images=pt_img, top_p=0.5, do_sample=True, no_repeat_ngram_size=3, max_length=256)
    print(tokenizer.batch_decode(gen, skip_special_tokens=True))
# e.g. [' is he the one who just threw the ball?']  # output varies between runs because of sampling (do_sample=True)
```
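The prompts above are plain strings with `#instruction:`, `#context:`, and optionally `#region:` fields. As a convenience, a small helper (not part of the TiO package; the function name and signature here are our own) can assemble such strings from a list of dialogue turns:

```python
# Hypothetical helper to build TiO-style prompts from dialogue turns.
# Format follows the examples in this card: " #instruction: ... \n #context: \"...\""
def build_prompt(instruction, turns, region=None):
    """turns: list of (speaker, utterance) tuples, e.g. [("human", "look that man in white!")]."""
    context = "\n".join(f"{speaker}: {utterance}" for speaker, utterance in turns)
    prompt = f" #instruction: {instruction} \n #context: \"{context}\""
    if region is not None:
        prompt += f"\n #region: {region}"
    return prompt

text = build_prompt("guess what i want?", [("human", "look that man in white!")])
print(text)
```

This produces the same string layout as the mini-example and can be reused for the Guesser, Questioner, and Oracle prompts below.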
## Other Examples (text)
Guesser (grounding):
```python
text = """ #instruction: which region does the context describe? \n #context: \"\
human: look that man in white!
agent: is he the one who just threw the ball?
human: yes. I mean the pitcher.\"
"""
```
Questioner (question generation):
```python
text = """ #instruction: guess what I want? \n #context: \"\
human: look that man in white! \"
"""
```
Oracle (answering):
```python
text = """ #instruction: answer the question based on the region. \n #context: \"\
agent: look that man in white!
human: is he the one who just threw the ball? \"
#region: <bin_847> <bin_319> <bin_923> <bin_467>
"""
```
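The `<bin_*>` tokens in the `#region:` field encode box coordinates quantized into discrete bins, in `x0 y0 x1 y1` order. A minimal sketch of converting them back to pixel coordinates, assuming an OFA-style scheme where bin indices map linearly onto the image; the bin count (1000 here) is an assumption and should be taken from the model's actual configuration:

```python
import re

# Sketch: decode "<bin_x0> <bin_y0> <bin_x1> <bin_y1>" into pixel coordinates.
# num_bins=1000 is an assumed quantization resolution, not confirmed by this card.
def bins_to_box(region_str, img_w, img_h, num_bins=1000):
    x0, y0, x1, y1 = (int(b) for b in re.findall(r"<bin_(\d+)>", region_str))
    return (x0 * img_w / num_bins, y0 * img_h / num_bins,
            x1 * img_w / num_bins, y1 * img_h / num_bins)

box = bins_to_box("<bin_847> <bin_319> <bin_923> <bin_467>", img_w=640, img_h=480)
print(box)  # (542.08, 153.12, 590.72, 224.16)
```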