---
language:
- en
- fr
- ro
- de
- multilingual
pipeline_tag: image-to-text
tags:
- image-captioning
license: apache-2.0
---

# Model card for Pix2Struct - Finetuned on TextCaps - Large version

![model_image](https://s3.amazonaws.com/moonup/production/uploads/1678713353867-62441d1d9fdefb55a0b7d12c.png)

# Table of Contents

0. [TL;DR](#tldr)
1. [Using the model](#using-the-model)
2. [Results](#results)
3. [Contribution](#contribution)
4. [Citation](#citation)

# TL;DR

Pix2Struct is an image encoder - text decoder model that is trained on image-text pairs for various tasks, including image captioning and visual question answering. The full list of available models can be found in Table 1 of the paper:

![Table 1 - paper](https://s3.amazonaws.com/moonup/production/uploads/1678712985040-62441d1d9fdefb55a0b7d12c.png)

The abstract of the paper states that:
> Visually-situated language is ubiquitous—sources range from textbooks with diagrams to web pages with images and tables, to mobile apps with buttons and forms. Perhaps due to this diversity, previous work has typically relied on domain-specific recipes with limited sharing of the underlying data, model architectures, and objectives. We present Pix2Struct, a pretrained image-to-text model for purely visual language understanding, which can be finetuned on tasks containing visually-situated language. Pix2Struct is pretrained by learning to parse masked screenshots of web pages into simplified HTML. The web, with its richness of visual elements cleanly reflected in the HTML structure, provides a large source of pretraining data well suited to the diversity of downstream tasks. Intuitively, this objective subsumes common pretraining signals such as OCR, language modeling, image captioning. In addition to the novel pretraining strategy, we introduce a variable-resolution input representation and a more flexible integration of language and vision inputs, where language prompts such as questions are rendered directly on top of the input image. For the first time, we show that a single pretrained model can achieve state-of-the-art results in six out of nine tasks across four domains: documents, illustrations, user interfaces, and natural images.

# Using the model

## Converting from T5x to Hugging Face

You can use the [`convert_pix2struct_checkpoint_to_pytorch.py`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/pix2struct/convert_pix2struct_checkpoint_to_pytorch.py) script as follows:
```bash
python convert_pix2struct_checkpoint_to_pytorch.py --t5x_checkpoint_path PATH_TO_T5X_CHECKPOINTS --pytorch_dump_path PATH_TO_SAVE
```
If you are converting a large model, run:
```bash
python convert_pix2struct_checkpoint_to_pytorch.py --t5x_checkpoint_path PATH_TO_T5X_CHECKPOINTS --pytorch_dump_path PATH_TO_SAVE --use-large
```
Once saved, you can push your converted model with the following snippet:
```python
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

# Load the converted checkpoint and its processor from the local directory
model = Pix2StructForConditionalGeneration.from_pretrained(PATH_TO_SAVE)
processor = Pix2StructProcessor.from_pretrained(PATH_TO_SAVE)

# Push both to the Hugging Face Hub
model.push_to_hub("USERNAME/MODEL_NAME")
processor.push_to_hub("USERNAME/MODEL_NAME")
```
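
Once pushed, the model and processor can be loaded back directly from the Hub (here `USERNAME/MODEL_NAME` is just the placeholder repository id used above):
```python
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

# Download the model and processor from the Hub repository created by push_to_hub
model = Pix2StructForConditionalGeneration.from_pretrained("USERNAME/MODEL_NAME")
processor = Pix2StructProcessor.from_pretrained("USERNAME/MODEL_NAME")
```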

## Running the model

### In full precision, on CPU:

You can run the model in full precision on CPU:
```python
import requests
from PIL import Image
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

url = "https://www.ilankelman.org/stopsigns/australia.jpg"
image = Image.open(requests.get(url, stream=True).raw)

model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-textcaps-large")
processor = Pix2StructProcessor.from_pretrained("google/pix2struct-textcaps-large")

# image only
inputs = processor(images=image, return_tensors="pt")

predictions = model.generate(**inputs)
print(processor.decode(predictions[0], skip_special_tokens=True))
>>> A street scene with a sign that says "STOP".
```

### In full precision, on GPU:

You can run the model in full precision on GPU:
```python
import requests
from PIL import Image
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

url = "https://www.ilankelman.org/stopsigns/australia.jpg"
image = Image.open(requests.get(url, stream=True).raw)

model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-textcaps-large").to("cuda")
processor = Pix2StructProcessor.from_pretrained("google/pix2struct-textcaps-large")

# image only
inputs = processor(images=image, return_tensors="pt").to("cuda")

predictions = model.generate(**inputs)
print(processor.decode(predictions[0], skip_special_tokens=True))
>>> A street scene with a sign that says "STOP".
```

### In half precision, on GPU:

You can run the model in half precision on GPU:
```python
import requests
import torch

from PIL import Image
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

url = "https://www.ilankelman.org/stopsigns/australia.jpg"
image = Image.open(requests.get(url, stream=True).raw)

model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-textcaps-large", torch_dtype=torch.bfloat16).to("cuda")
processor = Pix2StructProcessor.from_pretrained("google/pix2struct-textcaps-large")

# image only
inputs = processor(images=image, return_tensors="pt").to("cuda", torch.bfloat16)

predictions = model.generate(**inputs)
print(processor.decode(predictions[0], skip_special_tokens=True))
>>> A street scene with a sign that says "STOP".
```
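
If your GPU does not support `bfloat16`, the same recipe should work with `float16`; this variant is not covered by the original card, so treat it as an untested sketch:
```python
import requests
import torch

from PIL import Image
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

url = "https://www.ilankelman.org/stopsigns/australia.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# float16 instead of bfloat16 (assumption: the model stays numerically stable in fp16)
model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-textcaps-large", torch_dtype=torch.float16).to("cuda")
processor = Pix2StructProcessor.from_pretrained("google/pix2struct-textcaps-large")

inputs = processor(images=image, return_tensors="pt").to("cuda", torch.float16)

predictions = model.generate(**inputs)
print(processor.decode(predictions[0], skip_special_tokens=True))
```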

### Use different sequence length

This model has been trained on a sequence length of `4096`. You can reduce the sequence length for more memory-efficient inference, but you may observe some performance degradation for small sequence lengths (<1024). Just pass `max_patches` when calling the processor:
```python
inputs = processor(images=image, return_tensors="pt", max_patches=1024)
```
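
As a quick sanity check (not part of the original card), you can inspect the processor output to see how `max_patches` bounds the size of the flattened patch sequence that is fed to the model:
```python
import requests
from PIL import Image
from transformers import Pix2StructProcessor

url = "https://www.ilankelman.org/stopsigns/australia.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = Pix2StructProcessor.from_pretrained("google/pix2struct-textcaps-large")

# Compare the tensor shapes produced for different sequence lengths
for max_patches in (512, 1024, 2048, 4096):
    inputs = processor(images=image, return_tensors="pt", max_patches=max_patches)
    print(max_patches, {name: tuple(tensor.shape) for name, tensor in inputs.items()})
```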

### Conditional generation

You can also prepend some input text to perform conditional generation:

```python
import requests
from PIL import Image
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

url = "https://www.ilankelman.org/stopsigns/australia.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = "A picture of"

model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-textcaps-large")
processor = Pix2StructProcessor.from_pretrained("google/pix2struct-textcaps-large")

# image + text prompt
inputs = processor(images=image, text=text, return_tensors="pt")

predictions = model.generate(**inputs)
print(processor.decode(predictions[0], skip_special_tokens=True))
```

# Contribution

This model was originally contributed by Kenton Lee, Mandar Joshi et al. and added to the Hugging Face ecosystem by [Younes Belkada](https://huggingface.co/ybelkada).