---
license: apache-2.0
language:
- en
- de
- es
- fr
- it
- ja
- ko
- pl
- ru
- tr
- zh
---
---
<h1 align="center">UForm</h1>
<h3 align="center">
Multi-Modal Inference Library<br/>
For Semantic Search Applications<br/>
</h3>

---

UForm is a Multi-Modal Inference package designed to encode Multi-Lingual Texts, Images, and, soon, Audio, Video, and Documents into a shared vector space!
It extends the `transformers` package to support Mid-fusion Models.

This is the model card of the __Multilingual model__ with:

* a 12-layer BERT (8 layers for unimodal encoding, the remaining 4 for multimodal encoding)
* a ViT-B/16 (image resolution is 224x224)

The model was trained on a balanced multilingual dataset.

If you need an English model, check [this one](https://huggingface.co/unum-cloud/uform-vl-english).

## Installation

```bash
pip install uform
```
43
+
44
+ ## Usage
45
+
46
+ To load the model:
47
+
48
+ ```python
49
+ import uform
50
+
51
+ model = uform.get_model('unum-cloud/uform-vl-english')
52
+ ```

To encode data:

```python
from PIL import Image

text = 'a small red panda in a zoo'
image = Image.open('red_panda.jpg')

image_data = model.preprocess_image(image)
text_data = model.preprocess_text(text)

image_embedding = model.encode_image(image_data)
text_embedding = model.encode_text(text_data)
joint_embedding = model.encode_multimodal(image=image_data, text=text_data)
```
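
Since this checkpoint is multilingual, the same calls work for captions in any of the supported languages. A minimal sketch (the translated captions below are illustrative):

```python
import torch.nn.functional as F

# The same caption in three of the supported languages (illustrative examples)
captions = [
    'a small red panda in a zoo',            # en
    'ein kleiner roter Panda in einem Zoo',  # de
    'маленькая красная панда в зоопарке',    # ru
]

for caption in captions:
    caption_embedding = model.encode_text(model.preprocess_text(caption))
    # Translations of the same caption should score similarly against the image
    print(caption, F.cosine_similarity(image_embedding, caption_embedding).item())
```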

To get features:

```python
image_features, image_embedding = model.encode_image(image_data, return_features=True)
text_features, text_embedding = model.encode_text(text_data, return_features=True)
```

These features can later be used to produce joint multimodal encodings faster, as the first layers of the transformer can be skipped:

```python
joint_embedding = model.encode_multimodal(
    image_features=image_features,
    text_features=text_features,
    attention_mask=text_data['attention_mask']
)
```

There are two options to calculate the semantic compatibility between an image and a text: [Cosine Similarity](#cosine-similarity) and [Matching Score](#matching-score).

### Cosine Similarity

```python
import torch.nn.functional as F

similarity = F.cosine_similarity(image_embedding, text_embedding)
```

The `similarity` will belong to the `[-1, 1]` range, with `1` meaning a perfect match.

__Pros__:

- Computationally cheap.
- Only unimodal embeddings are required, and unimodal encoding is faster than joint encoding.
- Suitable for retrieval in large collections (see the sketch after this list).

__Cons__:

- Takes into account only coarse-grained features.

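For instance, retrieval over a collection reduces to a single matrix product once the image embeddings are precomputed. A minimal sketch, assuming `images` is a list of `PIL.Image` objects (the variable names are illustrative):

```python
import torch
import torch.nn.functional as F

# Offline: embed and L2-normalize the whole image collection once
image_embeddings = torch.cat(
    [model.encode_image(model.preprocess_image(img)) for img in images]
)
image_embeddings = F.normalize(image_embeddings, dim=1)

# Query time: embed the text and rank images by cosine similarity
query = F.normalize(model.encode_text(model.preprocess_text(text)), dim=1)
similarities = (query @ image_embeddings.T).squeeze(0)  # shape: (len(images),)
top_scores, top_indices = similarities.topk(k=min(10, len(images)))
```
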
### Matching Score

Unlike cosine similarity, unimodal embeddings are not enough here.
A joint embedding is needed, and the resulting `score` will belong to the `[0, 1]` range, with `1` meaning a perfect match.

```python
score = model.get_matching_scores(joint_embedding)
```

__Pros__:

- The joint embedding captures fine-grained features.
- Suitable for re-ranking, i.e. sorting retrieval results (see the sketch after this list).

__Cons__:

- Resource-intensive.
- Not suitable for retrieval in large collections.
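
In practice the two scores are often combined: retrieve candidates cheaply with cosine similarity, then re-rank only the top few with the matching score. A minimal sketch, reusing `top_indices` and `images` from the retrieval example above (the names are illustrative):

```python
# Prepare the query's text features once; they are reused for every candidate
text_data = model.preprocess_text(text)
text_features, _ = model.encode_text(text_data, return_features=True)

rerank_scores = []
for idx in top_indices.tolist():
    image_data = model.preprocess_image(images[idx])
    image_features, _ = model.encode_image(image_data, return_features=True)
    joint_embedding = model.encode_multimodal(
        image_features=image_features,
        text_features=text_features,
        attention_mask=text_data['attention_mask']
    )
    rerank_scores.append(model.get_matching_scores(joint_embedding).item())

# Best candidates first, now ordered by the fine-grained matching score
reranked = sorted(zip(top_indices.tolist(), rerank_scores), key=lambda p: p[1], reverse=True)
```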