internetoftim committed
Commit 4287c49
Parent(s):
2a68191
Update README.md
README.md
CHANGED
@@ -34,7 +34,7 @@ Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model c
 
 ## Model description
 
-LXMERT is a transformer model for learning vision-and-language cross-modality representations. It has a Transformer model that has three encoders: object relationship encoder, a language encoder, and a cross-modality encoder. It is pretrained via a combination of masked language
+LXMERT is a transformer model for learning vision-and-language cross-modality representations. It consists of three encoders: an object-relationship encoder, a language encoder, and a cross-modality encoder. It is pretrained via a combination of masked language modelling, visual-language text alignment, ROI-feature regression, masked visual-attribute modelling, masked visual-object modelling, and visual-question-answering objectives. It achieves state-of-the-art results on VQA and GQA.
 
 Paper link : [LXMERT: Learning Cross-Modality Encoder Representations from Transformers](https://arxiv.org/pdf/1908.07490.pdf)
 
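As a minimal sketch (not part of the commit), the three-encoder layout described above maps onto the layer counts in Hugging Face `transformers`' `LxmertConfig`: `l_layers` for the language encoder, `r_layers` for the object-relationship encoder, and `x_layers` for the cross-modality encoder. This assumes the `transformers` library is installed; it builds a randomly initialised model from the default config, so no checkpoint download is needed.

```python
# Sketch: inspect LXMERT's three-encoder structure via its config.
# Assumes the Hugging Face `transformers` package is installed.
from transformers import LxmertConfig, LxmertModel

config = LxmertConfig()      # library defaults follow the paper: 9 / 5 / 5 layers
model = LxmertModel(config)  # randomly initialised, no weights downloaded

print(config.l_layers)  # language encoder layers
print(config.r_layers)  # object-relationship encoder layers
print(config.x_layers)  # cross-modality encoder layers
```

To use the IPU-trained checkpoint instead, the config and weights would be loaded with `from_pretrained` rather than constructed from defaults.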