Commit d6e1557 (parent: 074b040) by gabrielmotablima: "update readme"

README.md (changed):
# Swin-DistilBERTimbau

**Swin-DistilBERTimbau** is a model trained on [**Flickr30K Portuguese**](https://huggingface.co/datasets/laicsiifes/flickr30k-pt-br) (a version translated using the Google Translator API)
at a resolution of 224x224 and a maximum sequence length of 512 tokens.

## Model Description

Swin-DistilBERTimbau is a Vision Encoder Decoder model that leverages the checkpoints of the [Swin Transformer](https://huggingface.co/microsoft/swin-base-patch4-window7-224)
as encoder and the checkpoints of [DistilBERTimbau](https://huggingface.co/adalbertojunior/distilbert-portuguese-cased) as decoder.
The encoder checkpoints come from a Swin Transformer pre-trained on ImageNet-1k at a resolution of 224x224.

The code used for training and evaluation is available at: https://github.com/laicsiifes/ved-transformer-caption-ptbr.

## How to Get Started with the Model
```python
# … (earlier lines of this example are unchanged in this commit and elided in the diff)
generated_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print(generated_text)
```
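The diff only shows the tail of the quick-start snippet; its setup portion is elided. For orientation, here is a minimal sketch of the usual `VisionEncoderDecoderModel` inference flow in `transformers`, not the README's exact code; the checkpoint id below is a placeholder assumption, not taken from this page:

```python
# Hedged sketch of VisionEncoderDecoder image captioning with transformers.
# MODEL_ID is a placeholder; substitute the actual Hub id of this model.
from PIL import Image
from transformers import AutoImageProcessor, AutoTokenizer, VisionEncoderDecoderModel

MODEL_ID = "your-org/swin-distilbertimbau"  # hypothetical checkpoint id

def caption_image(image_path: str, model_id: str = MODEL_ID) -> str:
    """Generate a caption for one image with a VisionEncoderDecoder checkpoint."""
    model = VisionEncoderDecoderModel.from_pretrained(model_id)
    image_processor = AutoImageProcessor.from_pretrained(model_id)
    tokenizer = AutoTokenizer.from_pretrained(model_id)

    # Preprocess the image into the pixel tensor the Swin encoder expects.
    image = Image.open(image_path).convert("RGB")
    pixel_values = image_processor(images=image, return_tensors="pt").pixel_values

    # Autoregressively decode a caption with the DistilBERTimbau decoder.
    generated_ids = model.generate(pixel_values, max_length=512)
    return tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```

Calling `caption_image("photo.jpg")` downloads the checkpoint on first use and returns a single caption string.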
## Results

The evaluation metrics CIDEr-D, BLEU@4, ROUGE-L, METEOR and BERTScore are abbreviated as C, B@4, RL, M and BS, respectively.

|Model|Training|Evaluation|C|B@4|RL|M|BS|
|-----|--------|----------|-------|------|-------|------|---------|
|Swin-DistilBERTimbau|Flickr30K Portuguese|Flickr30K Portuguese|66.73|24.65|39.98|44.71|72.30|
|Swin-GPT-2|Flickr30K Portuguese|Flickr30K Portuguese|64.71|23.15|39.39|44.36|71.70|
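For readers unfamiliar with the B@4 column, here is a self-contained sketch of sentence-level BLEU@4 under its standard definition (geometric mean of modified 1- to 4-gram precisions times a brevity penalty). Reported scores are normally computed with an established corpus-level implementation with smoothing, not ad-hoc code like this:

```python
# Illustrative sentence-level BLEU@4; an assumption of the standard formula,
# not the evaluation script used for the table above.
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu4(candidate, reference):
    """BLEU@4 for one candidate against one reference (both token lists)."""
    precisions = []
    for n in range(1, 5):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        overlap = sum((cand & ref).values())      # clipped n-gram matches
        total = max(sum(cand.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:                      # no smoothing: any zero kills the score
        return 0.0
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / 4)
    # Brevity penalty: penalize candidates shorter than the reference.
    c, r = len(candidate), len(reference)
    bp = 1.0 if c > r else math.exp(1 - r / max(c, 1))
    return bp * geo_mean
```

Identical sentences of four or more tokens score 1.0, fully disjoint sentences score 0.0, and partial overlaps fall in between.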
## BibTeX entry and citation info

```bibtex
Coming Soon
```