This is the 8-bit quantized version of Facebook's mbart model.
According to the abstract, MBART is a sequence-to-sequence denoising auto-encoder pretrained on large-scale monolingual corpora in many languages using the BART objective. mBART is one of the first methods for pretraining a complete sequence-to-sequence model by denoising full texts in multiple languages, while previous approaches have focused only on the encoder, decoder, or reconstructing parts of the text.
The authors' code can be found [here](https://github.com/facebookresearch/fairseq/tree/main/examples/mbart).
## Usage info
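A minimal loading sketch with the Hugging Face `transformers` library is shown below. The repository id is a placeholder (the original text does not name the exact checkpoint this quantization is based on; `facebook/mbart-large-cc25` is used here as a stand-in), and the helper name `generate_from` is ours, not part of any API.

```python
# Hypothetical base checkpoint id -- replace with this repo's actual model id.
MODEL_ID = "facebook/mbart-large-cc25"


def generate_from(text: str, model_id: str = MODEL_ID) -> str:
    """Load an mbart checkpoint and run a single generation pass.

    Imports are deferred so the module can be loaded without pulling
    in torch/transformers until the function is actually called.
    """
    from transformers import MBartForConditionalGeneration, MBartTokenizer

    tokenizer = MBartTokenizer.from_pretrained(model_id)
    model = MBartForConditionalGeneration.from_pretrained(model_id)

    # Tokenize, generate, and decode back to a plain string.
    inputs = tokenizer(text, return_tensors="pt")
    output_ids = model.generate(**inputs, max_length=50)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

For example, `generate_from("UN Chief Says There Is No Military Solution in Syria")` would download the checkpoint on first use and return the model's generated text. For an 8-bit checkpoint you may also need a quantization-aware load path (e.g. `bitsandbytes` support in `transformers`), depending on how the weights were exported.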