Update README.md
README.md CHANGED
@@ -6,7 +6,7 @@ The BART model was pre-trained on the CNN-DailyMail dataset, but it was re-train
 
 According to huggingface, BART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text.
 
-BART is particularly effective when fine-tuned for
+BART is particularly effective when fine-tuned for summarization on the Amazon Review data, which hosts a large collection of reviews.
 
 ## Intended uses & limitations
 
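For context, a minimal sketch of how a fine-tuned BART summarization checkpoint like this one is typically used with the Hugging Face `transformers` pipeline; the model ID and the sample review below are placeholders and are not part of this commit.

```python
# Minimal usage sketch (assumed workflow, not part of this commit):
# load a BART summarization checkpoint and summarize a product review.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="facebook/bart-large-cnn",  # placeholder ID; substitute the Amazon-review fine-tuned checkpoint
)

review = (
    "I bought this blender a month ago. It crushes ice easily and is simple "
    "to clean, but the lid does not seal well and it is quite loud."
)

summary = summarizer(review, max_length=60, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```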