mabrouk committed on
Commit
4ea6d25
1 Parent(s): 87901b3

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -6,7 +6,7 @@ The BART model was pre-trained on the CNN-DailyMail dataset, but it was re-train
 
 According to Hugging Face, BART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text.
 
-BART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering). This particular checkpoint has been fine-tuned on CNN Daily Mail, a large collection of text-summary pairs.
+BART is particularly effective when fine-tuned for summarization on the Amazon Review data, which hosts a large collection of reviews.
 
 ## Intended uses & limitations
 
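For context, here is a minimal sketch of how a BART summarization checkpoint like the one described in this README is typically used with the transformers pipeline API. The checkpoint ID below is a placeholder, not this repository's actual model ID, and the sample review text is invented for illustration.

```python
# Minimal usage sketch for a BART summarization checkpoint via the
# Hugging Face transformers pipeline API.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="your-username/your-bart-checkpoint",  # placeholder: substitute the actual checkpoint ID
)

# An invented Amazon-style product review to summarize.
review = (
    "I ordered these headphones last month. The sound quality is great "
    "and the battery easily lasts a full day, but the ear cushions "
    "started peeling after two weeks of light use."
)

# max_length/min_length bound the generated summary length in tokens;
# do_sample=False makes generation deterministic (greedy/beam search).
result = summarizer(review, max_length=60, min_length=10, do_sample=False)
print(result[0]["summary_text"])
```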