Amalq committed
Commit 3915718 · 1 Parent(s): dc6b5a4

Update README.md

Files changed (1): README.md (+9 −22)
README.md CHANGED
@@ -1,24 +1,3 @@
- ---
- license: apache-2.0
- datasets:
- - shared_TaskA
- model-index:
- - name: flan_t5_large_chat_summary
-   results:
-   - task:
-       name: Sequence-to-sequence Language Modeling
-       type: text2text-generation
-     dataset:
-       name: shared_TaskA
-       type: shared_TaskA
-       config: shared_TaskA
-       split: train
-       args: samsum
-     metrics:
-     - name: Rouge1
-       type: rouge
-       value: 28.1748
- ---
 
 
@@ -41,4 +20,12 @@ The following hyperparameters were used during training:
  - seed: 42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
- - num_epochs: 5
+ - num_epochs: 5
+
+ ### Example Uses
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
+ tokenizer_pre = AutoTokenizer.from_pretrained("Amalq/flan_t5_large_chat_summary")
+ model_pre = AutoModelForSeq2SeqLM.from_pretrained("Amalq/flan_t5_large_chat_summary")
+ ```
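The snippet added by this commit only loads the checkpoint. A minimal sketch of actually summarizing a dialogue with it is shown below; the `summarize_chat` helper, the sample dialogue, and the `max_new_tokens` setting are illustrative assumptions, not part of the model card.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM


def summarize_chat(dialogue: str,
                   model_id: str = "Amalq/flan_t5_large_chat_summary") -> str:
    """Illustrative sketch: load the checkpoint and summarize one dialogue."""
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
    # Tokenize the chat transcript and generate a short summary.
    inputs = tokenizer(dialogue, return_tensors="pt", truncation=True)
    summary_ids = model.generate(**inputs, max_new_tokens=60)
    return tokenizer.decode(summary_ids[0], skip_special_tokens=True)


if __name__ == "__main__":
    dialogue = (
        "Anna: Are we still on for lunch tomorrow?\n"
        "Ben: Yes, 12:30 at the usual place.\n"
        "Anna: Perfect, see you then!"
    )
    print(summarize_chat(dialogue))
```

Note that the first call downloads the full flan-t5-large checkpoint, so expect a multi-gigabyte fetch before any summary is produced.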