luluw committed on
Commit
8a81b0d
1 Parent(s): f5cc056

End of Training

Files changed (1): README.md +87 -0
README.md ADDED
@@ -0,0 +1,87 @@
---
library_name: transformers
license: mit
base_model: facebook/bart-large-cnn
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-finetuned
  results: []
datasets:
- FiscalNote/billsum
pipeline_tag: summarization
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# bart-large-cnn-finetuned

This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the [FiscalNote/billsum](https://huggingface.co/datasets/FiscalNote/billsum) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1553
- Rouge1: 51.9605
- Rouge2: 36.2784
- Rougel: 44.1511
- Rougelsum: 47.1043
- Gen Len: 63.9903

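The Rouge1 score above measures unigram overlap between generated and reference summaries. As a toy illustration of the idea (the reported scores come from the `rouge` metric via the `rouge_score` package, which also applies stemming; this sketch is only the bare F1 computation):

```python
from collections import Counter

def rouge1_f(pred: str, ref: str) -> float:
    """Toy ROUGE-1 F1: unigram overlap between prediction and reference.

    Simplified for illustration; the real rouge_score implementation
    additionally handles tokenization and stemming.
    """
    p, r = Counter(pred.split()), Counter(ref.split())
    overlap = sum((p & r).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(p.values())
    recall = overlap / sum(r.values())
    return 2 * precision * recall / (precision + recall)

print(round(rouge1_f("the bill amends the tax code",
                     "the bill changes the tax code"), 3))  # -> 0.833
```

Rouge2 applies the same idea to bigrams, and RougeL/RougeLsum to longest common subsequences.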
## Model description

[facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) fine-tuned for abstractive summarization on [FiscalNote/billsum](https://huggingface.co/datasets/FiscalNote/billsum), a dataset of US Congressional and California state bills paired with human-written summaries.

## Intended uses & limitations

Intended for summarizing long-form English text, particularly legislative documents; performance on other domains has not been evaluated. Inputs longer than the encoder's 1024-token limit are truncated, so very long bills may lose information unless they are chunked first.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3
- mixed_precision_training: Native AMP

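With the `linear` scheduler and 1000 warmup steps, the learning rate ramps from 0 to 2e-05 and then decays linearly back to 0 over the remaining steps (about 5000 total, per the results table below). A minimal sketch of that schedule (illustrative, not the exact `transformers` implementation):

```python
def lr_at(step: int, base_lr: float = 2e-5, warmup: int = 1000, total: int = 5000) -> float:
    """Linear warmup followed by linear decay to zero ('linear' scheduler)."""
    if step < warmup:
        return base_lr * step / warmup  # ramp up during warmup
    # decay linearly from base_lr at end of warmup to 0 at `total`
    return base_lr * max(0.0, (total - step) / (total - warmup))

print(lr_at(500))   # halfway through warmup -> 1e-05
print(lr_at(1000))  # peak -> 2e-05
print(lr_at(5000))  # end of training -> 0.0
```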
### Training results

| Train Loss | Step | Val Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|:----------:|:----:|:--------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.4735     | 1000 | 1.3306   | 50.6543 | 33.9684 | 42.2550 | 45.4452   | 63.9983 |
| 1.3146     | 2000 | 1.2376   | 51.0888 | 34.9554 | 42.9847 | 45.8933   | 63.9903 |
| 1.1542     | 3000 | 1.1874   | 51.5755 | 35.6875 | 43.6806 | 46.5762   | 63.9800 |
| 1.0917     | 4000 | 1.1714   | 51.8612 | 36.1809 | 44.0608 | 47.0279   | 63.9870 |
| 1.0380     | 5000 | 1.1553   | 51.9605 | 36.2784 | 44.1511 | 47.1043   | 63.9903 |

## Usage

```python
from transformers import pipeline

# Load the fine-tuned checkpoint (use the full Hub id if loading from the Hub)
summarizer = pipeline("summarization", model="bart-large-cnn-finetuned")

text = """
The paper "Attention is All You Need" revolutionized the field of natural language processing (NLP) by introducing the Transformer architecture, which relies solely on attention mechanisms to model long-range dependencies in sequential data. Prior to this, models like recurrent neural networks (RNNs) and convolutional neural networks (CNNs) were the primary tools for sequence modeling, but they suffered from limitations such as difficulty in parallelization and the vanishing gradient problem. The Transformer, however, breaks free from these constraints by using a self-attention mechanism, which allows it to attend to different parts of a sequence simultaneously, leading to more efficient training and better performance on tasks such as machine translation, text summarization, and language modeling.
The core innovation of the Transformer model lies in its multi-head self-attention mechanism. Unlike RNNs that process sequences step-by-step, the Transformer processes the entire sequence at once by applying self-attention to every word or token. This allows each token to weigh the relevance of other tokens in the sequence, giving the model a global understanding of context. Multi-head attention refers to applying multiple attention layers in parallel, enabling the model to focus on different parts of the input sequence simultaneously. This enhances the model's ability to capture various relationships and nuances in the data.
The Transformer consists of an encoder-decoder structure. The encoder takes in the input sequence, computes self-attention to understand relationships between tokens, and generates a context-aware representation. The decoder, which also incorporates self-attention, generates the output sequence one token at a time by attending to both the previously generated tokens and the encoder's output. This architecture, coupled with position-wise feed-forward networks and layer normalization, makes the Transformer highly scalable and efficient.
Another significant contribution of the paper is the introduction of positional encoding. Since the Transformer lacks the inherent sequential nature of RNNs, it cannot infer the order of tokens from the architecture itself. To overcome this, the authors introduced positional encodings, which are added to the input embeddings to provide the model with information about the relative position of tokens. These encodings allow the model to maintain a sense of order in the data without explicitly processing tokens sequentially.
The original Transformer model proposed in Attention is All You Need had six layers each in both the encoder and decoder. Each layer consists of multi-head attention and feed-forward layers, with residual connections and normalization. The model was trained using the Adam optimizer and applied to machine translation tasks, where it demonstrated state-of-the-art performance, surpassing previous models like LSTMs and GRUs.
One of the key benefits of the Transformer is its ability to parallelize training, as it does not rely on sequential data processing like RNNs. This parallelism allows it to leverage modern GPU architectures effectively, leading to faster training times and the ability to scale to much larger datasets. Furthermore, Transformers handle long-range dependencies better than previous models because self-attention allows every token to interact with every other token in the sequence, regardless of their distance from each other.
"""

# The summarization pipeline returns a list of dicts keyed by "summary_text"
print(summarizer(text, max_new_tokens=128)[0]["summary_text"])
```

This produces a summary along these lines:

> Attention is All You Need is a paper that revolutionized the field of natural language processing (NLP) by introducing the Transformer architecture, which relies solely on attention mechanisms to model long-range dependencies in sequential data. The Transformer consists of an encoder-decoder structure: the encoder takes in the input sequence, computes self-attention to understand relationships between tokens, and generates a context-aware representation; and the decoder generates the output sequence one token at a time by attending to both the previously generated tokens and encoder output.

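Because the BART encoder accepts at most 1024 tokens, documents longer than that are silently truncated by the pipeline. A minimal word-level chunking sketch for longer bills (an approximation; real use should count tokenizer tokens rather than words, and the 900-word budget is an assumed safety margin):

```python
def chunk_words(text: str, max_words: int = 900) -> list[str]:
    """Split text into word-based chunks that fit under the model's input limit.

    Word counts only approximate token counts; use the model's tokenizer
    for exact lengths. Each chunk can then be summarized separately.
    """
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

chunks = chunk_words("word " * 2000)
print(len(chunks), len(chunks[0].split()))  # 3 chunks; the first holds 900 words
```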
### Framework versions

- Transformers 4.44.2
- Pytorch 2.2.1+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1