Thalesian committed
Commit abf3997 (1 parent: 5c8f477)

akk-en-mt5-base

.DS_Store ADDED
Binary file (6.15 kB).
 
README.md ADDED
@@ -0,0 +1,131 @@
+ ---
+ license: apache-2.0
+ base_model: google/mt5-base
+ tags:
+ - generated_from_trainer
+ model-index:
+ - name: mt5-base-p-l-akk-en-20240709-215100
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # mt5-base-p-l-akk-en-20240709-215100
+
+ This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the None dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.1533
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 4e-05
+ - train_batch_size: 12
+ - eval_batch_size: 12
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 10
+
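For anyone trying to reproduce the run, the hyperparameters above map onto `Seq2SeqTrainingArguments` roughly as sketched below. This is a minimal sketch, not the author's training script: the output directory is hypothetical, and the 500-step evaluation and logging cadence is inferred from the results table that follows rather than stated in the card.

```python
from transformers import Seq2SeqTrainingArguments

# Minimal sketch mirroring the hyperparameters listed in the card above.
# output_dir is hypothetical; eval/logging every 500 steps is inferred
# from the results table, not stated explicitly in the card.
training_args = Seq2SeqTrainingArguments(
    output_dir="mt5-base-p-l-akk-en-20240709-215100",
    learning_rate=4e-5,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=12,
    seed=42,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="steps",
    eval_steps=500,
    logging_steps=500,
)
```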
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:------:|:-----:|:---------------:|
+ | 30.893 | 0.1326 | 500 | 5.1945 |
+ | 2.9823 | 0.2651 | 1000 | 0.6672 |
+ | 0.6668 | 0.3977 | 1500 | 0.5089 |
+ | 0.4944 | 0.5302 | 2000 | 0.3100 |
+ | 0.3002 | 0.6628 | 2500 | 0.2663 |
+ | 0.2813 | 0.7953 | 3000 | 0.2493 |
+ | 0.273 | 0.9279 | 3500 | 0.2369 |
+ | 0.2544 | 1.0604 | 4000 | 0.2304 |
+ | 0.2445 | 1.1930 | 4500 | 0.2241 |
+ | 0.2365 | 1.3256 | 5000 | 0.2190 |
+ | 0.2305 | 1.4581 | 5500 | 0.2140 |
+ | 0.2318 | 1.5907 | 6000 | 0.2108 |
+ | 0.2166 | 1.7232 | 6500 | 0.2060 |
+ | 0.2195 | 1.8558 | 7000 | 0.2029 |
+ | 0.2125 | 1.9883 | 7500 | 0.2000 |
+ | 0.2091 | 2.1209 | 8000 | 0.1963 |
+ | 0.2092 | 2.2534 | 8500 | 0.1938 |
+ | 0.2032 | 2.3860 | 9000 | 0.1915 |
+ | 0.2018 | 2.5186 | 9500 | 0.1892 |
+ | 0.2017 | 2.6511 | 10000 | 0.1870 |
+ | 0.1961 | 2.7837 | 10500 | 0.1855 |
+ | 0.2009 | 2.9162 | 11000 | 0.1841 |
+ | 0.1956 | 3.0488 | 11500 | 0.1828 |
+ | 0.1915 | 3.1813 | 12000 | 0.1807 |
+ | 0.1892 | 3.3139 | 12500 | 0.1790 |
+ | 0.1908 | 3.4464 | 13000 | 0.1773 |
+ | 0.1834 | 3.5790 | 13500 | 0.1763 |
+ | 0.1832 | 3.7116 | 14000 | 0.1744 |
+ | 0.189 | 3.8441 | 14500 | 0.1734 |
+ | 0.1848 | 3.9767 | 15000 | 0.1724 |
+ | 0.1838 | 4.1092 | 15500 | 0.1715 |
+ | 0.177 | 4.2418 | 16000 | 0.1703 |
+ | 0.1808 | 4.3743 | 16500 | 0.1692 |
+ | 0.183 | 4.5069 | 17000 | 0.1680 |
+ | 0.1753 | 4.6394 | 17500 | 0.1675 |
+ | 0.1724 | 4.7720 | 18000 | 0.1666 |
+ | 0.1782 | 4.9046 | 18500 | 0.1656 |
+ | 0.1799 | 5.0371 | 19000 | 0.1653 |
+ | 0.1725 | 5.1697 | 19500 | 0.1647 |
+ | 0.17 | 5.3022 | 20000 | 0.1635 |
+ | 0.1722 | 5.4348 | 20500 | 0.1630 |
+ | 0.1697 | 5.5673 | 21000 | 0.1625 |
+ | 0.1719 | 5.6999 | 21500 | 0.1620 |
+ | 0.1709 | 5.8324 | 22000 | 0.1611 |
+ | 0.1727 | 5.9650 | 22500 | 0.1604 |
+ | 0.1721 | 6.0976 | 23000 | 0.1598 |
+ | 0.1681 | 6.2301 | 23500 | 0.1602 |
+ | 0.1699 | 6.3627 | 24000 | 0.1596 |
+ | 0.1639 | 6.4952 | 24500 | 0.1588 |
+ | 0.1646 | 6.6278 | 25000 | 0.1584 |
+ | 0.1691 | 6.7603 | 25500 | 0.1582 |
+ | 0.1653 | 6.8929 | 26000 | 0.1574 |
+ | 0.1648 | 7.0255 | 26500 | 0.1572 |
+ | 0.1669 | 7.1580 | 27000 | 0.1569 |
+ | 0.16 | 7.2906 | 27500 | 0.1568 |
+ | 0.1622 | 7.4231 | 28000 | 0.1562 |
+ | 0.1644 | 7.5557 | 28500 | 0.1561 |
+ | 0.1674 | 7.6882 | 29000 | 0.1557 |
+ | 0.1628 | 7.8208 | 29500 | 0.1552 |
+ | 0.1619 | 7.9533 | 30000 | 0.1551 |
+ | 0.1636 | 8.0859 | 30500 | 0.1549 |
+ | 0.1629 | 8.2185 | 31000 | 0.1546 |
+ | 0.1632 | 8.3510 | 31500 | 0.1545 |
+ | 0.1641 | 8.4836 | 32000 | 0.1543 |
+ | 0.1592 | 8.6161 | 32500 | 0.1541 |
+ | 0.1573 | 8.7487 | 33000 | 0.1539 |
+ | 0.1607 | 8.8812 | 33500 | 0.1540 |
+ | 0.1651 | 9.0138 | 34000 | 0.1537 |
+ | 0.1551 | 9.1463 | 34500 | 0.1537 |
+ | 0.1621 | 9.2789 | 35000 | 0.1536 |
+ | 0.166 | 9.4115 | 35500 | 0.1534 |
+ | 0.1575 | 9.5440 | 36000 | 0.1534 |
+ | 0.1607 | 9.6766 | 36500 | 0.1534 |
+ | 0.1627 | 9.8091 | 37000 | 0.1533 |
+ | 0.1608 | 9.9417 | 37500 | 0.1533 |
+
+
+ ### Framework versions
+
+ - Transformers 4.41.2
+ - Pytorch 2.5.0.dev20240625
+ - Datasets 2.20.0
+ - Tokenizers 0.19.1
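The card's "Intended uses & limitations" section is still empty, so the following is only a hedged inference sketch. The repository id `Thalesian/akk-en-mt5-base` is inferred from the commit author and title, and the input format (ASCII-transliterated Akkadian with no task prefix) is an assumption that should be checked against the training data.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

repo_id = "Thalesian/akk-en-mt5-base"  # assumed repo id (commit author + title)
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

# Placeholder input: transliterated Akkadian ("szum-ma a-wi-lum", "if a man");
# the exact transliteration convention and any task prefix are assumptions.
inputs = tokenizer("szum-ma a-wi-lum", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=128, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```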
added_tokens.json ADDED
The diff for this file is too large to render. See raw diff
 
config.json ADDED
@@ -0,0 +1,33 @@
+ {
+   "_name_or_path": "google/mt5-base",
+   "architectures": [
+     "MT5ForConditionalGeneration"
+   ],
+   "classifier_dropout": 0.0,
+   "d_ff": 2048,
+   "d_kv": 64,
+   "d_model": 768,
+   "decoder_start_token_id": 0,
+   "dense_act_fn": "gelu_new",
+   "dropout_rate": 0.1,
+   "eos_token_id": 1,
+   "feed_forward_proj": "gated-gelu",
+   "initializer_factor": 1.0,
+   "is_encoder_decoder": true,
+   "is_gated_act": true,
+   "layer_norm_epsilon": 1e-06,
+   "model_type": "mt5",
+   "num_decoder_layers": 12,
+   "num_heads": 12,
+   "num_layers": 12,
+   "output_past": true,
+   "pad_token_id": 0,
+   "relative_attention_max_distance": 128,
+   "relative_attention_num_buckets": 32,
+   "tie_word_embeddings": false,
+   "tokenizer_class": "T5Tokenizer",
+   "torch_dtype": "float32",
+   "transformers_version": "4.41.2",
+   "use_cache": true,
+   "vocab_size": 264277
+ }
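The config describes a standard mT5-base architecture (12 encoder and 12 decoder layers, d_model 768), but with the vocabulary enlarged to 264,277 entries, consistent with the large added_tokens.json added in this commit (stock mT5-base uses 250,112). These fields can be checked without downloading the weights, as in the sketch below; the repo id is again an assumption.

```python
from transformers import AutoConfig

# Fetches only config.json, not the ~2.4 GB weights; repo id is assumed.
config = AutoConfig.from_pretrained("Thalesian/akk-en-mt5-base")
print(config.model_type, config.num_layers, config.num_decoder_layers)  # mt5 12 12
print(config.d_model, config.vocab_size)  # 768 264277
```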
generation_config.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "_from_model_config": true,
+   "decoder_start_token_id": 0,
+   "eos_token_id": 1,
+   "pad_token_id": 0,
+   "transformers_version": "4.41.2"
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:08cfd87e1b4db110ebbf2131c6ca5cb1a84cb5ec361df21a0a85bacbd086b554
+ size 2416668528
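These three lines are a Git LFS pointer, not the weights themselves: the roughly 2.4 GB safetensors file is stored on the LFS server and identified by the SHA-256 oid above. `from_pretrained` resolves this transparently; to prefetch the repository explicitly, something like the sketch below (repo id assumed) works via `huggingface_hub`.

```python
from huggingface_hub import snapshot_download

# Downloads all repo files, including the LFS-backed model.safetensors,
# into the local cache and returns the path. Repo id is assumed.
local_path = snapshot_download("Thalesian/akk-en-mt5-base")
print(local_path)
```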
special_tokens_map.json ADDED
@@ -0,0 +1,23 @@
+ {
+   "eos_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "<pad>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
spiece.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ef78f86560d809067d12bac6c09f19a462cb3af3f54d2b8acbba26e1433125d6
+ size 4309802
tokenizer_config.json ADDED
The diff for this file is too large to render. See raw diff
 
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9a7ecb7d13cbc768a6a30788a9ea473494ed30474747c85b46856124147cf989
+ size 5304