mtasic85 committed
Commit 5c18ba8
1 Parent(s): 7ce7585
Files changed (2)
  1. README.md +146 -1
  2. scripts/TRAIN.md +4 -2
README.md CHANGED
@@ -26,6 +26,12 @@ tags:
 
 ![logo](./misc/logo.png)
 
+ A pretrained language model based on the Llama architecture, with about **109M** parameters. It has been trained on **9.7B** (`9,782,206,713`) tokens from more than **5.2M** (`5,285,575`) dataset rows.
+
+ This model **isn't** designed for immediate use, but rather for continued pretraining and finetuning on a downstream task. While it can handle a context length of up to **32K** (`32,768`) tokens, it was pretrained with sequences of **2K** (`2048`) tokens.
+
+ The objective is to streamline the cognitive or reasoning core while eliminating redundant knowledge from the model.
+
 [loss, val_loss](https://api.wandb.ai/links/mtasic85/ecf4l9qp)
 
 [val_ppl](https://api.wandb.ai/links/mtasic85/qsn8mz13)
@@ -34,4 +40,143 @@ tags:
 
 [learning_rate](https://api.wandb.ai/links/mtasic85/7kyopu4t)
 
- ## lm-evaluation-harness
+ ## lm-evaluation-harness
+
+ ```bash
+ litgpt evaluate --tasks 'leaderboard' --out_dir 'evaluate-0/' --batch_size 4 --dtype 'bfloat16' out/pretrain/final/
+ ```
+
+ | Tasks |Version|Filter|n-shot| Metric | |Value | |Stderr|
+ |-----------------------------------------------------------|-------|------|-----:|-----------------------|---|-----:|---|------|
+ |leaderboard | N/A| | | | | | | |
+ | - leaderboard_bbh | N/A| | | | | | | |
+ | - leaderboard_bbh_boolean_expressions | 1|none | 3|acc_norm |↑ |0.5680|± |0.0314|
+ | - leaderboard_bbh_causal_judgement | 1|none | 3|acc_norm |↑ |0.5294|± |0.0366|
+ | - leaderboard_bbh_date_understanding | 1|none | 3|acc_norm |↑ |0.1880|± |0.0248|
+ | - leaderboard_bbh_disambiguation_qa | 1|none | 3|acc_norm |↑ |0.3240|± |0.0297|
+ | - leaderboard_bbh_formal_fallacies | 1|none | 3|acc_norm |↑ |0.4720|± |0.0316|
+ | - leaderboard_bbh_geometric_shapes | 1|none | 3|acc_norm |↑ |0.0280|± |0.0105|
+ | - leaderboard_bbh_hyperbaton | 1|none | 3|acc_norm |↑ |0.5160|± |0.0317|
+ | - leaderboard_bbh_logical_deduction_five_objects | 1|none | 3|acc_norm |↑ |0.1760|± |0.0241|
+ | - leaderboard_bbh_logical_deduction_seven_objects | 1|none | 3|acc_norm |↑ |0.1360|± |0.0217|
+ | - leaderboard_bbh_logical_deduction_three_objects | 1|none | 3|acc_norm |↑ |0.3480|± |0.0302|
+ | - leaderboard_bbh_movie_recommendation | 1|none | 3|acc_norm |↑ |0.2280|± |0.0266|
+ | - leaderboard_bbh_navigate | 1|none | 3|acc_norm |↑ |0.4800|± |0.0317|
+ | - leaderboard_bbh_object_counting | 1|none | 3|acc_norm |↑ |0.0640|± |0.0155|
+ | - leaderboard_bbh_penguins_in_a_table | 1|none | 3|acc_norm |↑ |0.2329|± |0.0351|
+ | - leaderboard_bbh_reasoning_about_colored_objects | 1|none | 3|acc_norm |↑ |0.1240|± |0.0209|
+ | - leaderboard_bbh_ruin_names | 1|none | 3|acc_norm |↑ |0.2400|± |0.0271|
+ | - leaderboard_bbh_salient_translation_error_detection | 1|none | 3|acc_norm |↑ |0.1560|± |0.0230|
+ | - leaderboard_bbh_snarks | 1|none | 3|acc_norm |↑ |0.4607|± |0.0375|
+ | - leaderboard_bbh_sports_understanding | 1|none | 3|acc_norm |↑ |0.4560|± |0.0316|
+ | - leaderboard_bbh_temporal_sequences | 1|none | 3|acc_norm |↑ |0.2320|± |0.0268|
+ | - leaderboard_bbh_tracking_shuffled_objects_five_objects | 1|none | 3|acc_norm |↑ |0.2000|± |0.0253|
+ | - leaderboard_bbh_tracking_shuffled_objects_seven_objects| 1|none | 3|acc_norm |↑ |0.1520|± |0.0228|
+ | - leaderboard_bbh_tracking_shuffled_objects_three_objects| 1|none | 3|acc_norm |↑ |0.3160|± |0.0295|
+ | - leaderboard_bbh_web_of_lies | 1|none | 3|acc_norm |↑ |0.5040|± |0.0317|
+ | - leaderboard_gpqa | N/A| | | | | | | |
+ | - leaderboard_gpqa_diamond | 1|none | 0|acc_norm |↑ |0.1919|± |0.0281|
+ | - leaderboard_gpqa_extended | 1|none | 0|acc_norm |↑ |0.2747|± |0.0191|
+ | - leaderboard_gpqa_main | 1|none | 0|acc_norm |↑ |0.2589|± |0.0207|
+ | - leaderboard_ifeval | 3|none | 0|inst_level_loose_acc |↑ |0.2002|± | N/A|
+ | | |none | 0|inst_level_strict_acc |↑ |0.1871|± | N/A|
+ | | |none | 0|prompt_level_loose_acc |↑ |0.1072|± |0.0133|
+ | | |none | 0|prompt_level_strict_acc|↑ |0.0998|± |0.0129|
+ | - leaderboard_math_hard | N/A| | | | | | | |
+ | - leaderboard_math_algebra_hard | 1|none | 4|exact_match |↑ |0.0000|± | 0|
+ | - leaderboard_math_counting_and_prob_hard | 1|none | 4|exact_match |↑ |0.0000|± | 0|
+ | - leaderboard_math_geometry_hard | 1|none | 4|exact_match |↑ |0.0000|± | 0|
+ | - leaderboard_math_intermediate_algebra_hard | 1|none | 4|exact_match |↑ |0.0000|± | 0|
+ | - leaderboard_math_num_theory_hard | 1|none | 4|exact_match |↑ |0.0000|± | 0|
+ | - leaderboard_math_prealgebra_hard | 1|none | 4|exact_match |↑ |0.0000|± | 0|
+ | - leaderboard_math_precalculus_hard | 1|none | 4|exact_match |↑ |0.0000|± | 0|
+ | - leaderboard_mmlu_pro | 0.1|none | 5|acc |↑ |0.1096|± |0.0028|
+ | - leaderboard_musr | N/A| | | | | | | |
+ | - leaderboard_musr_murder_mysteries | 1|none | 0|acc_norm |↑ |0.4800|± |0.0317|
+ | - leaderboard_musr_object_placements | 1|none | 0|acc_norm |↑ |0.2930|± |0.0285|
+ | - leaderboard_musr_team_allocation | 1|none | 0|acc_norm |↑ |0.3360|± |0.0299|
+
+ ```bash
+ litgpt evaluate --tasks 'hellaswag,gsm8k,truthfulqa_mc2,mmlu,winogrande,arc_challenge' --out_dir 'evaluate-1/' --batch_size 4 --dtype 'bfloat16' out/pretrain/final/
+ ```
+
+ |Tasks |Version| Filter |n-shot| Metric | |Value | |Stderr|
+ |---------------------------------------|------:|----------------|-----:|-----------|---|-----:|---|-----:|
+ |arc_challenge | 1|none | 0|acc |↑ |0.2082|± |0.0119|
+ | | |none | 0|acc_norm |↑ |0.2474|± |0.0126|
+ |gsm8k | 3|flexible-extract| 5|exact_match|↑ |0.0106|± |0.0028|
+ | | |strict-match | 5|exact_match|↑ |0.0008|± |0.0008|
+ |hellaswag | 1|none | 0|acc |↑ |0.2766|± |0.0045|
+ | | |none | 0|acc_norm |↑ |0.2926|± |0.0045|
+ |mmlu | 2|none | |acc |↑ |0.2349|± |0.0036|
+ | - humanities | 2|none | |acc |↑ |0.2461|± |0.0063|
+ | - formal_logic | 1|none | 0|acc |↑ |0.2698|± |0.0397|
+ | - high_school_european_history | 1|none | 0|acc |↑ |0.2000|± |0.0312|
+ | - high_school_us_history | 1|none | 0|acc |↑ |0.2549|± |0.0306|
+ | - high_school_world_history | 1|none | 0|acc |↑ |0.2616|± |0.0286|
+ | - international_law | 1|none | 0|acc |↑ |0.2479|± |0.0394|
+ | - jurisprudence | 1|none | 0|acc |↑ |0.2593|± |0.0424|
+ | - logical_fallacies | 1|none | 0|acc |↑ |0.2638|± |0.0346|
+ | - moral_disputes | 1|none | 0|acc |↑ |0.2457|± |0.0232|
+ | - moral_scenarios | 1|none | 0|acc |↑ |0.2458|± |0.0144|
+ | - philosophy | 1|none | 0|acc |↑ |0.1833|± |0.0220|
+ | - prehistory | 1|none | 0|acc |↑ |0.2315|± |0.0235|
+ | - professional_law | 1|none | 0|acc |↑ |0.2503|± |0.0111|
+ | - world_religions | 1|none | 0|acc |↑ |0.3216|± |0.0358|
+ | - other | 2|none | |acc |↑ |0.2391|± |0.0076|
+ | - business_ethics | 1|none | 0|acc |↑ |0.2900|± |0.0456|
+ | - clinical_knowledge | 1|none | 0|acc |↑ |0.2377|± |0.0262|
+ | - college_medicine | 1|none | 0|acc |↑ |0.2197|± |0.0316|
+ | - global_facts | 1|none | 0|acc |↑ |0.2100|± |0.0409|
+ | - human_aging | 1|none | 0|acc |↑ |0.2960|± |0.0306|
+ | - management | 1|none | 0|acc |↑ |0.1748|± |0.0376|
+ | - marketing | 1|none | 0|acc |↑ |0.2949|± |0.0299|
+ | - medical_genetics | 1|none | 0|acc |↑ |0.2700|± |0.0446|
+ | - miscellaneous | 1|none | 0|acc |↑ |0.2222|± |0.0149|
+ | - nutrition | 1|none | 0|acc |↑ |0.2092|± |0.0233|
+ | - professional_accounting | 1|none | 0|acc |↑ |0.2518|± |0.0259|
+ | - professional_medicine | 1|none | 0|acc |↑ |0.1949|± |0.0241|
+ | - virology | 1|none | 0|acc |↑ |0.3012|± |0.0357|
+ | - social sciences | 2|none | |acc |↑ |0.2246|± |0.0075|
+ | - econometrics | 1|none | 0|acc |↑ |0.2807|± |0.0423|
+ | - high_school_geography | 1|none | 0|acc |↑ |0.1818|± |0.0275|
+ | - high_school_government_and_politics| 1|none | 0|acc |↑ |0.2176|± |0.0298|
+ | - high_school_macroeconomics | 1|none | 0|acc |↑ |0.2179|± |0.0209|
+ | - high_school_microeconomics | 1|none | 0|acc |↑ |0.2101|± |0.0265|
+ | - high_school_psychology | 1|none | 0|acc |↑ |0.2000|± |0.0171|
+ | - human_sexuality | 1|none | 0|acc |↑ |0.2519|± |0.0381|
+ | - professional_psychology | 1|none | 0|acc |↑ |0.2516|± |0.0176|
+ | - public_relations | 1|none | 0|acc |↑ |0.2182|± |0.0396|
+ | - security_studies | 1|none | 0|acc |↑ |0.1959|± |0.0254|
+ | - sociology | 1|none | 0|acc |↑ |0.2488|± |0.0306|
+ | - us_foreign_policy | 1|none | 0|acc |↑ |0.2800|± |0.0451|
+ | - stem | 2|none | |acc |↑ |0.2239|± |0.0074|
+ | - abstract_algebra | 1|none | 0|acc |↑ |0.1800|± |0.0386|
+ | - anatomy | 1|none | 0|acc |↑ |0.1778|± |0.0330|
+ | - astronomy | 1|none | 0|acc |↑ |0.1974|± |0.0324|
+ | - college_biology | 1|none | 0|acc |↑ |0.2569|± |0.0365|
+ | - college_chemistry | 1|none | 0|acc |↑ |0.2400|± |0.0429|
+ | - college_computer_science | 1|none | 0|acc |↑ |0.2400|± |0.0429|
+ | - college_mathematics | 1|none | 0|acc |↑ |0.2400|± |0.0429|
+ | - college_physics | 1|none | 0|acc |↑ |0.2255|± |0.0416|
+ | - computer_security | 1|none | 0|acc |↑ |0.2700|± |0.0446|
+ | - conceptual_physics | 1|none | 0|acc |↑ |0.2468|± |0.0282|
+ | - electrical_engineering | 1|none | 0|acc |↑ |0.2552|± |0.0363|
+ | - elementary_mathematics | 1|none | 0|acc |↑ |0.2407|± |0.0220|
+ | - high_school_biology | 1|none | 0|acc |↑ |0.1710|± |0.0214|
+ | - high_school_chemistry | 1|none | 0|acc |↑ |0.1724|± |0.0266|
+ | - high_school_computer_science | 1|none | 0|acc |↑ |0.2600|± |0.0441|
+ | - high_school_mathematics | 1|none | 0|acc |↑ |0.2519|± |0.0265|
+ | - high_school_physics | 1|none | 0|acc |↑ |0.1457|± |0.0288|
+ | - high_school_statistics | 1|none | 0|acc |↑ |0.2083|± |0.0277|
+ | - machine_learning | 1|none | 0|acc |↑ |0.3571|± |0.0455|
+ |truthfulqa_mc2 | 2|none | 0|acc |↑ |0.4506|± |0.0161|
+ |winogrande | 1|none | 0|acc |↑ |0.5288|± |0.0140|
+
+ | Groups |Version|Filter|n-shot|Metric| |Value | |Stderr|
+ |------------------|------:|------|------|------|---|-----:|---|-----:|
+ |mmlu | 2|none | |acc |↑ |0.2349|± |0.0036|
+ | - humanities | 2|none | |acc |↑ |0.2461|± |0.0063|
+ | - other | 2|none | |acc |↑ |0.2391|± |0.0076|
+ | - social sciences| 2|none | |acc |↑ |0.2246|± |0.0075|
+ | - stem | 2|none | |acc |↑ |0.2239|± |0.0074|
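
The README above positions this checkpoint as a base for continued pretraining and finetuning rather than direct use. As a minimal sketch (not part of the commit), the converted checkpoint that `scripts/TRAIN.md` writes to `out/converted_model/` could be loaded with Hugging Face `transformers` roughly as follows; the assumption that a tokenizer is saved alongside the model in that directory is mine, and the prompt is purely illustrative:

```python
# Hedged sketch: load the converted checkpoint for a downstream finetune or a quick sanity check.
# Assumes out/converted_model/ (see scripts/TRAIN.md) contains a Transformers-format model
# plus its tokenizer; this snippet is illustrative and not part of the commit.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_dir = "out/converted_model/"  # local path used in scripts/TRAIN.md

tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForCausalLM.from_pretrained(model_dir, torch_dtype=torch.bfloat16)

# The model was pretrained on 2K-token sequences, so keep prompts well under that length.
inputs = tokenizer("The quick brown fox", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```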
scripts/TRAIN.md CHANGED
@@ -57,7 +57,9 @@ model.save_pretrained('out/converted_model/')
 ## Evaluate
 
 ```bash
- # litgpt evaluate --tasks 'hellaswag,gsm8k,truthfulqa_mc2,mmlu,winogrande,arc_challenge' --batch_size 8 out/pretrain/final/
-
- litgpt evaluate --tasks 'hellaswag,gsm8k,truthfulqa_mc2,mmlu,mmlu_pro,winogrande,arc_challenge,leaderboard,ifeval,mgsm_direct,mathqa,gpqa' --batch_size 8 out/pretrain/final/
+ litgpt evaluate --tasks 'leaderboard' --out_dir 'evaluate-0/' --batch_size 4 --dtype 'bfloat16' out/pretrain/final/
+
+ litgpt evaluate --tasks 'hellaswag,gsm8k,truthfulqa_mc2,mmlu,winogrande,arc_challenge' --out_dir 'evaluate-1/' --batch_size 4 --dtype 'bfloat16' out/pretrain/final/
+
+ litgpt evaluate --tasks 'mmlu_pro,ifeval,mgsm_direct,mathqa,gpqa' --out_dir 'evaluate-2/' --batch_size 4 --dtype 'bfloat16' out/pretrain/final/
 ```
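
Each evaluate command above writes its results under its own `--out_dir` (`evaluate-0/` through `evaluate-2/`). As a hedged sketch (again, not part of the commit), the per-task metrics could be gathered afterwards roughly like this, assuming each directory ends up containing an lm-evaluation-harness style JSON file with a top-level `results` mapping; the exact file names and layout depend on the litgpt / lm-eval versions in use:

```python
# Hedged sketch: collect per-task metrics from the evaluate-*/ output directories.
# Assumes lm-evaluation-harness style JSON results files (top-level "results" dict);
# exact filenames depend on the litgpt / lm-eval version, so we simply glob for *.json.
import json
from pathlib import Path

for results_file in sorted(Path(".").glob("evaluate-*/**/*.json")):
    data = json.loads(results_file.read_text())
    results = data.get("results")
    if not isinstance(results, dict):
        continue  # skip config dumps or other unrelated JSON files
    print(f"== {results_file} ==")
    for task, metrics in sorted(results.items()):
        for metric, value in metrics.items():
            if isinstance(value, (int, float)):
                print(f"  {task:55s} {metric:30s} {value:.4f}")
```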