mtasic85 committed
Commit
1965856
1 Parent(s): 5bd56de

readme, logo

Files changed (2)
  1. README.md +47 -3
  2. misc/logo.png +3 -0
README.md CHANGED
@@ -1,3 +1,47 @@
- ---
- license: apache-2.0
- ---
+ ---
+ license: apache-2.0
+ pipeline_tag: text-generation
+ library_name: transformers
+ language: ['en', 'am', 'ar', 'as', 'az', 'be', 'bg', 'bn', 'br', 'bs', 'ca', 'cs', 'cy', 'da', 'de', 'el', 'eo', 'es', 'et', 'eu', 'fa', 'ff', 'fi', 'fr', 'fy', 'ga', 'gd', 'gl', 'gn', 'gu', 'ha', 'he', 'hi', 'hr', 'ht', 'hu', 'hy', 'id', 'ig', 'is', 'it', 'ja', 'jv', 'ka', 'kk', 'km', 'kn', 'ko', 'ku', 'ky', 'la', 'lg', 'li', 'ln', 'lo', 'lt', 'lv', 'mg', 'mk', 'ml', 'mn', 'mr', 'ms', 'my', 'ne', 'nl', 'no', 'ns', 'om', 'or', 'pa', 'pl', 'ps', 'pt', 'qu', 'rm', 'ro', 'ru', 'sa', 'si', 'sc', 'sd', 'sk', 'sl', 'so', 'sq', 'sr', 'ss', 'su', 'sv', 'sw', 'ta', 'te', 'th', 'tl', 'tn', 'tr', 'ug', 'uk', 'ur', 'uz', 'vi', 'wo', 'xh', 'yi', 'yo', 'zu']
+ datasets: [
+ 'Replete-AI/Everything_Instruct_Multilingual',
+ 'HuggingFaceH4/ultrachat_200k',
+ 'HuggingFaceH4/no_robots',
+ 'datatab/ultrachat_200k_serbian',
+ 'datatab/ultrafeedback_binarized_serbian',
+ 'datatab/alpaca-cleaned-serbian-full',
+ 'datatab/orca_math_world_problem_200k_serbian',
+ 'datatab/open-orca-slim-serbian',
+ ]
+ tags:
+ - litgpt
+ - litdata
+ ---
+
+ # tangled-llama-33m-32k-instruct-v0.1
+
+ ![logo](./misc/logo.png)
+
+ A pretrained language model based on the Llama architecture, with about **33M** parameters. It has been trained on **4.2B** (`4,252,334,823`) tokens from more than **6.2M** (`6,271,145`) dataset rows.
+
+ This model **isn't** designed for immediate use, but rather for continued pretraining and finetuning on a downstream task. It supports a context length of up to **32K** (`32,768`) tokens, and it was pretrained on sequences of that full length.
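+
+ As a starting point for finetuning, the checkpoint can be loaded with 🤗 Transformers (per `library_name` above). A minimal sketch; the repo id below is assumed from the model name and may differ:
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ # assumed Hub id, inferred from the model name; replace with the actual one
+ model_id = 'tangled-llama-33m-32k-instruct-v0.1'
+
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id)
+
+ # short smoke test; the model supports contexts up to 32,768 tokens
+ inputs = tokenizer('The capital of France is', return_tensors='pt')
+ outputs = model.generate(**inputs, max_new_tokens=32)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```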
+
+ The objective is to streamline the cognitive or reasoning core, eliminating any redundant knowledge from the model.
+
+ [loss, val_loss](https://api.wandb.ai/links/mtasic85/rx2cm1ip)
+
+ [val_ppl](https://api.wandb.ai/links/mtasic85/okegm8vs)
+
+ [epoch](https://api.wandb.ai/links/mtasic85/t5lojxa6)
+
+ [learning_rate](https://api.wandb.ai/links/mtasic85/033xhutk)
+
+ ## lm-evaluation-harness
+
+ ```bash
+ litgpt evaluate --tasks 'leaderboard' --out_dir 'evaluate-0/' --batch_size 4 --dtype 'bfloat16' out/contrain/final/
+ ```
+
+ ```bash
+ litgpt evaluate --tasks 'hellaswag,gsm8k,truthfulqa_mc2,mmlu,winogrande,arc_challenge' --out_dir 'evaluate-1/' --batch_size 4 --dtype 'bfloat16' out/contrain/final/
+ ```
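+
+ `litgpt evaluate` delegates to EleutherAI's lm-evaluation-harness, so as a rough sketch the second command above could also be run through the `lm_eval` Python API directly, assuming the checkpoint has first been converted to a Transformers-compatible format (the path below is a placeholder):
+
+ ```python
+ import lm_eval
+
+ # placeholder path: must point to a Transformers-format checkpoint,
+ # a conversion that `litgpt evaluate` otherwise performs internally
+ results = lm_eval.simple_evaluate(
+     model='hf',
+     model_args='pretrained=path/to/converted/checkpoint,dtype=bfloat16',
+     tasks=['hellaswag', 'gsm8k', 'truthfulqa_mc2', 'mmlu', 'winogrande', 'arc_challenge'],
+     batch_size=4,
+ )
+ print(results['results'])
+ ```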
misc/logo.png ADDED

Git LFS Details

  • SHA256: 31349390ed3a7d997e08788752c5f6120d455b96fe337f242a2b4137da3c3141
  • Pointer size: 132 Bytes
  • Size of remote file: 1.84 MB