---
inference: false
license: other
---

<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->

# Lilloukas' Platypus 30B fp16

These files are fp16 format model files for [Lilloukas' Platypus 30B](https://huggingface.co/lilloukas/Platypus-30B) merged with [Kaio Ken's SuperHOT 8K](https://huggingface.co/kaiokendev/superhot-30b-8k-no-rlhf-test).

[Kaio Ken's SuperHOT 30b LoRA](https://huggingface.co/kaiokendev/superhot-30b-8k-no-rlhf-test) is merged onto the base model, and then 8K context can be achieved during inference by using `trust_remote_code=True`.

Note that `config.json` has been set to a sequence length of 8192. This can be modified to 4096 if you want to try a smaller sequence length.

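For reference, the sequence-length setting lives in `config.json` and looks roughly like this (a hypothetical excerpt — the field name follows standard LLaMA configs, and all other fields are omitted):

```json
{
  "max_position_embeddings": 8192
}
```

Change the value to 4096 to try the smaller sequence length.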
## Repositories available

* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Platypus-30B-SuperHOT-8K-GPTQ)
* [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Platypus-30B-SuperHOT-8K-fp16)
* [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/lilloukas/Platypus-30B)

<!-- footer start -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute.

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.

**Patreon special mentions**: Pyrater, WelcomeToTheClub, Kalila, Mano Prime, Trenton Dambrowitz, Spiking Neurons AB, Pierre Kircher, Fen Risland, Kevin Schuppel, Luke, Rainer Wilmers, vamX, Gabriel Puliatti, Alex, Karl Bernard, Ajan Kanaga, Talal Aujan, Space Cruiser, ya boyyy, biorpg, Johann-Peter Hartmann, Asp the Wyvern, Ai Maven, Ghost, Preetika Verma, Nikolai Manek, trip7s trip, John Detwiler, Fred von Graf, Artur Olbinski, subjectnull, John Villwock, Junyu Yang, Rod A, Lone Striker, Chris McCloskey, Iucharbius, Matthew Berman, Illia Dulskyi, Khalefa Al-Ahmad, Imad Khwaja, chris gileta, Willem Michiel, Greatston Gnanesh, Derek Yates, K, Alps Aficionado, Oscar Rangel, David Flickinger, Luke Pendergrass, Deep Realms, Eugene Pentland, Cory Kujawski, terasurfer, Jonathan Leane, senxiiz, Joseph William Delisle, Sean Connelly, webtim, zynix, Nathan LeClaire.

Thank you to all my generous patrons and donaters!

<!-- footer end -->

# Original model card: Kaio Ken's SuperHOT 8K

### SuperHOT Prototype 2 w/ 8K Context

This is a second prototype of SuperHOT, this time 30B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k).
Tests have shown that the model does indeed leverage the extended context at 8K.

You will need to **use either the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192**.

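The 0.25 scaling factor is the position-interpolation trick: position indices for an 8K sequence are compressed into the 2K range the base model was trained on (8192 × 0.25 ≤ 2048). A loose illustration of the idea, not the actual monkeypatch code:

```python
# Position interpolation: compress 8192 positions into the base model's
# original 2048-position window by scaling each position index by 0.25.
def scale_positions(seq_len, scale=0.25):
    """Return interpolated (fractional) position values for seq_len tokens."""
    return [i * scale for i in range(seq_len)]

positions = scale_positions(8192)
# The largest interpolated position stays inside the original 2048 window.
assert max(positions) < 2048
```

In the real monkeypatch these scaled positions feed the rotary embeddings, so the model reuses frequencies it already learned rather than extrapolating to unseen ones.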
#### Looking for Merged & Quantized Models?
- 30B 4-bit CUDA: [tmpupload/superhot-30b-8k-4bit-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-safetensors)
- 30B 4-bit CUDA 128g: [tmpupload/superhot-30b-8k-4bit-128g-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-128g-safetensors)


#### Training Details
I trained the LoRA with the following configuration:
- 1200 samples (~400 samples over 2048 sequence length)
- learning rate of 3e-4
- 3 epochs
- The exported modules are:
  - q_proj
  - k_proj
  - v_proj
  - o_proj
  - no bias
- Rank = 4
- Alpha = 8
- no dropout
- weight decay of 0.1
- AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5
- Trained on 4-bit base model

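For anyone reproducing a similar run, the hyperparameters above map onto common LoRA-config fields roughly as follows (a sketch using plain dicts; the field names follow the `peft` library's `LoraConfig`, which is an assumption about how you would wire this up):

```python
# LoRA hyperparameters from the list above, expressed as config dicts.
lora_config = {
    "r": 4,                # Rank = 4
    "lora_alpha": 8,       # Alpha = 8
    "lora_dropout": 0.0,   # no dropout
    "bias": "none",        # no bias
    "target_modules": ["q_proj", "k_proj", "v_proj", "o_proj"],
}

adamw_config = {
    "lr": 3e-4,            # learning rate of 3e-4
    "weight_decay": 0.1,
    "betas": (0.9, 0.99),  # AdamW beta1 and beta2
    "eps": 1e-5,
}

# With alpha = 2 * r, the LoRA update is scaled by alpha / r = 2.
assert lora_config["lora_alpha"] / lora_config["r"] == 2.0
```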
# Original model card: Lilloukas' Platypus 30B


# 🥳 Platypus-30B has arrived!

Platypus-30B is an instruction fine-tuned model based on the LLaMA-30B transformer architecture.

| Metric                | Value |
|-----------------------|-------|
| MMLU (5-shot)         | 64.2  |
| ARC (25-shot)         | 64.6  |
| HellaSwag (10-shot)   | 84.3  |
| TruthfulQA (0-shot)   | 45.8  |
| Avg.                  | 64.7  |

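The reported average can be checked directly — it is the plain mean of the four benchmark scores:

```python
# The "Avg." row is the arithmetic mean of the four benchmark scores.
scores = {"MMLU": 64.2, "ARC": 64.6, "HellaSwag": 84.3, "TruthfulQA": 45.8}
avg = sum(scores.values()) / len(scores)
assert abs(avg - 64.7) < 0.05  # 64.725, reported as 64.7
```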
We use the state-of-the-art [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above.

## Model Details

* **Trained by:** Cole Hunter & Ariel Lee
* **Model type:** **Platypus-30B** is an auto-regressive language model based on the LLaMA transformer architecture.
* **Language(s):** English
* **License for base weights:** The base LLaMA model's weights are subject to Meta's [non-commercial bespoke license](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md).

| Hyperparameter            | Value |
|---------------------------|-------|
| \\(n_\text{parameters}\\) | 33B   |
| \\(d_\text{model}\\)      | 6656  |
| \\(n_\text{layers}\\)     | 60    |
| \\(n_\text{heads}\\)      | 52    |

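A quick consistency check on these numbers: in a LLaMA-style transformer the per-head dimension is \\(d_\text{model} / n_\text{heads}\\), which here comes out to the standard 128:

```python
# Per-head dimension implied by the hyperparameter table above.
d_model, n_heads = 6656, 52
assert d_model % n_heads == 0  # heads divide the model dimension evenly
head_dim = d_model // n_heads
assert head_dim == 128         # the usual LLaMA head size
```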
## Training Dataset

Dataset of highly filtered and curated question and answer pairs. Release TBD.

## Training Procedure

`lilloukas/Platypus-30B` was instruction fine-tuned using LoRA on 4 A100 80GB GPUs. For training details and inference instructions please see the [Platypus-30B](https://github.com/arielnlee/Platypus-30B.git) GitHub repo.

## Reproducing Evaluation Results

Install LM Evaluation Harness:
```
git clone https://github.com/EleutherAI/lm-evaluation-harness
cd lm-evaluation-harness
pip install -e .
```
Each task was evaluated on a single A100 80GB GPU.

ARC:
```
python main.py --model hf-causal-experimental --model_args pretrained=lilloukas/Platypus-30B --tasks arc_challenge --batch_size 1 --no_cache --write_out --output_path results/Platypus-30B/arc_challenge_25shot.json --device cuda --num_fewshot 25
```

HellaSwag:
```
python main.py --model hf-causal-experimental --model_args pretrained=lilloukas/Platypus-30B --tasks hellaswag --batch_size 1 --no_cache --write_out --output_path results/Platypus-30B/hellaswag_10shot.json --device cuda --num_fewshot 10
```

MMLU:
```
python main.py --model hf-causal-experimental --model_args pretrained=lilloukas/Platypus-30B --tasks hendrycksTest-* --batch_size 1 --no_cache --write_out --output_path results/Platypus-30B/mmlu_5shot.json --device cuda --num_fewshot 5
```

TruthfulQA:
```
python main.py --model hf-causal-experimental --model_args pretrained=lilloukas/Platypus-30B --tasks truthfulqa_mc --batch_size 1 --no_cache --write_out --output_path results/Platypus-30B/truthfulqa_0shot.json --device cuda
```
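The four runs above differ only in task name and few-shot count, so they can be generated from a single helper (a convenience sketch, not part of the original repo; the MMLU output filename is simplified relative to the exact command above):

```python
# Build the lm-evaluation-harness command line for one benchmark run.
def eval_command(task, shots, model="lilloukas/Platypus-30B"):
    cmd = (
        f"python main.py --model hf-causal-experimental "
        f"--model_args pretrained={model} --tasks {task} --batch_size 1 "
        f"--no_cache --write_out "
        f"--output_path results/Platypus-30B/{task}_{shots}shot.json "
        f"--device cuda"
    )
    if shots > 0:  # the 0-shot TruthfulQA run omits --num_fewshot entirely
        cmd += f" --num_fewshot {shots}"
    return cmd

for task, shots in [("arc_challenge", 25), ("hellaswag", 10),
                    ("hendrycksTest-*", 5), ("truthfulqa_mc", 0)]:
    print(eval_command(task, shots))
```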
## Limitations and bias

The base LLaMA model is trained on various data, some of which may contain offensive, harmful, and biased content that can lead to toxic behavior. See Section 5.1 of the LLaMA paper. We have not performed any studies to determine how fine-tuning on the aforementioned datasets affects the model's behavior and toxicity. Do not treat chat responses from this model as a substitute for human judgment or as a source of truth. Please use responsibly.

## Citations

```bibtex
@article{touvron2023llama,
  title={LLaMA: Open and Efficient Foundation Language Models},
  author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
  journal={arXiv preprint arXiv:2302.13971},
  year={2023}
}

@article{hu2021lora,
  title={LoRA: Low-Rank Adaptation of Large Language Models},
  author={Hu, Edward J. and Shen, Yelong and Wallis, Phillip and Allen-Zhu, Zeyuan and Li, Yuanzhi and Wang, Shean and Chen, Weizhu},
  journal={CoRR},
  year={2021}
}
```