[Airoboros 13b GPT4 1.4](https://huggingface.co./jondurbin/airoboros-13b-gpt4-1.4) merged with kaiokendev's [SuperHOT 8k](https://huggingface.co./kaiokendev/superhot-13b-8k-no-rlhf-test) LoRA.
The script used to merge them is available [here](https://files.catbox.moe/mg5v4g.py); adjust the model paths and output settings as needed.
NOTE: This model requires a monkey patch to work. FlashVenom has kindly quantised this model to 4-bit and has also added the monkey patch file to their repo, which you can access [here](https://huggingface.co./flashvenom/Airoboros-13B-SuperHOT-8K-4bit-GPTQ).
FROM THE ORIGINAL LORA MODEL CARD:
This is a second prototype of SuperHOT, this time with 4K context and no RLHF. In my testing, it can go all the way to 6K without breaking down, and I made the change with the intention of reaching 8K, so I'll assume it will go to 8K, although I only trained on 4K sequences.
In order to use the 8K context, you will need to apply the monkey patch I have added in this repo -- without it, it will not work. The patch is very simple, and you can make the changes yourself:

- Increase `max_position_embeddings` to 8192 to stretch the sinusoidal
- Stretch the frequency steps by a scale of 0.25
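The two steps above can be sketched as follows. This is a minimal illustration of the position-scaling idea, not the actual monkey patch from the repo: the function name and the use of NumPy are my own, and the real patch modifies the model's rotary embedding module in place.

```python
import numpy as np

def rope_angles(dim, max_pos, base=10000.0, scale=1.0):
    """Build a RoPE angle table of shape (max_pos, dim // 2).
    A scale < 1 compresses positions back into the trained range,
    which is the SuperHOT-style interpolation described above."""
    inv_freq = 1.0 / (base ** (np.arange(0, dim, 2) / dim))
    t = np.arange(max_pos) * scale  # stretch the frequency steps
    return np.outer(t, inv_freq)

stock = rope_angles(128, 2048)                 # stock LLaMA rotary table
patched = rope_angles(128, 8192, scale=0.25)   # 8192 positions, scaled by 0.25

# With scale 0.25, position 4 in the patched table lands exactly where
# position 1 did in the stock table, so 8192 tokens span the trained range.
assert np.allclose(patched[4], stock[1])
```

The effect is that all 8192 positions are squeezed into the angular range the model saw during training, rather than extrapolating past it.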