# RoPE Scaled QLoRA Finetune of airoboros-33b-gpt4-1.4.1 (GPTQ)

## Overview

This is [Jon Durbin's Airoboros 33B GPT4 1.4](https://huggingface.co/jondurbin/airoboros-33b-gpt4-1.4) (merged model with GPTQ quantization) with several key modifications:
- Context length extended to 8192 by RoPE Scaled Embeddings, but NOT via the SuperHOT LoRA. I started with base Llama-33b (see the sketch below).
- Training sequences beyond 2048 have the target truncated to equal 2048.
- Used the airoboros-gpt4-1.4.1 dataset instead of airoboros-gpt4-1.4.
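The context extension works by compressing the rotary position indices so that 8192 positions span the same angular range the base Llama model saw during its 2048-token pretraining (a scale factor of 4). The snippet below is only a minimal sketch of that idea; it is not the training code, and the function name is made up for illustration:

```python
# Illustrative sketch of linearly scaled rotary position embeddings (RoPE scaling).
import torch

def scaled_rope_tables(seq_len: int, dim: int, base: float = 10000.0, scale: float = 4.0):
    """Build cos/sin tables with positions divided by `scale`, so that 8192
    positions (2048 * 4) cover the same angular range 2048 positions did originally."""
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    positions = torch.arange(seq_len).float() / scale   # linear position interpolation
    freqs = torch.outer(positions, inv_freq)             # (seq_len, dim // 2)
    emb = torch.cat((freqs, freqs), dim=-1)              # (seq_len, dim)
    return emb.cos(), emb.sin()

cos, sin = scaled_rope_tables(seq_len=8192, dim=128)     # dim = per-head size of Llama-33b
print(cos.shape)  # torch.Size([8192, 128])
```

In practice this is typically applied by patching the model's rotary embedding module before finetuning, which is why the extension does not rely on the SuperHOT LoRA.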
## Quantization:
The merged model was quantized with AutoGPTQ (bits = 4, group_size = 128, desc_act = True).
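For reference, the sketch below shows how those settings map onto AutoGPTQ's quantization API. The model paths and the single calibration example are placeholders; only the parameters (bits = 4, group_size = 128, desc_act = True) come from this card.

```python
# Sketch of the quantization settings above; paths and calibration text are placeholders.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

merged_dir = "path/to/merged-airoboros-33b-gpt4-1.4.1"   # placeholder path
quantized_dir = "path/to/airoboros-33b-gptq"             # placeholder path

quantize_config = BaseQuantizeConfig(bits=4, group_size=128, desc_act=True)

tokenizer = AutoTokenizer.from_pretrained(merged_dir, use_fast=True)
model = AutoGPTQForCausalLM.from_pretrained(merged_dir, quantize_config)

# A real run would use a few hundred calibration samples; one is shown for brevity.
examples = [tokenizer("Calibration text goes here; real runs sample from the training data.")]
model.quantize(examples)

model.save_quantized(quantized_dir)
tokenizer.save_pretrained(quantized_dir)
```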
## Prompting: