Update README.md

README.md CHANGED

@@ -1,8 +1,14 @@
-
-
-
+Join the Coffee & AI Discord for AI Stuff and things!
+[![Discord](https://img.shields.io/discord/232596713892872193?logo=discord)](https://discord.gg/2JhHVh7CGu)
+
+
+https://huggingface.co/TheBloke/Llama-2-13B-GGML
+https://huggingface.co/TheBloke/Llama-2-13B-GPTQ
+
+
 ## Training procedure
 
+PEFT:
 
 The following `bitsandbytes` quantization config was used during training:
 - load_in_8bit: False
@@ -14,7 +20,5 @@ The following `bitsandbytes` quantization config was used during training:
 - bnb_4bit_quant_type: fp4
 - bnb_4bit_use_double_quant: False
 - bnb_4bit_compute_dtype: float32
-### Framework versions
-
-
 
+This ran for 3500 -- 3 epochs -- on an in-testing storywriting dataset. Training took 14 hours on a 3090 Ti.
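
As context for the card above (not part of the original README): a minimal sketch of how a `bitsandbytes` quantization config with the listed values could be recreated and the PEFT adapter attached at load time. The base-model and adapter IDs are placeholders, and `load_in_4bit=True` is an assumption inferred from the fp4 settings, since the middle of the config list is not shown in the diff.

```python
# Hedged sketch, not from this repo: rebuild the quantization setup described in
# the card and load a PEFT adapter on top of the quantized base model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # assumption; card only shows "load_in_8bit: False" plus fp4 settings
    bnb_4bit_quant_type="fp4",             # bnb_4bit_quant_type: fp4
    bnb_4bit_use_double_quant=False,       # bnb_4bit_use_double_quant: False
    bnb_4bit_compute_dtype=torch.float32,  # bnb_4bit_compute_dtype: float32
)

base_id = "meta-llama/Llama-2-13b-hf"        # placeholder Llama-2-13B base model
adapter_id = "path/or/repo-of-this-adapter"  # placeholder for this card's PEFT adapter

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the trained adapter weights
```

The GGML and GPTQ links in the card point to pre-quantized base weights for llama.cpp and GPTQ loaders; the sketch above only covers the `transformers` + `bitsandbytes` path described in the training procedure.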