Update README.md
README.md CHANGED

language:
  - en
pipeline_tag: text-generation
---

# ctrltokyo/llama-2-7b-hf-dolly-flash-attention

This model is a fine-tuned version of [NousResearch/Llama-2-7b-hf](https://huggingface.co/NousResearch/Llama-2-7b-hf) on the databricks/databricks-dolly-15k dataset with all training performed using Flash Attention 2.

No further testing or optimisation has been performed.
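Since all training used Flash Attention 2, you may want it enabled at inference time as well. The snippet below is only a rough, unvalidated sketch: the `attn_implementation="flash_attention_2"` flag, the fp16 dtype, and the version requirements are assumptions rather than anything this card guarantees (it needs a recent `transformers`, plus `flash-attn`, `accelerate`, and a supported GPU).

```python
# Unvalidated sketch: load the checkpoint with Flash Attention 2 enabled at inference time.
# Assumes transformers >= 4.36, flash-attn and accelerate installed, and an Ampere-or-newer GPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ctrltokyo/llama-2-7b-hf-dolly-flash-attention"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,                # half precision: roughly 14 GB of weights for 7B parameters
    device_map="auto",                        # place layers on the available GPU(s)
    attn_implementation="flash_attention_2",  # raises an error if flash-attn is not installed
)
```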
## Model description

Just like [ctrltokyo/llm_prompt_mask_fill_model](https://huggingface.co/ctrltokyo/llm_prompt_mask_fill_model), this model could be used for live autocompletion of prompts, but it is designed more as a generalized chatbot (hence the use of the Dolly 15k dataset). Don't try this on code, because it won't work.
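As a rough illustration of that chatbot-style usage, the model can be driven through the standard `transformers` text-generation pipeline. The instruction/response prompt layout below is an assumption modelled on databricks-dolly-15k fine-tunes, not a format documented by this card.

```python
# Sketch of chatbot-style generation; the prompt layout is an assumption, not documented here.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="ctrltokyo/llama-2-7b-hf-dolly-flash-attention",
    device_map="auto",
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what a language model is in two sentences.\n\n"
    "### Response:\n"
)
result = generator(prompt, max_new_tokens=128, do_sample=True, temperature=0.7)
print(result[0]["generated_text"])
```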
I plan to release a further fine-tuned version using the [code_instructions_120k](https://huggingface.co/datasets/sahil2801/code_instructions_120k) dataset.
## Intended uses & limitations

Use as intended.

## Training and evaluation data

No evaluation was performed. Training was done on an NVIDIA A100; the raw model appears to use around 20 GB of VRAM when performing inference.
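If roughly 20 GB is more VRAM than you have available, one untested option is to load the weights in 8-bit via the standard `transformers` + `bitsandbytes` integration. This is only a sketch and has not been validated against this checkpoint.

```python
# Untested sketch: 8-bit loading to reduce inference VRAM (requires bitsandbytes and accelerate).
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "ctrltokyo/llama-2-7b-hf-dolly-flash-attention"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # roughly halves weight memory vs fp16
    device_map="auto",
)
```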
## Training procedure

The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False