Update README.md
README.md CHANGED
@@ -13,6 +13,8 @@ This repository is based on the Meta-Llama-3.1-8B-Instruct model and is governed
 
 Llama-3.1-8B-ArliAI-RPMax-v1.1 is a variant of the Meta-Llama-3.1-8B model, trained on a diverse set of curated RP datasets with a focus on variety and deduplication. This model is designed to be highly creative and non-repetitive, with a unique approach to training that minimizes repetition.
 
+v1.1 is just a small fix that avoids training and saving the embeddings layer, since v1.0 had the lm_head unnecessarily trained by accident.
+
 ### Training Details
 
 * **Sequence Length**: 8192
@@ -22,7 +24,7 @@ Llama-3.1-8B-ArliAI-RPMax-v1.1 is a variant of the Meta-Llama-3.1-8B model, trai
 
 ## Quantization
 
-The model is available in
+The model is available in quantized formats:
 
 * **FP16**: https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-Formax-v1.1
 * **GGUF**: https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-Formax-v1.1-GGUF
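
For context on the FP16 link above, a minimal usage sketch with Hugging Face transformers is shown below. It is not part of the commit; the dtype, device placement, prompt, and generation settings are illustrative assumptions, and the repo id is simply copied from the FP16 link.

```python
# Minimal sketch (assumptions noted): load the FP16 weights linked above and
# run one chat-formatted generation. Requires transformers, torch, and
# accelerate (for device_map="auto").
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "ArliAI/Llama-3.1-8B-ArliAI-Formax-v1.1"  # taken from the FP16 link above

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # FP16 weights
    device_map="auto",          # place layers on available GPU(s)
)

# Example prompt; the chat template shipped with the tokenizer is used as-is.
messages = [{"role": "user", "content": "Write a short in-character greeting."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```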