ajibawa-2023 committed a1f4262 (parent: 3968e7d): Update README.md

README.md CHANGED
@@ -14,14 +14,12 @@ tags:

This model was trained on my [WikiHow](https://huggingface.co/datasets/ajibawa-2023/WikiHow) dataset.
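
For a quick look at the training data, the dataset can be loaded with the `datasets` library. A minimal sketch; the split name and column layout are assumptions, so confirm them on the dataset page:

```python
# Peek at the WikiHow dataset used for fine-tuning.
from datasets import load_dataset

ds = load_dataset("ajibawa-2023/WikiHow", split="train")  # split name assumed
print(ds)     # row count and column names
print(ds[0])  # one raw training example
```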

This model is **very good** at generating tutorials in the style of **WikiHow**. By leveraging this repository of practical knowledge, the model has been trained to comprehend and generate text that is highly informative and instructional in nature.
The depth and accuracy of the generated tutorials are exceptional.
The WikiHow dataset encompasses a wide array of topics, ranging from everyday tasks to specialized skills, making it an invaluable resource for refining the capabilities of language models.
Through this fine-tuning process, the model has been equipped with the ability to offer insightful guidance and assistance across diverse domains.

This is a fully finetuned model. Quantized models will be available very soon.

**GPTQ, GGUF, AWQ & Exllama**

@@ -35,11 +33,13 @@ Exllama v2: TBA

**Training:**

The entire dataset was trained on 4 x A100 80GB GPUs; training for 3 epochs took more than 15 hours. The Axolotl codebase was used for training.
The base model is Mistral-7B-Instruct-v0.2, fully fine-tuned on the entire dataset.
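
Axolotl drives training from a YAML config rather than Python, so the exact recipe lives in that config. Purely as an illustrative sketch of a comparable full fine-tune in plain `transformers` (not the author's actual setup; only the base model and the 3 epochs come from this README, everything else is assumed):

```python
# Illustrative full fine-tune of Mistral-7B-Instruct-v0.2 on the WikiHow data.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "mistralai/Mistral-7B-Instruct-v0.2"
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # Mistral ships without a pad token
model = AutoModelForCausalLM.from_pretrained(base)

ds = load_dataset("ajibawa-2023/WikiHow", split="train")

def tokenize(batch):
    # The "text" column name is an assumption; check the dataset schema.
    return tokenizer(batch["text"], truncation=True, max_length=2048)

ds = ds.map(tokenize, batched=True, remove_columns=ds.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="wikihow-mistral",
        num_train_epochs=3,             # stated in this README
        per_device_train_batch_size=1,  # assumed, not stated
        bf16=True,
    ),
    train_dataset=ds,
    # For causal LM, the collator copies input_ids into labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```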

**Example Prompt:**

This model uses the **ChatML** prompt format.

```
<|im_start|>system
{system}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
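
Below is a minimal inference sketch that assembles this ChatML prompt with `transformers`. The model id is a placeholder for this repository's actual id, and the system/user messages are just examples:

```python
# Minimal inference sketch; substitute the actual repository id of this model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ajibawa-2023/<this-model>"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Assemble a ChatML prompt exactly as shown above.
prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant who writes WikiHow-style tutorials.<|im_end|>\n"
    "<|im_start|>user\n"
    "How to start a vegetable garden?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=512)
# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:],
                       skip_special_tokens=True))
```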