Update README.md
README.md CHANGED
@@ -56,7 +56,6 @@ This GGUF is based on llama3-3-8B-Instruct thus ollama doesn't need anything els
After that you should be able to use this model to chat!

-
# NOTE: DISCLAIMER

Please note this is not for the purpose of production, but the result of Fine Tuning through self learning
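As a point of reference for the "use this model to chat" step above, a minimal, hypothetical ollama invocation; the model name below is a placeholder, not a name taken from this repo:

```
# Placeholder model name; substitute whatever name the GGUF was registered under with `ollama create`.
ollama run my-selflearn-model
# or pass a one-shot prompt instead of opening an interactive session:
ollama run my-selflearn-model "Summarize what you were fine-tuned to do."
```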
@@ -78,14 +77,14 @@ As the data was getting created with custom GPT2 special tokens, I had to conver
However I got creative again.. the training data has the following Template:

```
-<|begin_of_text|>
+<|begin_of_text|>
+<|start_header_id|>user<|end_header_id|>
{{.Prompt}}<|eot_id|><|start_header_id|>analysis<|end_header_id|>
{{.Analysis}}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{{.Response}}<|eot_id|><|start_header_id|>classification<|end_header_id|>
{{.Classification}}<|eot_id|><|start_header_id|>sentiment<|end_header_id|>
{{.Sentiment}}<|eot_id|> <|start_header_id|>user<|end_header_id|>
-
-You're most welcome, what would like to know next?<|eot_id|>
+<|end_of_text|>

```
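For anyone who wants to wire the new template into ollama themselves, a sketch of one possible Modelfile follows. The GGUF filename, model name, and stop parameter are assumptions for illustration; the TEMPLATE body only uses variables ollama is known to substitute ({{ .Prompt }} and {{ .Response }}), so the analysis, classification and sentiment headers shown above would presumably come back in the model's generated text rather than be filled in by the template.

```
# Sketch only: the GGUF path and model name are placeholders, not taken from this repo.
cat > Modelfile <<'EOF'
FROM ./llama3-8b-selflearn.Q4_K_M.gguf

TEMPLATE """<|begin_of_text|>
<|start_header_id|>user<|end_header_id|>
{{ .Prompt }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{{ .Response }}<|eot_id|>"""

PARAMETER stop "<|eot_id|>"
EOF

ollama create my-selflearn-model -f ./Modelfile
```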