Delta-Vector committed on
Commit 5393fff
1 Parent(s): a3f7ac5

Update README.md

Files changed (1): README.md (+14 -9)
README.md CHANGED
@@ -1,18 +1,23 @@
-
-
-
 ---
 license: apache-2.0
 language:
-- en
 pipeline_tag: text-generation
 base_model: nvidia/Llama-3.1-Minitron-4B-Width-Base
 tags:
-- chat
 ---

-![image/png]()
-A model made to continue off my previous work on anthracite-org/magnum-4b, a small model made for creative writing / general assistant tasks, finetuned on top of [Intervitens](link); this model is made to be more coherent and generally better than the 4B at both writing and assistant tasks.

 ## Prompting
 Model has been instruct-tuned with ChatML formatting. A typical input would look like this:
@@ -32,7 +37,7 @@ Can I ask a question?<|im_end|>

 ## Support

-To run inference on this model, you'll need to use Aphrodite, vLLM, or EXL2/tabbyAPI, as llama.cpp hasn't yet merged the required pull request to fix the Llama 3.1 rope_freqs issue with custom head dimensions.

 However, you can work around this by quantizing the model yourself to create a functional GGUF file. Note that until [this PR](https://github.com/ggerganov/llama.cpp/pull/9141) is merged, the context will be limited to 8k tokens.
 
@@ -158,4 +163,4 @@ The training was done for 2 epochs. We used 2 x [RTX 6000s](https://store.nvidi
 [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

 ## Safety
-...
 
 
 
 
 ---
 license: apache-2.0
 language:
+- en
 pipeline_tag: text-generation
 base_model: nvidia/Llama-3.1-Minitron-4B-Width-Base
 tags:
+- chat
+license: agpl-3.0
+datasets:
+- anthracite-org/kalo-opus-instruct-22k-no-refusal
+- PJMixers/lodrick-the-lafted_OpusStories-ShareGPT
+- NewEden/Gryphe-3.5-16k-Subset
+- Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
+tags:
+- chat
 ---

+![image/png](https://huggingface.co/Edens-Gate/Testing123/resolve/main/oie_gM9EsNXjMDsT.jpg?download=true)
+A model made to continue off my previous work on [Magnum 4B](https://huggingface.co/anthracite-org/magnum-v2-4b), a small model made for creative writing / general assistant tasks, finetuned on top of [IntervitensInc/Llama-3.1-Minitron-4B-Width-Base-chatml](https://huggingface.co/IntervitensInc/Llama-3.1-Minitron-4B-Width-Base-chatml); this model is made to be more coherent and generally better than the 4B at both writing and assistant tasks.

 ## Prompting
 Model has been instruct-tuned with ChatML formatting. A typical input would look like this:
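The ChatML layout the card describes can be sketched with a small helper. The `build_chatml` function below is illustrative, not part of the model card or any library:

```python
def build_chatml(messages, add_generation_prompt=True):
    """Render a list of {"role", "content"} dicts in ChatML format.

    Each turn is wrapped in <|im_start|>{role} ... <|im_end|> markers;
    a trailing assistant header is appended so the model continues from it.
    """
    out = ""
    for m in messages:
        out += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    if add_generation_prompt:
        out += "<|im_start|>assistant\n"
    return out

prompt = build_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Can I ask a question?"},
])
print(prompt)
```

With an inference server, this string (or the tokenizer's own chat template, if one ships with the model) would be sent as the raw prompt.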
 

 ## Support

+To run inference on this model, you'll need to use Aphrodite, vLLM, or EXL2/tabbyAPI, as llama.cpp hasn't yet merged the required pull request to fix the Llama 3.1 rope_freqs issue with custom head dimensions.

 However, you can work around this by quantizing the model yourself to create a functional GGUF file. Note that until [this PR](https://github.com/ggerganov/llama.cpp/pull/9141) is merged, the context will be limited to 8k tokens.
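The self-quantization workaround above amounts to two llama.cpp commands: convert the HF checkpoint to a GGUF file, then quantize it. The sketch below only assembles those commands; it assumes a local llama.cpp checkout, and the script/binary names (`convert_hf_to_gguf.py`, `llama-quantize`) and `Q4_K_M` choice are illustrative and vary between llama.cpp versions:

```python
import subprocess

def gguf_quantize_cmds(model_dir: str, out_prefix: str, qtype: str = "Q4_K_M"):
    """Return the convert and quantize command lines (run from a llama.cpp checkout)."""
    convert = ["python", "convert_hf_to_gguf.py", model_dir,
               "--outfile", f"{out_prefix}-f16.gguf", "--outtype", "f16"]
    quantize = ["./llama-quantize", f"{out_prefix}-f16.gguf",
                f"{out_prefix}-{qtype}.gguf", qtype]
    return convert, quantize

# To actually run them (inside the llama.cpp directory, with its
# Python requirements installed and llama-quantize built):
# for cmd in gguf_quantize_cmds("/path/to/model", "minitron"):
#     subprocess.run(cmd, check=True)
```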
 
 
 [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

 ## Safety
+...