Update README.md
---
license: apache-2.0
---

# Llama3 8B

Llama 3 is Meta's latest and most advanced LLM, trained on over 15T tokens, which improves its comprehension and handling of complex language nuances. It features an extended context window of 8k tokens, allowing the model to draw on more information from lengthy passages for more informed decision-making.

**Model Intention:** The latest Llama 3, enabling more accurate and informative responses to complex queries in both English and multilingual contexts.

**Model URL:** [https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/Meta-Llama-3-8B-Instruct-Q4_K_M.gguf?download=true](https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/Meta-Llama-3-8B-Instruct-Q4_K_M.gguf?download=true)

**Model Info URL:** [https://huggingface.co/meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B)

**Context Length:** 8192 tokens

**Prompt Format:**

```
<|start_header_id|>user<|end_header_id|>

{{prompt}}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
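
Any entry in this catalog can be run with a llama.cpp-compatible loader. As a hedged sketch (this README doesn't prescribe a runtime, so the `llama-cpp-python` and `huggingface_hub` packages here are assumptions), this is how the Llama 3 file and its prompt format fit together, with `{{prompt}}` replaced by the user's text:

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python huggingface_hub).
# Any llama.cpp-compatible runtime works with these GGUF files; this choice is an assumption.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the quantized file listed under "Model URL" above.
model_path = hf_hub_download(
    repo_id="flyingfishinwater/good_and_small_models",
    filename="Meta-Llama-3-8B-Instruct-Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=8192)  # matches the 8192-token context length

# Fill the prompt format, substituting the user's text for {{prompt}}.
prompt = (
    "<|start_header_id|>user<|end_header_id|>\n\n"
    "Explain tokenization in one sentence."
    "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
)
out = llm(prompt, max_tokens=128, stop=["<|eot_id|>"])
print(out["choices"][0]["text"])
```

The same pattern applies to every model below: download the GGUF from its Model URL, then wrap the user input in that entry's prompt format.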

**Template Name:** llama

---

# LiteLlama

It's a very small LLaMA 2 model with only 460M parameters, trained on 1T tokens. It's best for testing.

**Model Intention:** This is a very small 460M-parameter model, for testing purposes only.

**Model URL:** [https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/LiteLlama-460M-1T-Q8_0.gguf?download=true](https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/LiteLlama-460M-1T-Q8_0.gguf?download=true)

**Model Info URL:** [https://huggingface.co/ahxt/LiteLlama-460M-1T](https://huggingface.co/ahxt/LiteLlama-460M-1T)

**Context Length:** 1024 tokens

**Prompt Format:**

```
<human>: {{prompt}}
<bot>:
```

**Template Name:** TinyLlama

---

# TinyLlama-1.1B-chat

The TinyLlama project aims to pretrain a 1.1B Llama model on 3 trillion tokens. With some proper optimization, this can be achieved within a span of just 90 days using 16 A100-40G GPUs. Training started on 2023-09-01.

**Model Intention:** It's good for question & answer.

**Model URL:** [https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/tinyllama-1.1B-chat-v1.0-Q8_0.gguf?download=true](https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/tinyllama-1.1B-chat-v1.0-Q8_0.gguf?download=true)

**Model Info URL:** [https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0)

**Context Length:** 4096 tokens

**Prompt Format:**

```
<|user|>{{prompt}}</s><|assistant|>
```

**Template Name:** TinyLlama

---

# Mistral 7B v0.2

The Mistral-7B-v0.2 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters. Mistral-7B-v0.2 outperforms Llama 2 13B on all benchmarks we tested.

**Model Intention:** It's a 7B model for Q&A purposes, but it requires a high-end device to run.

**Model URL:** [https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/mistral-7b-instruct-v0.2.Q8_0.gguf?download=true](https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/mistral-7b-instruct-v0.2.Q8_0.gguf?download=true)

**Model Info URL:** [https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)

**Context Length:** 4096 tokens

**Prompt Format:**

```
<s>[INST]{{prompt}}[/INST]</s>
```

**Template Name:** Mistral

---

# OpenChat 3.5 (0106)

OpenChat is an innovative library of open-source language models, fine-tuned with C-RLFT, a strategy inspired by offline reinforcement learning. Our models learn from mixed-quality data without preference labels, delivering performance on par with ChatGPT even at 7B scale. Despite our simple approach, we are committed to developing a high-performance, commercially viable, open-source large language model, and we continue to make significant strides toward this vision.

**Model Intention:** It's a 7B model that performs really well on Q&A, but it requires a high-end device to run.

**Model URL:** [https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/openchat-3.5-0106.Q3_K_M.gguf?download=true](https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/openchat-3.5-0106.Q3_K_M.gguf?download=true)

**Model Info URL:** [https://huggingface.co/openchat/openchat_3.5](https://huggingface.co/openchat/openchat_3.5)

**Context Length:** 8192 tokens

**Prompt Format:**

```
GPT4 Correct User: {{prompt}}<|end_of_turn|>GPT4 Correct Assistant:
```

**Template Name:** Mistral

---

# Phi-3 4K

Phi-3 Mini-4K-Instruct is a lightweight, state-of-the-art open model with 3.8B parameters, optimized for instruction following and safety. It is strong at common-sense reasoning, language understanding, math, code, long-context tasks, and logical reasoning, showing robust, state-of-the-art performance among models with fewer than 13 billion parameters.

**Model Intention:** It's a 3.8B model with a 4K context, optimized for language understanding, math, code, and logical reasoning.

**Model URL:** [https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/Phi-3-mini-4k-instruct-q4.gguf?download=true](https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/Phi-3-mini-4k-instruct-q4.gguf?download=true)

**Model Info URL:** [https://huggingface.co/microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct)

**Model License:** [License Info](https://opensource.org/license/mit)

**Model Description:** Phi-3 Mini-4K-Instruct is a lightweight, state-of-the-art open model with 3.8B parameters, optimized for instruction following and safety. It is strong at common-sense reasoning, language understanding, math, code, long-context tasks, and logical reasoning, showing robust, state-of-the-art performance among models with fewer than 13 billion parameters.

**Developer:** [https://huggingface.co/microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct)

**File Size:** 2320 MB

**Context Length:** 4096 tokens

**Prompt Format:**

```
<|user|>
{{prompt}} <|end|>
<|assistant|>
```

**Template Name:** PHI3

**Add BOS Token:** Yes

---

# Yi 6B Chat

The Yi series models are the next generation of open-source large language models trained from scratch by 01.AI. Targeted as a bilingual language model and trained on a 3T multilingual corpus, the Yi series models have become some of the strongest LLMs worldwide, showing promise in language understanding, commonsense reasoning, reading comprehension, and more. For English language capability, the Yi series models ranked 2nd (just behind GPT-4), outperforming other LLMs (such as LLaMA2-chat-70B, Claude 2, and ChatGPT) on the AlpacaEval Leaderboard in Dec 2023. For Chinese language capability, the Yi series models landed in 2nd place (following GPT-4), surpassing other LLMs (such as Baidu ERNIE, Qwen, and Baichuan) on SuperCLUE in Oct 2023.

**Model Intention:** It's a 6B model that understands both English and Chinese. It's good for QA and chat.

**Model URL:** [https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/yi-chat-6b.Q4_K_M.gguf?download=true](https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/yi-chat-6b.Q4_K_M.gguf?download=true)

**Model Info URL:** [https://huggingface.co/01-ai/Yi-6B-Chat](https://huggingface.co/01-ai/Yi-6B-Chat)

**Context Length:** 4096 tokens

**Prompt Format:**

```
<|im_start|>user
{{prompt}}
<|im_end|>
<|im_start|>assistant
```

**Template Name:** yi

---

# Google Gemma 2B

Gemma is a family of lightweight, state-of-the-art open models built from the same research and technology used to create the Gemini models. Developed by Google DeepMind and other teams across Google, Gemma is named after the Latin gemma, meaning 'precious stone.' The Gemma model weights are supported by developer tools that promote innovation, collaboration, and the responsible use of artificial intelligence (AI).

**Model Intention:** It's a 2B model for Q&A purposes, but it requires a high-end device to run.

**Model URL:** [https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/gemma-2b-it-q8_0.gguf?download=true](https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/gemma-2b-it-q8_0.gguf?download=true)

**Model Info URL:** [https://huggingface.co/google/gemma-2b](https://huggingface.co/google/gemma-2b)

**Context Length:** 8192 tokens

**Prompt Format:**

```
<bos><start_of_turn>user
{{prompt}}<end_of_turn>
<start_of_turn>model
```

**Template Name:** gemma

---

# StarCoder2 3B

StarCoder2-3B is a 3B-parameter model trained on 17 programming languages from The Stack v2, with opt-out requests excluded. The model uses Grouped Query Attention and a context window of 16,384 tokens with sliding-window attention of 4,096 tokens, and was trained using the Fill-in-the-Middle objective on 3+ trillion tokens.

**Model Intention:** The model is good at 17 programming languages. Just start writing your code and the model will complete it.

**Model URL:** [https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/starcoder2-3b-instruct-gguf_Q8_0.gguf?download=true](https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/starcoder2-3b-instruct-gguf_Q8_0.gguf?download=true)

**Model Info URL:** [https://huggingface.co/bigcode/starcoder2-3b](https://huggingface.co/bigcode/starcoder2-3b)

**Context Length:** 16384 tokens

**Prompt Format:**

```
{{prompt}}
```
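
Because the starcoder template is just the raw prompt, completion amounts to feeding the model unfinished code and letting it continue. A hedged sketch, again assuming `llama-cpp-python` (the file name matches the Model URL above):

```python
# Code completion with StarCoder2: the prompt is the unfinished code itself.
from llama_cpp import Llama

llm = Llama(model_path="starcoder2-3b-instruct-gguf_Q8_0.gguf", n_ctx=16384)

snippet = "def fibonacci(n: int) -> int:\n    "
out = llm(snippet, max_tokens=64, stop=["\ndef ", "\nclass "])  # stop before the next top-level block
print(snippet + out["choices"][0]["text"])
```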

**Template Name:** starcoder

---

# Chinese Tiny LLM 2B

Chinese Tiny LLM 2B is the first Chinese-centric large language model, pretrained and fine-tuned primarily on Chinese corpora. It offers significant insights into potential biases, Chinese language capability, and multilingual adaptability.

**Model Intention:** This is a 2B-parameter Chinese model with strong Chinese comprehension and response capabilities.

**Model URL:** [https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/chinese-tiny-llm-2b-Q8_0.gguf?download=true](https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/chinese-tiny-llm-2b-Q8_0.gguf?download=true)

**Model Info URL:** [https://chinese-tiny-llm.github.io/](https://chinese-tiny-llm.github.io/)

**Context Length:** 4096 tokens

**Prompt Format:**

```
<|im_start|>user
{{prompt}}
<|im_end|>
<|im_start|>assistant
```

**Template Name:** chatml

---

# Qwen1.5 4B Chat

Qwen is the large language model and large multimodal model series of the Qwen Team, Alibaba Group. It supports both Chinese and English.

**Model Intention:** It's one of the best LLMs that support both Chinese and English.

**Model URL:** [https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/qwen1_5-4b-chat-q4_k_m.gguf?download=true](https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/qwen1_5-4b-chat-q4_k_m.gguf?download=true)

**Model Info URL:** [https://huggingface.co/Qwen/Qwen1.5-4B-Chat-GGUF](https://huggingface.co/Qwen/Qwen1.5-4B-Chat-GGUF)

**Context Length:** 32768 tokens

**Prompt Format:**

```
<|im_start|>user
{{prompt}}
<|im_end|>
<|im_start|>assistant
```
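
For the chatml-templated entries (Chinese Tiny LLM, Qwen1.5, and Dolphin below), a multi-turn conversation is just repeated `<|im_start|>…<|im_end|>` blocks with the assistant turn left open. A sketch of a history builder (the helper name is mine, not from this README):

```python
# Hypothetical helper: assembles a chatml prompt from a message history.
# The block structure mirrors the "Prompt Format" entries above.
def build_chatml(messages: list[dict]) -> str:
    parts = []
    for m in messages:  # each m: {"role": "user" | "assistant", "content": "..."}
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    parts.append("<|im_start|>assistant\n")  # leave the assistant turn open
    return "".join(parts)

history = [
    {"role": "user", "content": "你好!"},
    {"role": "assistant", "content": "你好,有什么可以帮你?"},
    {"role": "user", "content": "Translate that to English."},
]
prompt = build_chatml(history)  # generate with stop=["<|im_end|>"]
```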

**Template Name:** chatml

---

# Dolphin 2.8 Mistral v0.2 7B

This model is based on Mistral-7B-v0.2 with a 16k context length. It's an uncensored model and supports a variety of instruction, conversational, and coding skills.

**Model Intention:** It's an uncensored, capable English model, best suited to high-performance iPhones, iPads & Macs.

**Model URL:** [https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/dolphin-2.8-mistral-7b-v02-Q2_K.gguf?download=true](https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/dolphin-2.8-mistral-7b-v02-Q2_K.gguf?download=true)

**Model Info URL:** [https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02](https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02)

**Context Length:** 32768 tokens

**Prompt Format:**

```
<s><|im_start|>user
{{prompt}}
<|im_end|>
<|im_start|>assistant
```

**Template Name:** chatml

---

# WizardLM-2 7B

WizardLM-2 is one of the next-generation, state-of-the-art large language models, with improved performance on complex chat, multilingual tasks, reasoning, and agent use.

**Model Intention:** It's a state-of-the-art large language model with improved performance on complex chat, multilingual tasks, reasoning, and agent use.

**Model URL:** [https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/WizardLM-2-7B.Q3_K_M.gguf?download=true](https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/WizardLM-2-7B.Q3_K_M.gguf?download=true)

**Model Info URL:** [https://huggingface.co/MaziyarPanahi/WizardLM-2-7B-GGUF](https://huggingface.co/MaziyarPanahi/WizardLM-2-7B-GGUF)

**Context Length:** 32768 tokens

**Prompt Format:**

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
USER: {{prompt}}
ASSISTANT:
```

**Template Name:** chatml

**Add EOS Token:** No

**Parse Special Tokens:** Yes
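
The recurring Add BOS Token / Add EOS Token / Parse Special Tokens flags in these entries correspond to tokenizer options in llama.cpp bindings. A sketch of how they surface in `llama-cpp-python` (the parameter names come from that library, not from this README):

```python
# How the per-model flags map onto tokenization in llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="WizardLM-2-7B.Q3_K_M.gguf")
tokens = llm.tokenize(
    b"USER: hello\nASSISTANT:",
    add_bos=True,   # "Add BOS Token: Yes"
    special=True,   # "Parse Special Tokens: Yes": template tokens map to single IDs
)
# "Add EOS Token: No" means no EOS is appended; llm.token_eos() gives the ID if needed.
```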