|
---
license: apache-2.0
base_model: cognitivecomputations/dolphin-2.9.3-mistral-7B-32K
tags:
- generated_from_trainer
- axolotl
datasets:
- cognitivecomputations/Dolphin-2.9
- teknium/OpenHermes-2.5
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- microsoft/orca-math-word-problems-200k
- Locutusque/function-calling-chatml
- internlm/Agent-FLAN
---
|
|
|
_Original Model Card_ |
|
|
|
# Dolphin 2.9.3 Mistral 7B v0.3 32k 🐬
|
|
|
Curated and trained by Eric Hartford and Cognitive Computations |
|
|
|
[![Discord](https://img.shields.io/discord/1156064224225808488?logo=Discord&logoColor=%23ffffff&label=Discord&link=https%3A%2F%2Fdiscord.gg%2FtCMkMDDHwm)](https://discord.gg/cognitivecomputations) |
|
Discord: https://discord.gg/cognitivecomputations |
|
|
|
<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png" width="600" /> |
|
|
|
This model is based on mistralai/Mistral-7B-v0.3 and is governed by the Apache 2.0 license.
|
|
|
The base model has a 32k context window, and our fine-tuning was performed with an 8192-token sequence length.
|
|
|
Dolphin 2.9.3 uses the ChatML prompt template format.
|
|
|
Example:
|
|
|
```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
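
If you build prompts programmatically, the tokenizer's bundled chat template should produce this same ChatML layout. Below is a minimal sketch using Hugging Face `transformers`; it assumes the tokenizer for cognitivecomputations/dolphin-2.9.3-mistral-7B-32K ships a ChatML chat template, so verify the output against the format above before relying on it:

```python
# Minimal sketch: building a ChatML prompt with the tokenizer's chat template.
# Assumption: the tokenizer for this repo ships a ChatML template; check the
# printed output against the format shown above.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "cognitivecomputations/dolphin-2.9.3-mistral-7B-32K"
)

messages = [
    {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
    {"role": "user", "content": "Explain what a context window is."},
]

# add_generation_prompt=True appends the trailing "<|im_start|>assistant"
# turn so the model continues as the assistant.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```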
|
|
|
## Usage |
|
|
|
```bash
ollama run CognitiveComputations/dolphin-mistral-32k:7b-v2.9.3-q4_0
```
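
Once the model is pulled, you can also query it through Ollama's local HTTP API. A minimal sketch, assuming Ollama is serving on its default port 11434 and the tag shown above has already been pulled:

```python
# Minimal sketch: querying the model through Ollama's local REST API.
# Assumption: `ollama serve` is running on the default port 11434 and the
# tag below has been pulled with `ollama run` or `ollama pull`.
import requests

response = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "CognitiveComputations/dolphin-mistral-32k:7b-v2.9.3-q4_0",
        "messages": [
            {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
            {"role": "user", "content": "Summarize the Apache 2.0 license in one sentence."},
        ],
        "stream": False,  # return one JSON object instead of a token stream
    },
    timeout=300,
)
response.raise_for_status()
print(response.json()["message"]["content"])
```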
|
|
|
## Supported Tags |
|
|
|
+ dolphin-mistral-32k:7b-v2.9.3-q2_k |
|
+ dolphin-mistral-32k:7b-v2.9.3-q3_k |
|
+ dolphin-mistral-32k:7b-v2.9.3-q4_0 |
|
+ dolphin-mistral-32k:7b-v2.9.3-q4_k_m |
|
+ dolphin-mistral-32k:7b-v2.9.3-q4_k_s |
|
+ dolphin-mistral-32k:7b-v2.9.3-q5_0 |
|
+ dolphin-mistral-32k:7b-v2.9.3-q5_k_m |
|
+ dolphin-mistral-32k:7b-v2.9.3-q5_k_s |
|
+ dolphin-mistral-32k:7b-v2.9.3-q6_k |
|
+ dolphin-mistral-32k:7b-v2.9.3-q8_0 |
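
As a general rule of thumb for these quantizations, the lower bit-widths (q2_k, q3_k) minimize memory use at a noticeable cost in output quality, while q6_k and q8_0 stay closest to the full-precision weights; q4_k_m and q5_k_m are common middle-ground choices.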