---
language:
- en
license: apache-2.0
tags:
- mlx
base_model: alpindale/Mistral-7B-v0.2-hf
datasets:
- cognitivecomputations/dolphin
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- jondurbin/airoboros-2.2.1
- teknium/openhermes-2.5
- m-a-p/Code-Feedback
- m-a-p/CodeFeedback-Filtered-Instruction
model-index:
- name: dolphin-2.8-mistral-7b-v02
  results:
  - task:
      type: text-generation
    dataset:
      name: HumanEval
      type: openai_humaneval
    metrics:
    - type: pass@1
      value: 0.469
      name: pass@1
      verified: false
---
# mlx-community/dolphin-2.8-mistral-7b-v02-8bit
This model was converted to MLX format from [`cognitivecomputations/dolphin-2.8-mistral-7b-v02`](https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02) using mlx-lm version **0.7.0**.
Refer to the [original model card](https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02) for more details on the model.
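
For reference, a quantized conversion like this one can typically be reproduced with mlx-lm's `convert` utility. The snippet below is a minimal sketch assuming the `convert` Python API found in recent mlx-lm releases (argument names such as `q_bits` and `mlx_path` may differ slightly in version 0.7.0):

```python
from mlx_lm import convert

# Download the original weights, quantize them to 8 bits, and write the
# MLX-format model to ./dolphin-2.8-mistral-7b-v02-8bit (paths are illustrative).
convert(
    "cognitivecomputations/dolphin-2.8-mistral-7b-v02",
    mlx_path="dolphin-2.8-mistral-7b-v02-8bit",
    quantize=True,
    q_bits=8,
)
```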
## Use with mlx

```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Load the 8-bit MLX weights and matching tokenizer from the Hugging Face Hub.
model, tokenizer = load("mlx-community/dolphin-2.8-mistral-7b-v02-8bit")

# Generate a completion for a plain-text prompt; verbose=True prints the output.
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
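
For chat-style prompts, the base model's tokenizer usually ships a chat template. The sketch below assumes the tokenizer returned by `load` exposes the standard `apply_chat_template` method and falls back to the raw string otherwise; the example message is only illustrative:

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/dolphin-2.8-mistral-7b-v02-8bit")

# Wrap the user message in the model's chat template (if one is bundled)
# instead of passing a raw string prompt.
messages = [{"role": "user", "content": "Write a quicksort function in Python."}]
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
else:
    prompt = messages[0]["content"]

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```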