---
base_model: mistralai/Mistral-Nemo-Base-2407
license: apache-2.0
datasets:
- BeaverAI/Nemo-Inst-Tune-ds
language:
- en
library_name: transformers
---
![](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)
# QuantFactory/mistral-doryV2-12b-GGUF
This is a quantized version of [BeaverAI/mistral-doryV2-12b](https://huggingface.co./BeaverAI/mistral-doryV2-12b), created using llama.cpp.
# Original Model Card
# Dory 12b (v2)
redone instruct finetune of mistral nemo 12b's base. *not* (E)RP-focused, leave that to drummer.
![image/gif](https://cdn-uploads.huggingface.co/production/uploads/634262af8d8089ebaefd410e/BiBtgV_WEIha72WqETWfk.gif)
thanks to twisted again for the compute :3
## Prompting
alpaca-like:
```
### System:
[Optional system prompt]
### Instruction:
[Query]
### Response:
[Response]</s>
### Instruction:
[...]
```
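The template above can be assembled in code roughly like this (a minimal sketch; the `build_prompt` helper and its argument names are illustrative, not part of the model's tooling):

```python
def build_prompt(instruction, system=None, history=None):
    """Format a conversation in the Alpaca-like template shown above.

    `history` is a list of (instruction, response) pairs from earlier
    turns; each prior response is closed with the </s> EOS token, as in
    the template.
    """
    parts = []
    if system:
        parts.append(f"### System:\n{system}")
    for past_instruction, past_response in (history or []):
        parts.append(f"### Instruction:\n{past_instruction}")
        parts.append(f"### Response:\n{past_response}</s>")
    parts.append(f"### Instruction:\n{instruction}")
    parts.append("### Response:\n")  # leave open for the model to complete
    return "\n".join(parts)


print(build_prompt("Summarize this paragraph.", system="You are concise."))
```

The resulting string is then passed to whatever inference backend you use (llama.cpp, transformers, etc.) as the raw prompt.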
## Training details
Rank 64 QDoRA, trained on the following data mix:
- All of [kalomaze/Opus_Instruct_3k](https://huggingface.co./datasets/kalomaze/Opus_Instruct_3k)
- All conversations with a reward model rating above 5 in [Magpie-Align/Magpie-Gemma2-Pro-Preview-Filtered](https://huggingface.co./datasets/Magpie-Align/Magpie-Gemma2-Pro-Preview-Filtered)
- 50k of [Gryphe/Sonnet3.5-SlimOrcaDedupCleaned](https://huggingface.co./datasets/Gryphe/Sonnet3.5-SlimOrcaDedupCleaned)
- All stories above 4.7 rating and published before 2020 in [Fizzarolli/FallingThroughTheSkies-592k-Filtered-Filtered](https://huggingface.co./datasets/Fizzarolli/FallingThroughTheSkies-592k-Filtered-Filtered)
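The original training code isn't published here, but a rank-64 QDoRA setup (DoRA adapters on a quantized base) can be sketched with the `peft` and `bitsandbytes` libraries roughly as follows. Every hyperparameter other than the rank is an assumption, not taken from this card:

```python
# Hedged sketch of a rank-64 QDoRA configuration; only r=64 comes from
# the model card, everything else is a plausible default.
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# "Q" in QDoRA: load the base model with 4-bit quantized weights.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype="bfloat16",
)

# DoRA is enabled on top of a standard LoRA config via use_dora=True.
peft_config = LoraConfig(
    r=64,                        # rank 64, as stated above
    lora_alpha=64,               # assumed; alpha is not given in the card
    use_dora=True,               # weight-decomposed low-rank adaptation
    target_modules="all-linear", # assumed target set
    task_type="CAUSAL_LM",
)
```

These two configs would then be passed to `AutoModelForCausalLM.from_pretrained(..., quantization_config=bnb_config)` and `get_peft_model(...)` respectively in a typical PEFT training loop.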