---
base_model: mistralai/Mistral-Nemo-Base-2407
license: apache-2.0
datasets:
- BeaverAI/Nemo-Inst-Tune-ds
language:
- en
library_name: transformers
---
8bpw h8 exl2 quant of [BeaverAI/mistral-doryV2-12b](https://huggingface.co./BeaverAI/mistral-doryV2-12b)

# Dory 12b (v2) (redone)

redone instruct finetune of mistral nemo 12b's base. *not* (E)RP-focused, leave that to drummer.

![image/gif](https://cdn-uploads.huggingface.co/production/uploads/634262af8d8089ebaefd410e/BiBtgV_WEIha72WqETWfk.gif)

thanks to twisted again for the compute :3

## Prompting

alpaca-like:

```
### System:
[Optional system prompt]

### Instruction:
[Query]

### Response:
[Response]

### Instruction:
[...]
```

## Training details

Rank 64 QDoRA, trained on the following data mix:

- All of [kalomaze/Opus_Instruct_3k](https://huggingface.co./datasets/kalomaze/Opus_Instruct_3k)
- All conversations with a reward model rating above 5 in [Magpie-Align/Magpie-Gemma2-Pro-Preview-Filtered](https://huggingface.co./datasets/Magpie-Align/Magpie-Gemma2-Pro-Preview-Filtered)
- 50k rows of [Gryphe/Sonnet3.5-SlimOrcaDedupCleaned](https://huggingface.co./datasets/Gryphe/Sonnet3.5-SlimOrcaDedupCleaned)
- All stories rated above 4.7 and published before 2020 in [Fizzarolli/FallingThroughTheSkies-592k-Filtered-Filtered](https://huggingface.co./datasets/Fizzarolli/FallingThroughTheSkies-592k-Filtered-Filtered)
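if you want to build the alpaca-like prompt programmatically, a minimal sketch (the `build_prompt` helper is hypothetical, not shipped with the model or any library):

```python
from typing import Optional


def build_prompt(instruction: str, system: Optional[str] = None) -> str:
    """Assemble the alpaca-like template this model was tuned on.

    Blocks are separated by blank lines; the prompt ends with an open
    '### Response:' header so the model continues from there.
    """
    parts = []
    if system:
        parts.append(f"### System:\n{system}")
    parts.append(f"### Instruction:\n{instruction}")
    parts.append("### Response:\n")
    return "\n\n".join(parts)


if __name__ == "__main__":
    print(build_prompt("Name three colors.", system="Be terse."))
```

for multi-turn, append further `### Instruction:` / `### Response:` blocks in the same way.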