---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- dpo
- uncensored
- roleplay
- fine-tune
base_model: MTSAIR/multi_verse_model
library_name: transformers
datasets:
- grimulkan/theory-of-mind
- grimulkan/physical-reasoning
- ResplendentAI/Luna_Alpaca
- unalignment/toxic-dpo-v0.2
- kira/math-dpo
- athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW-v1-SHUFFLED
---
# 💫 Pulsar_7B
Pulsar_7B is a fine-tune of [MTSAIR/multi_verse_model](https://huggingface.co./MTSAIR/multi_verse_model), trained on these datasets:
- grimulkan/theory-of-mind
- grimulkan/physical-reasoning
- ResplendentAI/Luna_Alpaca
- unalignment/toxic-dpo-v0.2
- kira/math-dpo
- athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW-v1-SHUFFLED
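## Usage
A minimal sketch of loading the model with the Transformers `pipeline` API. The repo id below is a placeholder, since this card does not state the model's namespace; replace it with the actual Hugging Face repo id.
```python
import torch
from transformers import pipeline

# "your-username/Pulsar_7B" is a placeholder repo id; swap in the real one.
pipe = pipeline(
    "text-generation",
    model="your-username/Pulsar_7B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

out = pipe("Tell me about pulsars.", max_new_tokens=128)
print(out[0]["generated_text"])
```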
## Quantizations
Thanks to mradermacher, static GGUF quants are available [here](https://huggingface.co./mradermacher/Pulsar_7B-GGUF).
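If you prefer running a quant locally, `llama-cpp-python` can pull a GGUF file straight from that repo. The filename glob below is an assumption about how the quants are named; check the repo's file list for the exact names and quant levels.
```python
from llama_cpp import Llama

# Download a static quant from the GGUF repo and load it.
# The filename pattern is an assumption; pick any quant level available in the repo.
llm = Llama.from_pretrained(
    repo_id="mradermacher/Pulsar_7B-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=2048,
)

result = llm("Q: What is a pulsar?\nA:", max_tokens=64)
print(result["choices"][0]["text"])
```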
---
This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
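For reference, here is a minimal sketch of what a comparable Unsloth + TRL DPO run against the same base model could look like. All hyperparameters and the choice of dataset split are illustrative assumptions, not the actual training recipe used for Pulsar_7B.
```python
from unsloth import FastLanguageModel
from trl import DPOConfig, DPOTrainer
from datasets import load_dataset

# Load the base model in 4-bit and attach LoRA adapters (illustrative settings).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="MTSAIR/multi_verse_model",
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Any preference dataset with "prompt"/"chosen"/"rejected" columns works here;
# one of the DPO datasets from this card is used as an example.
dataset = load_dataset("unalignment/toxic-dpo-v0.2", split="train")

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(
        output_dir="pulsar-dpo",
        beta=0.1,  # illustrative DPO temperature
        per_device_train_batch_size=2,
    ),
    train_dataset=dataset,
    processing_class=tokenizer,  # older TRL versions take tokenizer= instead
)
trainer.train()
```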