---
license: mit
language:
- en
base_model:
- unsloth/Phi-3-mini-4k-instruct
datasets:
- cognitivecomputations/Dolphin-2.9
- teknium/OpenHermes-2.5
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- microsoft/orca-math-word-problems-200k
- Locutusque/function-calling-chatml
- internlm/Agent-FLAN
---

# Dolphin 2.9.1 Phi-3 Kensho 4.5b 🐬 (Abliterated)

Curated and trained by Eric Hartford, Lucas Atkins, and Fernando Fernandes, with help from the community of Cognitive Computations.

Uncensored by [FailSpy](https://huggingface.co./failspy)

[![Discord](https://img.shields.io/discord/1156064224225808488?logo=Discord&logoColor=%23ffffff&label=Discord&link=https%3A%2F%2Fdiscord.gg%2FtCMkMDDHwm)](https://discord.gg/cognitivecomputations)

Discord: https://discord.gg/cognitivecomputations

<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png" width="600" />

Our appreciation for the sponsors of Dolphin 2.9:

- [Crusoe Cloud](https://crusoe.ai/) - provided an excellent on-demand 8xL40S node

This model utilizes PEFT layer replication at inference time to duplicate layers and increase the parameter count. This works both with the merged model that ships in this repository and with the attached adapter. Performance is similar with either method, but VRAM use is considerably lower with the adapter (a loading sketch follows the VRAM chart below).

This model was initialized using [Unsloth's Mistralfied Phi-3-Instruct-4k](https://huggingface.co./unsloth/Phi-3-mini-4k-instruct). If you choose the adapter method, please attach it to their model.

<img src="https://i.ibb.co/C6sqLBH/Vram-Use.png" width="300">

This model is based on Phi-3-Mini-Instruct-4k and is governed by the MIT license under which Microsoft released Phi-3.

The base model has a 4k context window, and the qLoRA fine-tuning used a 4k sequence length.

The model's weights were then adjusted to ablate and inhibit refusals, following the methodology described in [Refusal in LLMs is mediated by a single direction](https://www.alignmentforum.org/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction). This effectively uncensors the model while minimizing the impact on its other capabilities.
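For readers curious how the ablation step works mechanically, here is a heavily hedged sketch of the rank-1 weight orthogonalization idea described in the linked post. The variable names, shapes, and the choice of which matrices to edit are illustrative assumptions, not the exact procedure used for this model.

```python
# Hedged sketch of refusal-direction ablation (W <- W - r r^T W), not the actual
# script used here. `refusal_dir` is assumed to be a (d_model,) direction
# estimated from harmful-vs-harmless prompt activations.
import torch

def ablate_direction(weight: torch.Tensor, refusal_dir: torch.Tensor) -> torch.Tensor:
    """Remove the component along `refusal_dir` from a matrix that writes into
    the residual stream, e.g. an attention output or MLP down-projection of
    shape (d_model, d_in)."""
    r = refusal_dir / refusal_dir.norm()          # unit-normalize the direction
    return weight - torch.outer(r, r @ weight)    # subtract the rank-1 projection

# Illustrative usage (module names are hypothetical):
# layer.mlp.down_proj.weight.data = ablate_direction(layer.mlp.down_proj.weight.data, refusal_dir)
```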
Training took 2.5 days on an 8xL40S node provided by Crusoe Cloud.

This model uses the ChatML prompt template format.

Example:

```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
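Assuming the `model` and `tokenizer` from the loading sketch above, and that this repository's tokenizer carries the ChatML template shown here, a minimal generation example might look like the following; the prompt and sampling settings are illustrative, not recommended values.

```python
# Minimal generation sketch using the ChatML template above.
messages = [
    {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
    {"role": "user", "content": "Write a haiku about dolphins."},
]

# apply_chat_template renders the ChatML turns, including the trailing
# `<|im_start|>assistant` generation prompt.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```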
Dolphin-2.9.1 has a variety of instruction-following, conversational, and coding skills. It also has initial agentic abilities and supports function calling.

We have filtered the dataset to remove alignment and bias, which makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. Please read my blog post about uncensored models: https://erichartford.com/uncensored-models. You are responsible for any content you create using this model. Enjoy responsibly.

Dolphin is licensed under the MIT license. I grant permission for any use, including commercial. Dolphin was trained on data generated by GPT-4, among other models.

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)