---
license: other
language:
  - en
---

# MythoMist 7b

MythoMist 7b is an experimental Mistral-based merge based on my latest (still in development) algorithm, which actively benchmarks the model as it's being built in pursuit of a goal set by the user.

The primary purpose of MythoMist was to reduce usage of the words *anticipation*, *ministrations*, and other phrases we've come to associate negatively with ChatGPT roleplaying data.
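The merge algorithm itself is unpublished, but the benchmarking objective it optimizes can be illustrated with a simple scoring function. The pattern list and function below are hypothetical stand-ins, not the actual code: they score a candidate merge's sample outputs by how often the unwanted words appear, so a search loop could keep whichever candidate scores lowest.

```python
import re

# Hypothetical list of patterns to penalize; stand-ins for the phrases
# associated with ChatGPT-style roleplaying data.
BANNED_PATTERNS = [r"\banticipation\b", r"\bministrations?\b"]

def unwanted_word_score(outputs: list[str]) -> float:
    """Return the average number of banned-word hits per sample output.

    Lower is better; a merge-search loop would prefer the candidate
    with the lowest score. Illustrative only, not the published method.
    """
    hits = sum(
        len(re.findall(pattern, text, flags=re.IGNORECASE))
        for text in outputs
        for pattern in BANNED_PATTERNS
    )
    return hits / max(len(outputs), 1)
```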

I am currently in the process of cleaning up the code before publishing it, much like I did with my earlier gradient tensor script.

## Final merge composition

After processing 12 models, my algorithm ended up with the following (approximate) final composition. These contributions are spread almost randomly throughout the final model due to the way my new method works.

| Model | Contribution |
|---|---|
| Neural-chat-7b-v3-1 | 26% |
| Synatra-7B-v0.3-RP | 22% |
| Airoboros-m-7b-3.1.2 | 10% |
| Toppy-M-7B | 10% |
| Zephyr-7b-beta | 7% |
| Nous-Capybara-7B-V1.9 | 5% |
| OpenHermes-2.5-Mistral-7B | 5% |
| Dolphin-2.2.1-mistral-7b | 4% |
| Noromaid-7b-v0.1.1 | 4% |
| SynthIA-7B-v1.3 | 3% |
| Mistral-7B-v0.1 | 2% |
| Openchat_3.5 | 2% |
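One way to picture a composition like this is as a weighted random assignment of each transformer layer to a donor model. The sketch below assumes this interpretation; it is not the actual merge script, and `assign_layers` and its seed are invented for illustration.

```python
import random

# Approximate contributions from the table above, as weights.
CONTRIBUTIONS = {
    "Neural-chat-7b-v3-1": 0.26,
    "Synatra-7B-v0.3-RP": 0.22,
    "Airoboros-m-7b-3.1.2": 0.10,
    "Toppy-M-7B": 0.10,
    "Zephyr-7b-beta": 0.07,
    "Nous-Capybara-7B-V1.9": 0.05,
    "OpenHermes-2.5-Mistral-7B": 0.05,
    "Dolphin-2.2.1-mistral-7b": 0.04,
    "Noromaid-7b-v0.1.1": 0.04,
    "SynthIA-7B-v1.3": 0.03,
    "Mistral-7B-v0.1": 0.02,
    "Openchat_3.5": 0.02,
}

def assign_layers(n_layers: int = 32, seed: int = 0) -> list[str]:
    """Assign each of a Mistral 7b model's transformer layers to a donor,
    weighted by the approximate contribution percentages."""
    rng = random.Random(seed)
    models = list(CONTRIBUTIONS)
    weights = list(CONTRIBUTIONS.values())
    return rng.choices(models, weights=weights, k=n_layers)
```

In practice the real algorithm benchmarks as it builds rather than sampling blindly, so the table's percentages are an outcome, not an input.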

This new process only decides on the model's layers, not the singular lm_head and embed_tokens tensors, which influence much of the model's output. I ran a separate script for those, picking the tensors that produced the longest responses; this settled on Toppy-M-7B.
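That selection step can be sketched as a simple argmax over candidate donors. Everything here is illustrative: `pick_head_donor` and `measure_avg_response_length` are hypothetical names, and the measurement callable stands in for an actual generation run with each donor's lm_head and embed_tokens swapped in.

```python
from typing import Callable, Iterable

def pick_head_donor(
    candidates: Iterable[str],
    measure_avg_response_length: Callable[[str], float],
) -> str:
    """Return the candidate donor whose lm_head/embed_tokens tensors
    yield the longest average responses (illustrative sketch)."""
    lengths = {name: measure_avg_response_length(name) for name in candidates}
    return max(lengths, key=lengths.get)
```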

## Prompt Format

Due to the wide variation in prompt formats among the models used in this merge, I (for now) recommend using Alpaca as the prompt template for compatibility reasons:

```
### Instruction:
Your instruction or question here.

### Response:
```
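Formatting a request into this template is a one-liner; the helper below is a minimal sketch (the function name is invented, and sending the string to the model is left to whatever inference stack you use):

```python
# Alpaca-style template as recommended above.
ALPACA_TEMPLATE = (
    "### Instruction:\n"
    "{instruction}\n\n"
    "### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in the Alpaca prompt template."""
    return ALPACA_TEMPLATE.format(instruction=instruction)
```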
