---
base_model:
- inflatebot/thorn-0.5
- inflatebot/thorn-0.35
- inflatebot/thorn-0.55
- inflatebot/thorn-0.45
library_name: transformers
tags:
- mergekit
- merge
---
This is a merge of pre-trained language models created using mergekit.
## Merge Details
NOTE: If you are getting phrase repetition or nonsense outputs with SillyTavern, make sure that "Include names" is disabled under Advanced Formatting. Nemo models tend to exhibit these issues when this is enabled.
A re-application of the Helium-3 process to Mistral Nemo analogues. Experimental (as you can tell by the revision number; I'll be playing with this more in the time to come). Ultimately based on Magnum-12B-V2 and MN-12B-Rosier-v1.
Quants are available from Reiterate3680.
Special thanks to Fizz and Toasty Pigeon.
Use ChatML formatting, as Rosier was trained from base (so no instruct format) and Magnum V2 was trained on ChatML!
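For reference, a ChatML-formatted prompt looks like this (the system turn is optional):

```
<|im_start|>system
{system prompt}<|im_end|>
<|im_start|>user
{user message}<|im_end|>
<|im_start|>assistant
```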
### Merge Method
This model was merged using the Model Stock merge method, with inflatebot/thorn-0.35 as the base.
### Models Merged
The following models were included in the merge:

- inflatebot/thorn-0.5
- inflatebot/thorn-0.45
- inflatebot/thorn-0.55
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: inflatebot/thorn-0.5
  - model: inflatebot/thorn-0.45
  - model: inflatebot/thorn-0.55
merge_method: model_stock
base_model: inflatebot/thorn-0.35
dtype: bfloat16
```
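If you want to reproduce the merge yourself, a sketch of the standard mergekit CLI invocation is below; the config filename and output directory are just examples, and `--cuda` is optional (use it if a GPU is available):

```
pip install mergekit
mergekit-yaml thorn-config.yaml ./thorn-merged --cuda
```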