---
pipeline_tag: text-generation
library_name: transformers
language:
- en
license: llama3
tags:
- mergekit
- merge
- multi-step merge
- not-for-all-audiences
- nsfw
- rp
- roleplay
- role-play
- summarization
- emotion classification
base_model:
- nothingiisreal/L3-8B-Celeste-v1
- Nitral-AI/Hathor_Tahsin-L3-8B-v0.85
- Sao10K/L3-8B-Stheno-v3.2
- ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B
- Sao10K/L3-8B-Lunaris-v1
- turboderp/llama3-turbcat-instruct-8b
- ChaoticNeutrals/Domain-Fusion-L3-8B
- migtissera/Llama-3-8B-Synthia-v3.5
- TheDrummer/Llama-3SOME-8B-v2
- ChaoticNeutrals/Hathor_RP-v.01-L3-8B
- TheSkullery/llama-3-cat-8b-instruct-v1
- FPHam/L3-8B-Everything-COT
- Ayush-1722/Meta-Llama-3-8B-Instruct-Summarize-v0.2-24K-LoRANET-Merged
- OEvortex/Emotional-llama-8B
- lighteternal/Llama3-merge-biomed-8b
- Casual-Autopsy/Llama3-merge-psychotherapy-8b
- Sao10K/L3-8B-Tamamo-v1
- ResplendentAI/Nymph_8B
- ChaoticNeutrals/T-900-8B
- Sao10K/L3-8B-Niitama-v1
- bluuwhale/L3-SthenoMaidBlackroot-8B-V1
- Hastagaras/Jamet-8B-L3-MK.V-Blackroot
- Hastagaras/Halu-8B-Llama3-Blackroot
- crestf411/L3-8B-sunfall-v0.4-stheno-v3.2
---

| |
|:---:|
| Image generated by [mayonays_on_toast](https://civitai.com/user/mayonays_on_toast) - [Sauce](https://civitai.com/images/10153472) |

***
***
***

# L3-Super-Nova-RP-8B

This is a role-playing model designed to combine strong creativity with intelligence for more advanced role-playing experiences. The aim of L3-Super-Nova-RP-8B is to be good at Chain-of-Thought reasoning, summarizing information, and recognizing emotions. It also includes data about the human body and mind in an attempt to enhance understanding and interaction within role-playing scenarios.

The model was developed using various methods across multiple merging steps. To boost creativity, output-strengthening and adjustment techniques (merge densification and negative weighting) were paired with the newly released DELLA merge method. All merge calculations were done in float32 and then converted to the usual bfloat16 during merging.

***
***

## Presets

***

### Text Gen

The current recommended starting preset for this model: [***Nova***](https://huggingface.co./Casual-Autopsy/L3-Super-Nova-RP-8B/tree/main/ST/TextGen%20Preset) (Ooba Only)

***

### Context/Instruct

[Virt-io's SillyTavern Presets](https://huggingface.co./Virt-io/SillyTavern-Presets) work really well with this model.

***
***

## Usage Info

Some of the **INT** models were chosen with several of SillyTavern's features in mind, such as emotion-based sprites, dynamic music, and pretty much any feature, extension, or STscript that uses summarization. With that said, it's recommended to use SillyTavern as your front-end.

While not required, I'd recommend building the story string prompt with Lorebooks rather than using the Advanced Formatting menu. The only thing you really need in the Story String prompt within Advanced Formatting is the system prompt. Doing it this way tends to keep the character more consistent as the RP goes on, since all character card info is locked to a certain depth rather than drifting further and further back in the context.

***
***

## Quants

GGUF:
- [Static GGUFs](https://huggingface.co./mradermacher/L3-Super-Nova-RP-8B-GGUF) by mradermacher
- [Imatrix GGUFs](https://huggingface.co./mradermacher/L3-Super-Nova-RP-8B-i1-GGUF) by mradermacher

Exl2:
- [8.0bpw-h8 Exl2](https://huggingface.co./Slvcxc/L3-Super-Nova-RP-8B-8.0bpw-h8-exl2) by Slvcxc
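If you'd rather test the unquantized weights directly, they can be loaded with Hugging Face Transformers. A minimal sketch; the example chat and sampler values are illustrative placeholders, not an official recommendation:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Casual-Autopsy/L3-Super-Nova-RP-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the merge itself was output in bfloat16
    device_map="auto",
)

# With the Llama 3 Instruct chat template, the system prompt is where a
# character card / RP instructions would normally go.
messages = [
    {"role": "system", "content": "You are Nova, a quick-witted starship smuggler. Stay in character."},
    {"role": "user", "content": "*slides into the co-pilot seat* So, where are we headed this time?"},
]

input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=1.0,  # placeholder sampler values; see the Nova preset above
    top_p=0.95,
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```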
***
***

## Merge Info

The merge methods used were **Ties**, **Dare Ties**, **Breadcrumbs Ties**, **SLERP**, and **DELLA**. The model was finished off with both **Merge Densification** and **Negative Weighting** techniques to boost creativity. All merging steps had their merge calculations done in **float32** and were output as **bfloat16**.
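For readers new to these methods, the sketch below shows the shared idea behind the density/weight-based steps in the configs further down: each fine-tune is reduced to a delta from the base model, randomly pruned to a target `density`, rescaled to compensate, and added back with a signed `weight`. The small negative weights used in the DELLA steps subtract a model's influence instead of adding it. This is a toy numpy illustration under those assumptions, not mergekit's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def dare_delta(finetuned: np.ndarray, base: np.ndarray, density: float) -> np.ndarray:
    """DARE-style pruning: randomly keep a `density` fraction of the delta's
    entries and rescale the survivors so the expected update is preserved."""
    delta = finetuned - base
    mask = rng.random(delta.shape) < density
    return (delta * mask) / density

# Toy stand-ins for one tensor of the base model and two fine-tunes.
base = rng.normal(size=(4, 4))
model_a = base + rng.normal(scale=0.1, size=(4, 4))
model_b = base + rng.normal(scale=0.1, size=(4, 4))

# Weighted sum of pruned deltas; the negative weight *removes* model_b traits.
merged = (
    base
    + 0.495 * dare_delta(model_a, base, density=0.5)
    - 0.015 * dare_delta(model_b, base, density=0.5)
)
print(merged.round(3))
```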
***

### Models Merged

The following models were used to make this merge:

* [nothingiisreal/L3-8B-Celeste-v1](https://huggingface.co./nothingiisreal/L3-8B-Celeste-v1)
* [Nitral-AI/Hathor_Tahsin-L3-8B-v0.85](https://huggingface.co./Nitral-AI/Hathor_Tahsin-L3-8B-v0.85)
* [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co./Sao10K/L3-8B-Stheno-v3.2)
* [ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B](https://huggingface.co./ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B)
* [Sao10K/L3-8B-Lunaris-v1](https://huggingface.co./Sao10K/L3-8B-Lunaris-v1)
* [turboderp/llama3-turbcat-instruct-8b](https://huggingface.co./turboderp/llama3-turbcat-instruct-8b)
* [ChaoticNeutrals/Domain-Fusion-L3-8B](https://huggingface.co./ChaoticNeutrals/Domain-Fusion-L3-8B)
* [migtissera/Llama-3-8B-Synthia-v3.5](https://huggingface.co./migtissera/Llama-3-8B-Synthia-v3.5)
* [TheDrummer/Llama-3SOME-8B-v2](https://huggingface.co./TheDrummer/Llama-3SOME-8B-v2)
* [ChaoticNeutrals/Hathor_RP-v.01-L3-8B](https://huggingface.co./ChaoticNeutrals/Hathor_RP-v.01-L3-8B)
* [TheSkullery/llama-3-cat-8b-instruct-v1](https://huggingface.co./TheSkullery/llama-3-cat-8b-instruct-v1)
* [FPHam/L3-8B-Everything-COT](https://huggingface.co./FPHam/L3-8B-Everything-COT)
* [Ayush-1722/Meta-Llama-3-8B-Instruct-Summarize-v0.2-24K-LoRANET-Merged](https://huggingface.co./Ayush-1722/Meta-Llama-3-8B-Instruct-Summarize-v0.2-24K-LoRANET-Merged)
* [OEvortex/Emotional-llama-8B](https://huggingface.co./OEvortex/Emotional-llama-8B)
* [lighteternal/Llama3-merge-biomed-8b](https://huggingface.co./lighteternal/Llama3-merge-biomed-8b)
* [Casual-Autopsy/Llama3-merge-psychotherapy-8b](https://huggingface.co./Casual-Autopsy/Llama3-merge-psychotherapy-8b)
* [Sao10K/L3-8B-Tamamo-v1](https://huggingface.co./Sao10K/L3-8B-Tamamo-v1)
* [ResplendentAI/Nymph_8B](https://huggingface.co./ResplendentAI/Nymph_8B)
* [ChaoticNeutrals/T-900-8B](https://huggingface.co./ChaoticNeutrals/T-900-8B)
* [Sao10K/L3-8B-Niitama-v1](https://huggingface.co./Sao10K/L3-8B-Niitama-v1)
* [bluuwhale/L3-SthenoMaidBlackroot-8B-V1](https://huggingface.co./bluuwhale/L3-SthenoMaidBlackroot-8B-V1)
* [Hastagaras/Jamet-8B-L3-MK.V-Blackroot](https://huggingface.co./Hastagaras/Jamet-8B-L3-MK.V-Blackroot)
* [Hastagaras/Halu-8B-Llama3-Blackroot](https://huggingface.co./Hastagaras/Halu-8B-Llama3-Blackroot)
* [crestf411/L3-8B-sunfall-v0.4-stheno-v3.2](https://huggingface.co./crestf411/L3-8B-sunfall-v0.4-stheno-v3.2)

***
***

## Evaluation Results

***

### Open LLM Leaderboard

[LB Link](https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard)

**Explanation for AI RP newbies:** IFEval is the most important evaluation for RP AIs, as it measures how well the model can follow OOC instructions, Lorebooks, and, most importantly, character cards. The rest don't matter nearly as much as IFEval.

|Metric             | Value|
|:------------------|------:|
|Avg.               |N/A|
|IFEval (0-Shot)    |N/A|
|BBH (3-Shot)       |N/A|
|MATH Lvl 5 (4-Shot)|N/A|
|GPQA (0-shot)      |N/A|
|MuSR (0-shot)      |N/A|
|MMLU-PRO (5-shot)  |N/A|

***

### UGI Leaderboard

[LB Link](https://huggingface.co./spaces/DontPlanToEnd/UGI-Leaderboard)

Information about the metrics can be found at the bottom of the [UGI Leaderboard](https://huggingface.co./spaces/DontPlanToEnd/UGI-Leaderboard) in the respective tabs.

|Metric (UGI-Leaderboard) | Value | Value | Metric (Writing Style)|
|:------------------------|:-----:|:-----:|----------------------:|
|UGI (Avg.)               |23.56  |0.199  |RegV1                  |
|W/10                     |5.8    |0.218  |RegV2                  |
|Unruly                   |22.5   |0.15   |MyScore                |
|Internet                 |11.8   |8.34   |ASSS                   |
|Stats                    |18.7   |10.26  |SMOG                   |
|Writing                  |31.5   |1.76   |Yule                   |
|PolContro                |33.3   |       |                       |

***
***

## Secret Sauce

The following YAML configs were used to make this merge.

***

### Super-Nova-CRE_pt.1

```yaml
models:
  - model: nothingiisreal/L3-8B-Celeste-v1
  - model: Nitral-AI/Hathor_Tahsin-L3-8B-v0.85
    parameters:
      density: [0.35, 0.45, 0.5, 0.55, 0.65, 0.55, 0.5, 0.45, 0.35]
      weight: [0.495, 0.165, 0.165, 0.495, 0.495, 0.165, 0.165, 0.495]
  - model: Sao10K/L3-8B-Stheno-v3.2
    parameters:
      density: [0.65, 0.55, 0.5, 0.45, 0.35, 0.45, 0.5, 0.55, 0.65]
      weight: [0.165, 0.495, 0.495, 0.165, 0.165, 0.495, 0.495, 0.165]
merge_method: dare_ties
base_model: nothingiisreal/L3-8B-Celeste-v1
parameters:
  normalize: false
  int8_mask: true
dtype: float32
out_dtype: bfloat16
```

***

### Super-Nova-CRE_pt.2

```yaml
models:
  - model: nothingiisreal/L3-8B-Celeste-v1
  - model: ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B
    parameters:
      density: [0.35, 0.45, 0.5, 0.55, 0.65, 0.55, 0.5, 0.45, 0.35]
      weight: [0.165, 0.495, 0.495, 0.165, 0.165, 0.495, 0.495, 0.165]
  - model: Sao10K/L3-8B-Lunaris-v1
    parameters:
      density: [0.65, 0.55, 0.5, 0.45, 0.35, 0.45, 0.5, 0.55, 0.65]
      weight: [0.495, 0.165, 0.165, 0.495, 0.495, 0.165, 0.165, 0.495]
merge_method: dare_ties
base_model: nothingiisreal/L3-8B-Celeste-v1
parameters:
  normalize: false
  int8_mask: true
dtype: float32
out_dtype: bfloat16
```

***

### Super-Nova-UNC_pt.1

```yaml
models:
  - model: turboderp/llama3-turbcat-instruct-8b
  - model: ChaoticNeutrals/Domain-Fusion-L3-8B
    parameters:
      density: 0.5
      weight: [0.495, 0.165, 0.165, 0.495, 0.495, 0.165, 0.165, 0.495]
  - model: migtissera/Llama-3-8B-Synthia-v3.5
    parameters:
      density: 0.5
      weight: [0.165, 0.495, 0.495, 0.165, 0.165, 0.495, 0.495, 0.165]
merge_method: dare_ties
base_model: turboderp/llama3-turbcat-instruct-8b
parameters:
  normalize: false
  int8_mask: true
dtype: float32
out_dtype: bfloat16
```

***

### Super-Nova-UNC_pt.2

```yaml
models:
  - model: turboderp/llama3-turbcat-instruct-8b
  - model: TheDrummer/Llama-3SOME-8B-v2
    parameters:
      density: 0.5
      weight: [0.165, 0.495, 0.495, 0.165, 0.165, 0.495, 0.495, 0.165]
  - model: ChaoticNeutrals/Hathor_RP-v.01-L3-8B
    parameters:
      density: 0.5
      weight: [0.495, 0.165, 0.165, 0.495, 0.495, 0.165, 0.165, 0.495]
merge_method: dare_ties
base_model: turboderp/llama3-turbcat-instruct-8b
parameters:
  normalize: false
  int8_mask: true
dtype: float32
out_dtype: bfloat16
```

***

### Super-Nova-INT_pt.1

```yaml
models:
  - model: TheSkullery/llama-3-cat-8b-instruct-v1
  - model: FPHam/L3-8B-Everything-COT
    parameters:
      density: 0.5
      weight: [0.139, 0.139, 0.208, 0.139, 0.208]
  - model: Ayush-1722/Meta-Llama-3-8B-Instruct-Summarize-v0.2-24K-LoRANET-Merged
    parameters:
      density: 0.5
      weight: [0.139, 0.208, 0.139, 0.208, 0.139]
  - model: OEvortex/Emotional-llama-8B
    parameters:
      density: 0.5
      weight: [0.208, 0.139, 0.208, 0.139, 0.139]
  - model: lighteternal/Llama3-merge-biomed-8b
    parameters:
      density: 0.5
      weight: [0.208, 0.139, 0.139, 0.139, 0.208]
  - model: Casual-Autopsy/Llama3-merge-psychotherapy-8b
    parameters:
      density: 0.5
      weight: [0.139, 0.208, 0.139, 0.208, 0.139]
merge_method: ties
base_model: TheSkullery/llama-3-cat-8b-instruct-v1
parameters:
  normalize: false
  int8_mask: true
dtype: float32
out_dtype: bfloat16
```
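The `ties` step above and the `breadcrumbs_ties` step below both build on TIES-style sign election: each fine-tune's delta is trimmed to its largest-magnitude entries, a per-parameter sign is elected across models, and only deltas agreeing with that sign are combined (breadcrumbs additionally discards a small fraction of the very largest deltas, controlled by `gamma`). A toy numpy sketch of the core idea, not mergekit's implementation:

```python
import numpy as np

def ties_merge(deltas: list[np.ndarray], weights: list[float], density: float) -> np.ndarray:
    """Toy TIES step: trim each delta to its top-`density` magnitudes,
    elect a per-parameter sign, then combine only the agreeing deltas."""
    trimmed = []
    for d in deltas:
        k = int(np.ceil(density * d.size))
        cutoff = np.sort(np.abs(d), axis=None)[-k]
        trimmed.append(np.where(np.abs(d) >= cutoff, d, 0.0))
    stacked = np.stack([w * d for w, d in zip(weights, trimmed)])
    elected_sign = np.sign(stacked.sum(axis=0))       # majority direction per parameter
    agree = np.sign(stacked) == elected_sign          # keep only deltas that agree
    return np.where(agree, stacked, 0.0).sum(axis=0)  # disjoint weighted sum

rng = np.random.default_rng(0)
base = rng.normal(size=(4, 4))
deltas = [rng.normal(scale=0.1, size=(4, 4)) for _ in range(3)]
merged = base + ties_merge(deltas, weights=[0.139, 0.208, 0.139], density=0.5)
print(merged.round(3))
```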
***

### Super-Nova-INT_pt.2

```yaml
models:
  - model: TheSkullery/llama-3-cat-8b-instruct-v1
  - model: FPHam/L3-8B-Everything-COT
    parameters:
      density: 0.9
      gamma: 0.01
      weight: [0.139, 0.208, 0.208, 0.139, 0.139]
  - model: Ayush-1722/Meta-Llama-3-8B-Instruct-Summarize-v0.2-24K-LoRANET-Merged
    parameters:
      density: 0.9
      gamma: 0.01
      weight: [0.208, 0.139, 0.139, 0.139, 0.208]
  - model: OEvortex/Emotional-llama-8B
    parameters:
      density: 0.9
      gamma: 0.01
      weight: [0.139, 0.139, 0.208, 0.208, 0.139]
  - model: lighteternal/Llama3-merge-biomed-8b
    parameters:
      density: 0.9
      gamma: 0.01
      weight: [0.139, 0.208, 0.139, 0.208, 0.139]
  - model: Casual-Autopsy/Llama3-merge-psychotherapy-8b
    parameters:
      density: 0.9
      gamma: 0.01
      weight: [0.208, 0.139, 0.139, 0.139, 0.208]
merge_method: breadcrumbs_ties
base_model: TheSkullery/llama-3-cat-8b-instruct-v1
parameters:
  normalize: false
  int8_mask: true
dtype: float32
out_dtype: bfloat16
```

***

### Super-Nova-CRE

```yaml
models:
  - model: Casual-Autopsy/Super-Nova-CRE_pt.1
  - model: Casual-Autopsy/Super-Nova-CRE_pt.2
merge_method: slerp
base_model: Casual-Autopsy/Super-Nova-CRE_pt.1
parameters:
  t:
    - filter: self_attn
      value: [0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5]
    - filter: mlp
      value: [0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5]
    - value: 0.5
  embed_slerp: true
dtype: float32
out_dtype: bfloat16
```

***

### Super-Nova-UNC

```yaml
models:
  - model: Casual-Autopsy/Super-Nova-UNC_pt.1
  - model: Casual-Autopsy/Super-Nova-UNC_pt.2
merge_method: slerp
base_model: Casual-Autopsy/Super-Nova-UNC_pt.1
parameters:
  t:
    - value: [0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5]
  embed_slerp: true
dtype: float32
out_dtype: bfloat16
```

***

### Super-Nova-INT

```yaml
models:
  - model: Casual-Autopsy/Super-Nova-INT_pt.1
  - model: Casual-Autopsy/Super-Nova-INT_pt.2
merge_method: slerp
base_model: Casual-Autopsy/Super-Nova-INT_pt.1
parameters:
  t:
    - value: 0.5
  embed_slerp: true
dtype: float32
out_dtype: bfloat16
```

***

### Super-Nova-RP_stp.1

```yaml
models:
  - model: Casual-Autopsy/Super-Nova-CRE
  - model: Casual-Autopsy/Super-Nova-UNC
merge_method: slerp
base_model: Casual-Autopsy/Super-Nova-CRE
parameters:
  t:
    - value: [0.7, 0.5, 0.3, 0.25, 0.2, 0.25, 0.3, 0.5, 0.7]
  embed_slerp: true
dtype: float32
out_dtype: bfloat16
```

***

### Super-Nova-RP_stp.2

```yaml
models:
  - model: Casual-Autopsy/Super-Nova-RP_stp.1
  - model: Casual-Autopsy/Super-Nova-INT
merge_method: slerp
base_model: Casual-Autopsy/Super-Nova-RP_stp.1
parameters:
  t:
    - value: [0.1, 0.15, 0.2, 0.4, 0.6, 0.4, 0.2, 0.15, 0.1]
  embed_slerp: true
dtype: float32
out_dtype: bfloat16
```

***

### Super-Nova-RP_pt.1

```yaml
models:
  - model: Casual-Autopsy/Super-Nova-RP_stp.2
  - model: Sao10K/L3-8B-Tamamo-v1
    parameters:
      density: [0.4, 0.6, 0.5, 0.6, 0.4]
      epsilon: [0.15, 0.15, 0.25, 0.15, 0.15]
      lambda: 0.85
      weight: [-0.01523, 0.01768, -0.01384, 0.01835, -0.01247]
  - model: ResplendentAI/Nymph_8B
    parameters:
      density: [0.65, 0.35, 0.5, 0.35, 0.65]
      epsilon: [0.1, 0.1, 0.25, 0.1, 0.1]
      lambda: 0.85
      weight: [0.01823, -0.01647, 0.01422, -0.01975, 0.01128]
  - model: ChaoticNeutrals/T-900-8B
    parameters:
      density: [0.35, 0.65, 0.5, 0.65, 0.35]
      epsilon: [0.1, 0.1, 0.25, 0.1, 0.1]
      lambda: 0.85
      weight: [-0.01891, 0.01554, -0.01325, 0.01791, -0.01458]
  - model: Sao10K/L3-8B-Niitama-v1
    parameters:
      density: [0.6, 0.4, 0.5, 0.4, 0.6]
      epsilon: [0.15, 0.15, 0.25, 0.15, 0.15]
      lambda: 0.85
      weight: [0.01768, -0.01675, 0.01285, -0.01696, 0.01421]
merge_method: della
base_model: Casual-Autopsy/Super-Nova-RP_stp.2
parameters:
  normalize: false
  int8_mask: true
dtype: float32
out_dtype: bfloat16
```
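The SLERP consolidation steps above (and the final config at the end of this card) blend two branches by spherical rather than linear interpolation: `t` controls how far each tensor moves from the base-side model toward the other, and the per-filter lists vary `t` with layer depth separately for attention and MLP weights. A bare-bones sketch of slerp on a single flattened tensor, with toy data rather than mergekit's code:

```python
import numpy as np

def slerp(a: np.ndarray, b: np.ndarray, t: float) -> np.ndarray:
    """Spherical linear interpolation between two flattened weight tensors."""
    a_n = a / np.linalg.norm(a)
    b_n = b / np.linalg.norm(b)
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return (1 - t) * a + t * b  # (nearly) parallel: fall back to lerp
    return (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

rng = np.random.default_rng(0)
w_pt1 = rng.normal(size=16)  # stands in for a tensor from one branch
w_pt2 = rng.normal(size=16)  # stands in for the same tensor from the other branch

# t=0.3 stays closer to the first branch, t=0.7 leans toward the second,
# mirroring the per-layer schedules in the configs.
for t in (0.3, 0.5, 0.7):
    print(t, slerp(w_pt1, w_pt2, t)[:3].round(3))
```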
***

### Super-Nova-RP_pt.2

```yaml
models:
  - model: Casual-Autopsy/Super-Nova-RP_stp.2
  - model: bluuwhale/L3-SthenoMaidBlackroot-8B-V1
    parameters:
      density: [0.4, 0.6, 0.5, 0.6, 0.4]
      epsilon: [0.15, 0.15, 0.25, 0.15, 0.15]
      lambda: 0.85
      weight: [-0.01935, 0.01785, -0.01512, 0.01809, -0.01371]
  - model: Hastagaras/Jamet-8B-L3-MK.V-Blackroot
    parameters:
      density: [0.65, 0.35, 0.5, 0.35, 0.65]
      epsilon: [0.1, 0.1, 0.25, 0.1, 0.1]
      lambda: 0.85
      weight: [0.01847, -0.01468, 0.01503, -0.01822, 0.01459]
  - model: Hastagaras/Halu-8B-Llama3-Blackroot
    parameters:
      density: [0.35, 0.65, 0.5, 0.65, 0.35]
      epsilon: [0.1, 0.1, 0.25, 0.1, 0.1]
      lambda: 0.85
      weight: [-0.01578, 0.01821, -0.01753, 0.01677, -0.01442]
  - model: crestf411/L3-8B-sunfall-v0.4-stheno-v3.2
    parameters:
      density: [0.6, 0.5, 0.5, 0.5, 0.6]
      epsilon: [0.15, 0.15, 0.25, 0.15, 0.15]
      lambda: 0.85
      weight: [0.01667, -0.01740, 0.01560, -0.01564, 0.01315]
merge_method: della
base_model: Casual-Autopsy/Super-Nova-RP_stp.2
parameters:
  normalize: false
  int8_mask: true
dtype: float32
out_dtype: bfloat16
```

***

### L3-Super-Nova-RP-8B

```yaml
models:
  - model: Casual-Autopsy/Super-Nova-RP_pt.1
  - model: Casual-Autopsy/Super-Nova-RP_pt.2
merge_method: slerp
base_model: Casual-Autopsy/Super-Nova-RP_pt.1
parameters:
  t:
    - value: 0.5
dtype: float32
out_dtype: bfloat16
```
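To reproduce the pipeline, each config above is run as its own mergekit step, and the intermediate outputs are then referenced by the later configs. For reference, a single step can be executed with the `mergekit-yaml` CLI or, roughly, through mergekit's Python API as sketched below; the file name and options are placeholders, and the exact API surface may differ between mergekit versions.

```python
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Placeholder path: one of the configs above saved to disk.
with open("super-nova-cre_pt1.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Super-Nova-CRE_pt.1",  # later configs point at this folder
    options=MergeOptions(
        cuda=torch.cuda.is_available(),
        copy_tokenizer=True,
        lazy_unpickle=False,
    ),
)
```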