---
base_model:
- v000000/MN-12B-Part1
- v000000/MN-12B-Part2
library_name: transformers
tags:
- mergekit
- merge
- mistral
license: cc-by-nc-4.0
---
<style>
h1 {
  color: #327fa8; /* Blue color */
  font-size: 1.25em; /* Larger font size */
  text-align: left; /* Left alignment */
  text-shadow: 2px 2px 4px rgba(0, 0, 0, 0.5); /* Shadow effect */
  background: linear-gradient(90deg, #327fa8, #fba8a8); /* Gradient background */
  -webkit-background-clip: text; /* Clip the background to the text */
  -webkit-text-fill-color: transparent; /* Make the text itself transparent */
}
</style>
> [!WARNING]
> **Temperature:**<br>
> Mistral Nemo performs best at low temperatures, between 0.3 and 0.5.
Mistral-Nemo-2407-12B-Estrella-v1
=====================================================================
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f74b6e6389380c77562762/MyveknmJhuj43YrukIDAU.png)
An RP model. Seems coherent and concise while still creative. A large multi-model merge built with the new DELLA technique.
<b>Prompt format: works best with "Mistral Instruct", though ChatML may also work.</b>
```
<s>[INST] System Message [/INST]
[INST] Name: Let's get started. Please respond based on the information and instructions provided above. [/INST]
[INST] Name: What is your favourite condiment? [/INST]
AssistantName: Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s>
[INST] Name: Do you have mayonnaise recipes? [/INST]
```
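As a concrete reference, here is a minimal sketch of running the model with this prompt format and the recommended low temperature via 🤗 Transformers. The repo id `v000000/MN-12B-Estrella-v1` is an assumption inferred from this card's title:

```python
# Minimal sketch, assuming the weights live at v000000/MN-12B-Estrella-v1.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "v000000/MN-12B-Estrella-v1"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Mistral Instruct format as shown above; the tokenizer prepends <s> itself.
prompt = "[INST] Name: What is your favourite condiment? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Keep the temperature low (0.3-0.5), per the warning at the top of this card.
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.4)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```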
# Thanks mradermacher for the quants
* [Static GGUFs](https://huggingface.co./mradermacher/MN-12B-Estrella-v1-GGUF)
* [Imatrix GGUFs](https://huggingface.co./mradermacher/MN-12B-Estrella-v1-i1-GGUF)
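Those quants drop straight into llama.cpp. A small sketch with llama-cpp-python; the quant filename below is hypothetical, substitute whichever file you downloaded:

```python
# Minimal sketch with llama-cpp-python; the filename is hypothetical.
from llama_cpp import Llama

llm = Llama(model_path="MN-12B-Estrella-v1.Q4_K_M.gguf", n_ctx=8192)

result = llm(
    "[INST] Name: What is your favourite condiment? [/INST]",
    max_tokens=256,
    temperature=0.4,  # low temperature, per the warning above
)
print(result["choices"][0]["text"])
```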
----------------------------------------------------------------------
## Merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged in multiple steps using the <b>DELLA</b>, <b>DELLA linear</b>, and <b>SLERP</b> merge algorithms.
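For intuition, here is a rough sketch of the SLERP step on a single pair of weight tensors, interpolating along the arc between them instead of averaging so intermediate norms stay well-behaved. This is an illustrative reimplementation, not mergekit's actual code; DELLA and DELLA linear work differently, sparsifying each model's delta from the base before combining:

```python
# Illustrative SLERP between two weight tensors; not mergekit's actual code.
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # Angle between the flattened, normalized tensors.
    a_n = a.flatten() / a.flatten().norm()
    b_n = b.flatten() / b.flatten().norm()
    theta = torch.acos(torch.clamp(a_n @ b_n, -1.0, 1.0))
    if theta < 1e-4:
        # Nearly colinear: plain linear interpolation is numerically safer.
        return (1 - t) * a + t * b
    # Interpolate along the great arc between the two tensors.
    return (torch.sin((1 - t) * theta) * a + torch.sin(t * theta) * b) / torch.sin(theta)
```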
### Models Merged
The following models were included in the merge:
* [nothingiisreal/MN-12B-Celeste-V1.9](https://huggingface.co./nothingiisreal/MN-12B-Celeste-V1.9)
* [shuttleai/shuttle-2.5-mini](https://huggingface.co./shuttleai/shuttle-2.5-mini)
* [anthracite-org/magnum-12b-v2](https://huggingface.co./anthracite-org/magnum-12b-v2)
* [Sao10K/MN-12B-Lyra-v1](https://huggingface.co./Sao10K/MN-12B-Lyra-v1)
* [unsloth/Mistral-Nemo-Instruct-2407](https://huggingface.co./unsloth/Mistral-Nemo-Instruct-2407)
* [NeverSleep/Lumimaid-v0.2-12B](https://huggingface.co./NeverSleep/Lumimaid-v0.2-12B)
* [UsernameJustAnother/Nemo-12B-Marlin-v5](https://huggingface.co./UsernameJustAnother/Nemo-12B-Marlin-v5)
* [BeaverAI/mistral-doryV2-12b](https://huggingface.co./BeaverAI/mistral-doryV2-12b)
* [invisietch/Atlantis-v0.1-12B](https://huggingface.co./invisietch/Atlantis-v0.1-12B)
### Configuration
The following YAML configurations were used to produce this model. Each step is a separate mergekit run: steps 1 and 2 produce the intermediate models MN-12B-Part1 and MN-12B-Part2, which step 3 then combines with SLERP:
```yaml
#Step 1 (Part1)
models:
  - model: Sao10K/MN-12B-Lyra-v1
    parameters:
      weight: 0.15
      density: 0.77
  - model: shuttleai/shuttle-2.5-mini
    parameters:
      weight: 0.20
      density: 0.78
  - model: anthracite-org/magnum-12b-v2
    parameters:
      weight: 0.35
      density: 0.85
  - model: nothingiisreal/MN-12B-Celeste-V1.9
    parameters:
      weight: 0.55
      density: 0.90
merge_method: della
base_model: Sao10K/MN-12B-Lyra-v1
parameters:
  int8_mask: true
  epsilon: 0.05
  lambda: 1
dtype: bfloat16

#Step 2 (Part2)
models:
  - model: BeaverAI/mistral-doryV2-12b
    parameters:
      weight: 0.10
      density: 0.4
  - model: unsloth/Mistral-Nemo-Instruct-2407
    parameters:
      weight: 0.20
      density: 0.4
  - model: UsernameJustAnother/Nemo-12B-Marlin-v5
    parameters:
      weight: 0.25
      density: 0.5
  - model: invisietch/Atlantis-v0.1-12B
    parameters:
      weight: 0.3
      density: 0.5
  - model: NeverSleep/Lumimaid-v0.2-12B
    parameters:
      weight: 0.4
      density: 0.8
merge_method: della_linear
base_model: anthracite-org/magnum-12b-v2
parameters:
  int8_mask: true
  epsilon: 0.05
  lambda: 1
dtype: bfloat16

#Step 3 (Estrella)
slices:
  - sources:
      - model: v000000/MN-12B-Part2
        layer_range: [0, 40]
      - model: v000000/MN-12B-Part1
        layer_range: [0, 40]
merge_method: slerp
base_model: v000000/MN-12B-Part1
parameters: # smooth gradient, prioritizing Part1
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 0.6, 0.1, 0.6, 0.3, 0.8, 0.5]
    - filter: mlp
      value: [0, 0.5, 0.4, 0.3, 0, 0.3, 0.4, 0.7, 0.2, 0.5]
    - value: 0.5
dtype: bfloat16
```
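The ten-element `t` lists in step 3 are gradients: mergekit spreads each list across the 40 layers, so the attention and MLP weights of every layer get their own blend ratio between Part1 (the base, t = 0) and Part2 (t = 1). A hedged sketch of that expansion, assuming simple piecewise-linear interpolation rather than mergekit's exact scheme:

```python
# Rough sketch of expanding a gradient list of t values across 40 layers;
# assumes piecewise-linear interpolation, not mergekit's exact implementation.
import numpy as np

def expand_gradient(anchors: list[float], num_layers: int) -> np.ndarray:
    # Anchor points spaced evenly over the layer stack, linearly interpolated.
    xs = np.linspace(0, num_layers - 1, num=len(anchors))
    return np.interp(np.arange(num_layers), xs, anchors)

self_attn_t = expand_gradient([0, 0.5, 0.3, 0.7, 0.6, 0.1, 0.6, 0.3, 0.8, 0.5], 40)
mlp_t = expand_gradient([0, 0.5, 0.4, 0.3, 0, 0.3, 0.4, 0.7, 0.2, 0.5], 40)
print(self_attn_t.round(2))  # per-layer blend toward Part2 for attention weights
```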