---
base_model:
- ClaudioItaly/Evolutionstory-7B-v2.2
- flammenai/flammen15-gutenberg-DPO-v1-7B
library_name: transformers
tags:
- mergekit
- merge
---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
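
The merged model can be loaded with the `transformers` library like any other causal language model. The snippet below is a minimal usage sketch; the repository id is a placeholder, not the actual location of this merge:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id: substitute the actual Hugging Face repository of this merge.
repo_id = "your-username/your-merged-model"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16)

prompt = "Once upon a time"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```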

## Merge Details

### Merge Method

This model was merged using the SLERP merge method, with [ClaudioItaly/Evolutionstory-7B-v2.2](https://huggingface.co./ClaudioItaly/Evolutionstory-7B-v2.2) as the base model.
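
SLERP (spherical linear interpolation) blends each pair of corresponding weight tensors along the arc between them rather than along a straight line, which tends to preserve the geometry of the weights better than plain averaging. The snippet below is only an illustrative sketch of the idea for a single tensor pair; mergekit's actual implementation differs in details such as per-layer `t` schedules (a list value for `t`, as in the configuration below, is treated as a gradient of interpolation factors across layers):

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors of the same shape."""
    a_flat = a.flatten().float()
    b_flat = b.flatten().float()
    # Angle between the two weight vectors, treated as points on a hypersphere.
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    omega = torch.arccos(torch.clamp(torch.dot(a_unit, b_unit), -1.0, 1.0))
    sin_omega = torch.sin(omega)
    if sin_omega.abs() < eps:
        # Nearly parallel vectors: fall back to ordinary linear interpolation.
        mixed = (1.0 - t) * a_flat + t * b_flat
    else:
        mixed = (
            torch.sin((1.0 - t) * omega) / sin_omega * a_flat
            + torch.sin(t * omega) / sin_omega * b_flat
        )
    return mixed.reshape(a.shape).to(a.dtype)
```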

### Models Merged

The following models were included in the merge:

* [ClaudioItaly/Evolutionstory-7B-v2.2](https://huggingface.co./ClaudioItaly/Evolutionstory-7B-v2.2)
* [flammenai/flammen15-gutenberg-DPO-v1-7B](https://huggingface.co./flammenai/flammen15-gutenberg-DPO-v1-7B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: flammenai/flammen15-gutenberg-DPO-v1-7B
  - model: ClaudioItaly/Evolutionstory-7B-v2.2
merge_method: slerp
tokenizer_merge_method: slerp
tokenizer_parameters:
  t: 0.3 # Gives more weight to the Evolutionstory-7B-v2.2 tokenizer
base_model: ClaudioItaly/Evolutionstory-7B-v2.2
dtype: bfloat16
parameters:
  t: [0, 0.2, 0.4, 0.5, 0.4, 0.2, 0] # Curve that slightly favors Evolutionstory-7B-v2.2
  temp: 1.3 # Temperature to smooth the merge
  density: # Density merging to balance the characteristics of the two models
    - threshold: 0.1
      t: 0.7
    - threshold: 0.5
      t: 0.5
    - threshold: 0.9
      t: 0.3
```
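
To reproduce the merge, the configuration above can be saved to a YAML file and passed to mergekit, for example via its `mergekit-yaml` command-line entry point. The sketch below uses mergekit's Python interface instead, assuming the API shown in the mergekit README; the file and output paths are placeholders:

```python
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Assumes the YAML configuration above has been saved to "merge-config.yml".
with open("merge-config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./merged-model",  # placeholder output directory
    options=MergeOptions(
        cuda=False,             # set True to run the merge on GPU
        copy_tokenizer=True,
        lazy_unpickle=False,
    ),
)
```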