---
base_model:
- KingNish/Reasoning-Llama-3b-v0.1
- chuanli11/Llama-3.2-3B-Instruct-uncensored
- alpindale/Llama-3.2-3B
- alpindale/Llama-3.2-3B-Instruct
- Hastagaras/L3.2-JametMini-3B-MK.III
- bunnycore/Llama-3.2-3B-TitanFusion-v2
library_name: transformers
tags:
- mergekit
- merge
---
# Llama-3.2-3B-Stock
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [alpindale/Llama-3.2-3B](https://huggingface.co./alpindale/Llama-3.2-3B) as the base.
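As a rough, hedged sketch of the idea (the paper's two-model formula, not mergekit's actual implementation): for each tensor, the fine-tuned models are averaged and then pulled back toward the base, with the interpolation ratio derived from the angle between the fine-tuned deltas. The function name and NumPy framing below are illustrative only.

```python
# Rough per-tensor sketch of the Model Stock idea (two fine-tuned models).
# Illustrative only -- this follows the paper's two-model formula, not mergekit's code.
import numpy as np

def model_stock_two(w0: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> np.ndarray:
    """Merge two fine-tuned tensors (w1, w2) toward the base tensor w0."""
    d1, d2 = (w1 - w0).ravel(), (w2 - w0).ravel()
    cos_theta = float(d1 @ d2) / (np.linalg.norm(d1) * np.linalg.norm(d2) + 1e-12)
    t = 2 * cos_theta / (1 + cos_theta)   # interpolation ratio from the paper
    w_avg = (w1 + w2) / 2                 # plain average of the fine-tuned weights
    return t * w_avg + (1 - t) * w0       # pull the average back toward the base
```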
### Models Merged
The following models were included in the merge:
* [KingNish/Reasoning-Llama-3b-v0.1](https://huggingface.co./KingNish/Reasoning-Llama-3b-v0.1)
* [chuanli11/Llama-3.2-3B-Instruct-uncensored](https://huggingface.co./chuanli11/Llama-3.2-3B-Instruct-uncensored)
* [alpindale/Llama-3.2-3B-Instruct](https://huggingface.co./alpindale/Llama-3.2-3B-Instruct)
* [Hastagaras/L3.2-JametMini-3B-MK.III](https://huggingface.co./Hastagaras/L3.2-JametMini-3B-MK.III)
* [bunnycore/Llama-3.2-3B-TitanFusion-v2](https://huggingface.co./bunnycore/Llama-3.2-3B-TitanFusion-v2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: Hastagaras/L3.2-JametMini-3B-MK.III
    parameters:
      weight: 1
      density: 1
  - model: KingNish/Reasoning-Llama-3b-v0.1
    parameters:
      weight: 1
      density: 1
  - model: bunnycore/Llama-3.2-3B-TitanFusion-v2
    parameters:
      weight: 1
      density: 1
  - model: chuanli11/Llama-3.2-3B-Instruct-uncensored
    parameters:
      weight: 1
      density: 1
  - model: alpindale/Llama-3.2-3B-Instruct
    parameters:
      weight: 1
      density: 1
merge_method: model_stock
base_model: alpindale/Llama-3.2-3B
parameters:
  density: 1
  normalize: true
  int8_mask: true
dtype: bfloat16
```
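To run the merged model, a minimal usage sketch with 🤗 Transformers follows. It assumes the weights are published under the repo ID `bunnycore/Llama-3.2-3B-Stock` and that `accelerate` is installed for `device_map="auto"`; substitute a local path if you reproduced the merge yourself.

```python
# Minimal usage sketch. The repo ID and prompt below are assumptions;
# swap in your own path or text as needed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bunnycore/Llama-3.2-3B-Stock"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "Explain the Model Stock merge method in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```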