---
base_model:
- princeton-nlp/Llama-3-Instruct-8B-SimPO-v0.2
- Sao10K/L3-8B-Stheno-v3.2
- Hastagaras/Jamet-8B-L3-MK.V-Blackroot
- Nitral-AI/Hathor_Tahsin-L3-8B-v0.85
- Sao10K/L3-8B-Niitama-v1
library_name: transformers
tags:
- mergekit
- merge
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [princeton-nlp/Llama-3-Instruct-8B-SimPO-v0.2](https://huggingface.co./princeton-nlp/Llama-3-Instruct-8B-SimPO-v0.2) as the base model. Model Stock approximates the center of the fine-tuned models' weight distribution by interpolating between their average and the base model's weights, with the interpolation ratio derived from the geometry of the fine-tuned checkpoints.

### Models Merged

The following models were included in the merge:
* [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co./Sao10K/L3-8B-Stheno-v3.2)
* [Hastagaras/Jamet-8B-L3-MK.V-Blackroot](https://huggingface.co./Hastagaras/Jamet-8B-L3-MK.V-Blackroot)
* [Nitral-AI/Hathor_Tahsin-L3-8B-v0.85](https://huggingface.co./Nitral-AI/Hathor_Tahsin-L3-8B-v0.85)
* [Sao10K/L3-8B-Niitama-v1](https://huggingface.co./Sao10K/L3-8B-Niitama-v1)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Sao10K/L3-8B-Stheno-v3.2
  - model: Sao10K/L3-8B-Niitama-v1
  - model: Hastagaras/Jamet-8B-L3-MK.V-Blackroot
  - model: Nitral-AI/Hathor_Tahsin-L3-8B-v0.85
  - model: princeton-nlp/Llama-3-Instruct-8B-SimPO-v0.2
merge_method: model_stock
base_model: princeton-nlp/Llama-3-Instruct-8B-SimPO-v0.2
dtype: bfloat16
```
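
## Usage

The merge can be reproduced by saving the configuration above to a file and running mergekit's `mergekit-yaml` entry point on it (e.g. `mergekit-yaml config.yml ./merged-model`). Below is a minimal sketch of loading and prompting the result with transformers; the path `./merged-model` is a placeholder for wherever the merged weights actually live, and the sampling settings are illustrative, not recommendations.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./merged-model"  # placeholder: the mergekit output directory

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",
)

# All of the merged models share the Llama 3 Instruct chat format, so the
# tokenizer's chat template can build the prompt.
messages = [{"role": "user", "content": "Write a haiku about model merging."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    input_ids, max_new_tokens=128, do_sample=True, temperature=0.7
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```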