---
base_model:
- Etherll/Qwen2.5-7B-della-test
- fblgit/cybertron-v4-qw7B-UNAMGS
- jeffmeloy/Qwen2.5-7B-nerd-uncensored-v1.0
- bunnycore/Qwen-2.1-7b-Persona-lora_model
- bunnycore/Qwen-2.5-7B-Deep-Stock-v2
library_name: transformers
tags:
- mergekit
- merge
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [fblgit/cybertron-v4-qw7B-UNAMGS](https://huggingface.co./fblgit/cybertron-v4-qw7B-UNAMGS) as the base.

### Models Merged

The following models were included in the merge:
* [Etherll/Qwen2.5-7B-della-test](https://huggingface.co./Etherll/Qwen2.5-7B-della-test)
* [jeffmeloy/Qwen2.5-7B-nerd-uncensored-v1.0](https://huggingface.co./jeffmeloy/Qwen2.5-7B-nerd-uncensored-v1.0) + [bunnycore/Qwen-2.1-7b-Persona-lora_model](https://huggingface.co./bunnycore/Qwen-2.1-7b-Persona-lora_model)
* [bunnycore/Qwen-2.5-7B-Deep-Stock-v2](https://huggingface.co./bunnycore/Qwen-2.5-7B-Deep-Stock-v2)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: jeffmeloy/Qwen2.5-7B-nerd-uncensored-v1.0+bunnycore/Qwen-2.1-7b-Persona-lora_model
  - model: fblgit/cybertron-v4-qw7B-UNAMGS
  - model: bunnycore/Qwen-2.5-7B-Deep-Stock-v2
  - model: Etherll/Qwen2.5-7B-della-test
merge_method: model_stock
base_model: fblgit/cybertron-v4-qw7B-UNAMGS
normalize: false
int8_mask: true
dtype: bfloat16
```
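To reproduce the merge, this configuration can be saved as `config.yaml` and passed to the mergekit CLI (e.g. `mergekit-yaml config.yaml ./output-model-directory`).

Below is a minimal sketch of loading the merged model with the standard `transformers` API. The repo id is a placeholder assumption; substitute the id this merge is actually published under.

```python
# Minimal loading sketch for the merged model.
# NOTE: the repo id below is a hypothetical placeholder, not confirmed by this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/your-merged-model"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used in the merge config
    device_map="auto",
)

# Qwen2.5-based models ship a chat template, so prompts can be built with it.
messages = [{"role": "user", "content": "Explain model merging in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```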