---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- bunnycore/Best-Mix-Llama-3.1-8B
- USTC-KnowledgeComputingLab/Llama3-KALE-LM-Chem-1.5-8B
- Weyaxi/Einstein-v6.1-Llama3-8B
---

# ZeroXClem/Llama3.1-BestMix-Chem-Einstein-8B

ZeroXClem/Llama3.1-BestMix-Chem-Einstein-8B is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [bunnycore/Best-Mix-Llama-3.1-8B](https://huggingface.co./bunnycore/Best-Mix-Llama-3.1-8B)
* [USTC-KnowledgeComputingLab/Llama3-KALE-LM-Chem-1.5-8B](https://huggingface.co./USTC-KnowledgeComputingLab/Llama3-KALE-LM-Chem-1.5-8B)
* [Weyaxi/Einstein-v6.1-Llama3-8B](https://huggingface.co./Weyaxi/Einstein-v6.1-Llama3-8B)

## 🧩 Configuration

```yaml
models:
  - model: bunnycore/Best-Mix-Llama-3.1-8B
    parameters:
      density: [1, 0.7, 0.5]        # density gradient, favoring overall balance
      weight: 1.0
  - model: USTC-KnowledgeComputingLab/Llama3-KALE-LM-Chem-1.5-8B
    parameters:
      density: 0.6                  # strong contribution from the chemistry model
      weight: [0.3, 0.7, 1.0]       # weight gradient, increasing emphasis on chemistry tasks
  - model: Weyaxi/Einstein-v6.1-Llama3-8B
    parameters:
      density: 0.4                  # focus on conversational adaptability and long-form generation
      weight:
        - filter: mlp
          value: 0.5                # moderate contribution to MLP layers
        - filter: self_attn
          value: 0.7                # higher weight on self-attention layers
        - value: 0.5                # balanced contribution for all remaining parameters
merge_method: ties
base_model: bunnycore/Best-Mix-Llama-3.1-8B
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```
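
## 💻 Usage

A minimal usage sketch, assuming the merged weights are published on the Hugging Face Hub under `ZeroXClem/Llama3.1-BestMix-Chem-Einstein-8B` (the config above can be re-run locally with mergekit's `mergekit-yaml` command) and that `transformers` and `torch` are installed; the prompt is only an illustrative example:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ZeroXClem/Llama3.1-BestMix-Chem-Einstein-8B"  # assumed Hub repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the dtype used for the merge
    device_map="auto",
)

# Build a chat-style prompt using the model's chat template.
messages = [
    {"role": "user", "content": "Explain the difference between SN1 and SN2 reactions."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```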