---
base_model:
- Trelis/Llama-3.2-1B-Instruct-MATH-synthetic
- prithivMLmods/Bellatrix-Tiny-1B-R1
- unsloth/Llama-3.2-1B-Instruct
- CarrotAI/Llama-3.2-Rabbit-Ko-1B-Instruct
- huihui-ai/MicroThinker-1B-Preview
- passing2961/Thanos-1B
- prithivMLmods/Llama-Express.1-Math
library_name: transformers
tags:
- mergekit
- merge
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [unsloth/Llama-3.2-1B-Instruct](https://huggingface.co./unsloth/Llama-3.2-1B-Instruct) as the base.

### Models Merged

The following models were included in the merge:
* [Trelis/Llama-3.2-1B-Instruct-MATH-synthetic](https://huggingface.co./Trelis/Llama-3.2-1B-Instruct-MATH-synthetic)
* [prithivMLmods/Bellatrix-Tiny-1B-R1](https://huggingface.co./prithivMLmods/Bellatrix-Tiny-1B-R1)
* [CarrotAI/Llama-3.2-Rabbit-Ko-1B-Instruct](https://huggingface.co./CarrotAI/Llama-3.2-Rabbit-Ko-1B-Instruct)
* [huihui-ai/MicroThinker-1B-Preview](https://huggingface.co./huihui-ai/MicroThinker-1B-Preview)
* [passing2961/Thanos-1B](https://huggingface.co./passing2961/Thanos-1B)
* [prithivMLmods/Llama-Express.1-Math](https://huggingface.co./prithivMLmods/Llama-Express.1-Math)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: unsloth/Llama-3.2-1B-Instruct
  - model: Trelis/Llama-3.2-1B-Instruct-MATH-synthetic
  - model: prithivMLmods/Llama-Express.1-Math
  - model: passing2961/Thanos-1B
  - model: CarrotAI/Llama-3.2-Rabbit-Ko-1B-Instruct
  - model: huihui-ai/MicroThinker-1B-Preview
  - model: prithivMLmods/Bellatrix-Tiny-1B-R1
base_model: unsloth/Llama-3.2-1B-Instruct
merge_method: model_stock
dtype: bfloat16
```
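
## Usage

Below is a minimal usage sketch. The repository id in the snippet is a placeholder, since this card does not state where the merged weights are published; otherwise the merged model loads like any other Llama-3.2-1B-Instruct checkpoint with `transformers`:

```python
# Minimal usage sketch. The repository id below is a placeholder; replace it
# with the actual Hugging Face repo id (or local path) of this merged model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/llama-3.2-1b-model-stock-merge"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# All merged models are Llama-3.2-1B-Instruct derivatives, so the standard
# Llama 3.2 chat template should apply.
messages = [{"role": "user", "content": "What is 12 * 7 + 5?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

To reproduce the merge itself, the YAML configuration above can be saved to a file and passed to mergekit (for example via its `mergekit-yaml` command-line entry point).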