---
language:
  - en
  - code
license: apache-2.0
tags:
  - merge
  - computer science
datasets:
  - open-phi/programming_books_llama
  - open-phi/textbooks
inference:
  parameters:
    do_sample: true
    temperature: 0.2
    top_p: 0.14
    top_k: 12
    max_new_tokens: 250
    repetition_penalty: 1.15
widget:
  - text: 'To calculate the factorial of n, we can use the following function:'
model-index:
  - name: TinyMistral-248M-v2.5
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: AI2 Reasoning Challenge (25-Shot)
          type: ai2_arc
          config: ARC-Challenge
          split: test
          args:
            num_few_shot: 25
        metrics:
          - type: acc_norm
            value: 24.57
            name: normalized accuracy
        source:
          url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/TinyMistral-248M-v2.5
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: HellaSwag (10-Shot)
          type: hellaswag
          split: validation
          args:
            num_few_shot: 10
        metrics:
          - type: acc_norm
            value: 27.49
            name: normalized accuracy
        source:
          url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/TinyMistral-248M-v2.5
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU (5-Shot)
          type: cais/mmlu
          config: all
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 23.15
            name: accuracy
        source:
          url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/TinyMistral-248M-v2.5
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: TruthfulQA (0-shot)
          type: truthful_qa
          config: multiple_choice
          split: validation
          args:
            num_few_shot: 0
        metrics:
          - type: mc2
            value: 46.72
        source:
          url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/TinyMistral-248M-v2.5
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-shot)
          type: winogrande
          config: winogrande_xl
          split: validation
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 47.83
            name: accuracy
        source:
          url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/TinyMistral-248M-v2.5
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GSM8k (5-shot)
          type: gsm8k
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 0.0
            name: accuracy
        source:
          url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/TinyMistral-248M-v2.5
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: IFEval (0-Shot)
          type: HuggingFaceH4/ifeval
          args:
            num_few_shot: 0
        metrics:
          - type: inst_level_strict_acc and prompt_level_strict_acc
            value: 13.36
            name: strict accuracy
        source:
          url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Locutusque/TinyMistral-248M-v2.5
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: BBH (3-Shot)
          type: BBH
          args:
            num_few_shot: 3
        metrics:
          - type: acc_norm
            value: 3.18
            name: normalized accuracy
        source:
          url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Locutusque/TinyMistral-248M-v2.5
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MATH Lvl 5 (4-Shot)
          type: hendrycks/competition_math
          args:
            num_few_shot: 4
        metrics:
          - type: exact_match
            value: 0.0
            name: exact match
        source:
          url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Locutusque/TinyMistral-248M-v2.5
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GPQA (0-shot)
          type: Idavidrein/gpqa
          args:
            num_few_shot: 0
        metrics:
          - type: acc_norm
            value: 0.11
            name: acc_norm
        source:
          url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Locutusque/TinyMistral-248M-v2.5
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MuSR (0-shot)
          type: TAUR-Lab/MuSR
          args:
            num_few_shot: 0
        metrics:
          - type: acc_norm
            value: 5.07
            name: acc_norm
        source:
          url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Locutusque/TinyMistral-248M-v2.5
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU-PRO (5-shot)
          type: TIGER-Lab/MMLU-Pro
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 1.5
            name: accuracy
        source:
          url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Locutusque/TinyMistral-248M-v2.5
          name: Open LLM Leaderboard
---

# TinyMistral-248M-v2.5

This model was created by merging TinyMistral-248M (V1) and TinyMistral-248M-v2, then further pretraining the merge on synthetic textbooks. In my own evaluation, the resulting model outperforms both of its parents. During training, it reached an average perplexity of 4, nearly 7x lower than V1's and 4x lower than V2's.

You can reproduce the merged model with the following [mergekit](https://github.com/arcee-ai/mergekit) config:

```
base_model: Locutusque/TinyMistral-248M-v2
dtype: float16
merge_method: ties
parameters:
  int8_mask: 1.0
  normalize: 1.0
slices:
  - sources:
      - layer_range: [0, 12]
        model: Locutusque/TinyMistral-248M
        parameters:
          density: [1.0, 0.7, 0.1]
          weight: 1.0
      - layer_range: [0, 12]
        model: Locutusque/TinyMistral-248M-v2
        parameters:
          density: 0.5
          weight: [0.0, 0.3, 0.7, 1.0]
```

The model can also answer basic questions without any fine-tuning.

This model was also created as an attempt to fix a problem with V2: its weights were prone to exploding gradients, which made it difficult to fine-tune. This model is easier to fine-tune.

To get the best out of this model, I recommend downloading it and trying it yourself, as its performance seems to degrade in the inference API; a minimal usage example is included at the end of this card.

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Locutusque__TinyMistral-248M-v2.5)

|             Metric              |Value|
|---------------------------------|----:|
|Avg.                             |28.29|
|AI2 Reasoning Challenge (25-Shot)|24.57|
|HellaSwag (10-Shot)              |27.49|
|MMLU (5-Shot)                    |23.15|
|TruthfulQA (0-shot)              |46.72|
|Winogrande (5-shot)              |47.83|
|GSM8k (5-shot)                   | 0.00|

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Locutusque__TinyMistral-248M-v2.5)

|      Metric       |Value|
|-------------------|----:|
|Avg.               | 3.87|
|IFEval (0-Shot)    |13.36|
|BBH (3-Shot)       | 3.18|
|MATH Lvl 5 (4-Shot)| 0.00|
|GPQA (0-shot)      | 0.11|
|MuSR (0-shot)      | 5.07|
|MMLU-PRO (5-shot)  | 1.50|
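# Usage

For local experimentation, here is a minimal sketch that loads the model with the `transformers` library and reuses the sampling parameters from this card's metadata. It assumes `transformers` and `torch` are installed; the prompt is the same one used by the card's widget, and you can of course swap in your own.

```
# Minimal local-inference sketch; generation settings mirror this card's
# recommended inference parameters (temperature, top_p, top_k, etc.).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Locutusque/TinyMistral-248M-v2.5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "To calculate the factorial of n, we can use the following function:"
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.2,
    top_p=0.14,
    top_k=12,
    max_new_tokens=250,
    repetition_penalty=1.15,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

At 248M parameters the model should be small enough to run comfortably on CPU.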