license: other
license_name: qwen
license_link: https://huggingface.co./Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model:
- Qwen/Qwen2.5-72B-Instruct
model-index:
- name: Qwen2.5-95B-Instruct
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 84.31
name: strict accuracy
source:
url: >-
https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard?query=ssmits/Qwen2.5-95B-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 58.53
name: normalized accuracy
source:
url: >-
https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard?query=ssmits/Qwen2.5-95B-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 6.04
name: exact match
source:
url: >-
https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard?query=ssmits/Qwen2.5-95B-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 15.21
name: acc_norm
source:
url: >-
https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard?query=ssmits/Qwen2.5-95B-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 13.61
name: acc_norm
source:
url: >-
https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard?query=ssmits/Qwen2.5-95B-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 46.85
name: accuracy
source:
url: >-
https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard?query=ssmits/Qwen2.5-95B-Instruct
name: Open LLM Leaderboard
tags:
- chat
Qwen2.5-95B-Instruct
Qwen2.5-95B-Instruct is a Qwen/Qwen2.5-72B-Instruct self-merge made with MergeKit.
The layer ranges chosen for this merge were inspired by a rough estimate of the layer similarity analysis of ssmits/Falcon2-5.5B-multilingual. Layer similarity analysis involves examining the outputs of different layers in a neural network to determine how similar or different they are. This technique can help identify which layers contribute most significantly to the model's performance. In the context of the Falcon-11B model, layer similarity analysis across multiple languages revealed that the first half of the layers were more important for maintaining performance. Additionally, this analysis can be used to more rigidly slice and add extra layers for optimal Next Token Prediction, allowing for possibly a model architecture that's more creative and powerful.
Special thanks to Eric Hartford for both inspiring and evaluating the original model, to Charles Goddard for creating MergeKit, and to Mathieu Labonne for creating the Meta-Llama-3-120B-Instruct model that served as the main inspiration for this merge.
π Applications
This model is probably good for creative writing tasks. It uses the Qwen chat template with a default context window of 128K.
The model could be quite creative and maybe even better than the 72B model at some tasks.
β‘ Quantized models
To be quantized.
- GGUF: [Link to GGUF model]
- EXL2: [Link to EXL2 model]
- mlx: [Link to mlx model]
π Evaluation
This model has yet to be thoroughly evaluated. It is expected to excel in creative writing and more but may have limitations in other tasks. Use it with caution and don't expect it to outperform state-of-the-art models outside of specific creative use cases.
Once the model is created and tested, this section will be updated with:
- Links to evaluation threads on social media platforms
- Examples of the model's performance in creative writing tasks
- Comparisons with other large language models in various applications
- Community feedback and use cases
We encourage users to share their experiences and evaluations to help build a comprehensive understanding of the model's capabilities and limitations.
𧩠Configuration
slices:
- sources:
- layer_range: [0, 10]
model: Qwen/Qwen2.5-72B-Instruct
- sources:
- layer_range: [5, 15]
model: Qwen/Qwen2.5-72B-Instruct
- sources:
- layer_range: [10, 20]
model: Qwen/Qwen2.5-72B-Instruct
- sources:
- layer_range: [15, 25]
model: Qwen/Qwen2.5-72B-Instruct
- sources:
- layer_range: [20, 30]
model: Qwen/Qwen2.5-72B-Instruct
- sources:
- layer_range: [25, 80]
model: Qwen/Qwen2.5-72B-Instruct
dtype: bfloat16
merge_method: passthrough
π» Usage
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "ssmits/Qwen2.5-95B-Instruct"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
π Evaluation
Initial benchmarks show interesting performance characteristics compared to the 72B model:
Strengths
The 95B model shows notable improvements in:
- Mathematical Reasoning
- Up to 5.83x improvement in algebra tasks
- 3.33x improvement in pre-algebra
- Consistent gains across geometry, number theory, and probability tasks
- Overall stronger performance in complex mathematical reasoning
- Spatial & Object Understanding
- 11% improvement in object placement tasks
- 7% better at tabular data interpretation
- Enhanced performance in logical deduction with multiple objects
- Complex Language Tasks
- 4% improvement in disambiguation tasks
- 2% better at movie recommendations
- Slight improvements in hyperbaton (complex word order) tasks
- Creative & Analytical Reasoning
- 10% improvement in murder mystery solving
- Better performance in tasks requiring creative problem-solving
Areas for Consideration
While the model shows improvements in specific areas, users should note that the 72B model still performs better in many general language and reasoning tasks. The 95B version appears to excel particularly in mathematical and spatial reasoning while maintaining comparable performance in other areas.
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
Metric | Value |
---|---|
Avg. | 37.43 |
IFEval (0-Shot) | 84.31 |
BBH (3-Shot) | 58.53 |
MATH Lvl 5 (4-Shot) | 6.04 |
GPQA (0-shot) | 15.21 |
MuSR (0-shot) | 13.61 |
MMLU-PRO (5-shot) | 46.85 |
Key | 72b Result | 95b Result | Difference | Which is Higher | Multiplier |
---|---|---|---|---|---|
leaderboard_musr.acc_norm,none | 0.419 | 0.427 | 0.008 | 95b | 1.02 |
leaderboard_bbh_sports_understanding.acc_norm,none | 0.892 | 0.876 | -0.016 | 72b | 0.98 |
leaderboard_bbh_logical_deduction_three_objects.acc_norm,none | 0.94 | 0.928 | -0.012 | 72b | 0.99 |
leaderboard_math_geometry_hard.exact_match,none | 0 | 0.008 | 0.008 | 95b | 0.00 |
leaderboard_gpqa.acc_norm,none | 0.375 | 0.364 | -0.011 | 72b | 0.97 |
leaderboard_math_hard.exact_match,none | 0.012 | 0.06 | 0.048 | 95b | 5.00 |
leaderboard.exact_match,none | 0.012 | 0.06 | 0.048 | 95b | 5.00 |
leaderboard.prompt_level_loose_acc,none | 0.861 | 0.839 | -0.022 | 72b | 0.97 |
leaderboard.prompt_level_strict_acc,none | 0.839 | 0.813 | -0.026 | 72b | 0.97 |
leaderboard.inst_level_loose_acc,none | 0.904 | 0.891 | -0.013 | 72b | 0.99 |
leaderboard.acc_norm,none | 0.641 | 0.622 | -0.020 | 72b | 0.97 |
leaderboard.inst_level_strict_acc,none | 0.888 | 0.873 | -0.016 | 72b | 0.98 |
leaderboard.acc,none | 0.563 | 0.522 | -0.041 | 72b | 0.93 |
leaderboard_bbh_causal_judgement.acc_norm,none | 0.668 | 0.663 | -0.005 | 72b | 0.99 |
leaderboard_bbh_salient_translation_error_detection.acc_norm,none | 0.668 | 0.588 | -0.080 | 72b | 0.88 |
leaderboard_gpqa_extended.acc_norm,none | 0.372 | 0.364 | -0.007 | 72b | 0.98 |
leaderboard_math_prealgebra_hard.exact_match,none | 0.047 | 0.155 | 0.109 | 95b | 3.33 |
leaderboard_math_algebra_hard.exact_match,none | 0.02 | 0.114 | 0.094 | 95b | 5.83 |
leaderboard_bbh_boolean_expressions.acc_norm,none | 0.936 | 0.92 | -0.016 | 72b | 0.98 |
leaderboard_math_num_theory_hard.exact_match,none | 0 | 0.058 | 0.058 | 95b | 0.00 |
leaderboard_bbh_movie_recommendation.acc_norm,none | 0.768 | 0.78 | 0.012 | 95b | 1.02 |
leaderboard_math_counting_and_prob_hard.exact_match,none | 0 | 0.024 | 0.024 | 95b | 0.00 |
leaderboard_math_intermediate_algebra_hard.exact_match,none | 0 | 0.004 | 0.004 | 95b | 0.00 |
leaderboard_ifeval.prompt_level_strict_acc,none | 0.839 | 0.813 | -0.026 | 72b | 0.97 |
leaderboard_ifeval.inst_level_strict_acc,none | 0.888 | 0.873 | -0.016 | 72b | 0.98 |
leaderboard_ifeval.inst_level_loose_acc,none | 0.904 | 0.891 | -0.013 | 72b | 0.99 |
leaderboard_ifeval.prompt_level_loose_acc,none | 0.861 | 0.839 | -0.022 | 72b | 0.97 |
leaderboard_bbh_snarks.acc_norm,none | 0.927 | 0.904 | -0.022 | 72b | 0.98 |
leaderboard_bbh_web_of_lies.acc_norm,none | 0.676 | 0.616 | -0.060 | 72b | 0.91 |
leaderboard_bbh_penguins_in_a_table.acc_norm,none | 0.719 | 0.767 | 0.048 | 95b | 1.07 |
leaderboard_bbh_hyperbaton.acc_norm,none | 0.892 | 0.9 | 0.008 | 95b | 1.01 |
leaderboard_bbh_object_counting.acc_norm,none | 0.612 | 0.544 | -0.068 | 72b | 0.89 |
leaderboard_musr_object_placements.acc_norm,none | 0.258 | 0.285 | 0.027 | 95b | 1.11 |
leaderboard_bbh_logical_deduction_five_objects.acc_norm,none | 0.704 | 0.592 | -0.112 | 72b | 0.84 |
leaderboard_musr_team_allocation.acc_norm,none | 0.456 | 0.396 | -0.060 | 72b | 0.87 |
leaderboard_bbh_navigate.acc_norm,none | 0.832 | 0.788 | -0.044 | 72b | 0.95 |
leaderboard_bbh_tracking_shuffled_objects_seven_objects.acc_norm,none | 0.34 | 0.304 | -0.036 | 72b | 0.89 |
leaderboard_bbh_formal_fallacies.acc_norm,none | 0.776 | 0.756 | -0.020 | 72b | 0.97 |
all.leaderboard_musr.acc_norm,none | 0.419 | 0.427 | 0.008 | 95b | 1.02 |
all.leaderboard_bbh_sports_understanding.acc_norm,none | 0.892 | 0.876 | -0.016 | 72b | 0.98 |
all.leaderboard_bbh_logical_deduction_three_objects.acc_norm,none | 0.94 | 0.928 | -0.012 | 72b | 0.99 |
all.leaderboard_math_geometry_hard.exact_match,none | 0 | 0.008 | 0.008 | 95b | 0.00 |
all.leaderboard_gpqa.acc_norm,none | 0.375 | 0.364 | -0.011 | 72b | 0.97 |
all.leaderboard_math_hard.exact_match,none | 0.012 | 0.06 | 0.048 | 95b | 5.00 |
all.leaderboard.exact_match,none | 0.012 | 0.06 | 0.048 | 95b | 5.00 |
all.leaderboard.prompt_level_loose_acc,none | 0.861 | 0.839 | -0.022 | 72b | 0.97 |
all.leaderboard.prompt_level_strict_acc,none | 0.839 | 0.813 | -0.026 | 72b | 0.97 |
all.leaderboard.inst_level_loose_acc,none | 0.904 | 0.891 | -0.013 | 72b | 0.99 |
all.leaderboard.acc_norm,none | 0.641 | 0.622 | -0.020 | 72b | 0.97 |
all.leaderboard.inst_level_strict_acc,none | 0.888 | 0.873 | -0.016 | 72b | 0.98 |
all.leaderboard.acc,none | 0.563 | 0.522 | -0.041 | 72b | 0.93 |
all.leaderboard_bbh_causal_judgement.acc_norm,none | 0.668 | 0.663 | -0.005 | 72b | 0.99 |
all.leaderboard_bbh_salient_translation_error_detection.acc_norm,none | 0.668 | 0.588 | -0.080 | 72b | 0.88 |
all.leaderboard_gpqa_extended.acc_norm,none | 0.372 | 0.364 | -0.007 | 72b | 0.98 |
all.leaderboard_math_prealgebra_hard.exact_match,none | 0.047 | 0.155 | 0.109 | 95b | 3.33 |
all.leaderboard_math_algebra_hard.exact_match,none | 0.02 | 0.114 | 0.094 | 95b | 5.83 |
all.leaderboard_bbh_boolean_expressions.acc_norm,none | 0.936 | 0.92 | -0.016 | 72b | 0.98 |
all.leaderboard_math_num_theory_hard.exact_match,none | 0 | 0.058 | 0.058 | 95b | 0.00 |
all.leaderboard_bbh_movie_recommendation.acc_norm,none | 0.768 | 0.78 | 0.012 | 95b | 1.02 |
all.leaderboard_math_counting_and_prob_hard.exact_match,none | 0 | 0.024 | 0.024 | 95b | 0.00 |
all.leaderboard_math_intermediate_algebra_hard.exact_match,none | 0 | 0.004 | 0.004 | 95b | 0.00 |
all.leaderboard_ifeval.prompt_level_strict_acc,none | 0.839 | 0.813 | -0.026 | 72b | 0.97 |
all.leaderboard_ifeval.inst_level_strict_acc,none | 0.888 | 0.873 | -0.016 | 72b | 0.98 |
all.leaderboard_ifeval.inst_level_loose_acc,none | 0.904 | 0.891 | -0.013 | 72b | 0.99 |
all.leaderboard_ifeval.prompt_level_loose_acc,none | 0.861 | 0.839 | -0.022 | 72b | 0.97 |
all.leaderboard_bbh_snarks.acc_norm,none | 0.927 | 0.904 | -0.022 | 72b | 0.98 |
all.leaderboard_bbh_web_of_lies.acc_norm,none | 0.676 | 0.616 | -0.060 | 72b | 0.91 |
all.leaderboard_bbh_penguins_in_a_table.acc_norm,none | 0.719 | 0.767 | 0.048 | 95b | 1.07 |
all.leaderboard_bbh_hyperbaton.acc_norm,none | 0.892 | 0.9 | 0.008 | 95b | 1.01 |
all.leaderboard_bbh_object_counting.acc_norm,none | 0.612 | 0.544 | -0.068 | 72b | 0.89 |
all.leaderboard_musr_object_placements.acc_norm,none | 0.258 | 0.285 | 0.027 | 95b | 1.11 |
all.leaderboard_bbh_logical_deduction_five_objects.acc_norm,none | 0.704 | 0.592 | -0.112 | 72b | 0.84 |
all.leaderboard_musr_team_allocation.acc_norm,none | 0.456 | 0.396 | -0.060 | 72b | 0.87 |
all.leaderboard_bbh_navigate.acc_norm,none | 0.832 | 0.788 | -0.044 | 72b | 0.95 |
all.leaderboard_bbh_tracking_shuffled_objects_seven_objects.acc_norm,none | 0.34 | 0.304 | -0.036 | 72b | 0.89 |
all.leaderboard_bbh_formal_fallacies.acc_norm,none | 0.776 | 0.756 | -0.020 | 72b | 0.97 |
all.leaderboard_gpqa_main.acc_norm,none | 0.375 | 0.355 | -0.020 | 72b | 0.95 |
all.leaderboard_bbh_disambiguation_qa.acc_norm,none | 0.744 | 0.772 | 0.028 | 95b | 1.04 |
all.leaderboard_bbh_tracking_shuffled_objects_five_objects.acc_norm,none | 0.32 | 0.284 | -0.036 | 72b | 0.89 |
all.leaderboard_bbh_date_understanding.acc_norm,none | 0.784 | 0.764 | -0.020 | 72b | 0.97 |
all.leaderboard_bbh_geometric_shapes.acc_norm,none | 0.464 | 0.412 | -0.052 | 72b | 0.89 |
all.leaderboard_bbh_reasoning_about_colored_objects.acc_norm,none | 0.864 | 0.84 | -0.024 | 72b | 0.97 |
all.leaderboard_musr_murder_mysteries.acc_norm,none | 0.548 | 0.604 | 0.056 | 95b | 1.10 |
all.leaderboard_bbh_ruin_names.acc_norm,none | 0.888 | 0.86 | -0.028 | 72b | 0.97 |
all.leaderboard_bbh_logical_deduction_seven_objects.acc_norm,none | 0.644 | 0.664 | 0.020 | 95b | 1.03 |
all.leaderboard_bbh.acc_norm,none | 0.726 | 0.701 | -0.025 | 72b | 0.97 |
all.leaderboard_bbh_temporal_sequences.acc_norm,none | 0.996 | 0.968 | -0.028 | 72b | 0.97 |
all.leaderboard_mmlu_pro.acc,none | 0.563 | 0.522 | -0.041 | 72b | 0.93 |
leaderboard_gpqa_main.acc_norm,none | 0.375 | 0.355 | -0.020 | 72b | 0.95 |
leaderboard_bbh_disambiguation_qa.acc_norm,none | 0.744 | 0.772 | 0.028 | 95b | 1.04 |
leaderboard_bbh_tracking_shuffled_objects_five_objects.acc_norm,none | 0.32 | 0.284 | -0.036 | 72b | 0.89 |
leaderboard_bbh_date_understanding.acc_norm,none | 0.784 | 0.764 | -0.020 | 72b | 0.97 |
leaderboard_bbh_geometric_shapes.acc_norm,none | 0.464 | 0.412 | -0.052 | 72b | 0.89 |
leaderboard_bbh_reasoning_about_colored_objects.acc_norm,none | 0.864 | 0.84 | -0.024 | 72b | 0.97 |
leaderboard_musr_murder_mysteries.acc_norm,none | 0.548 | 0.604 | 0.056 | 95b | 1.10 |
leaderboard_bbh_ruin_names.acc_norm,none | 0.888 | 0.86 | -0.028 | 72b | 0.97 |
leaderboard_bbh_logical_deduction_seven_objects.acc_norm,none | 0.644 | 0.664 | 0.020 | 95b | 1.03 |
leaderboard_bbh.acc_norm,none | 0.726 | 0.701 | -0.025 | 72b | 0.97 |
leaderboard_bbh_temporal_sequences.acc_norm,none | 0.996 | 0.968 | -0.028 | 72b | 0.97 |
leaderboard_mmlu_pro.acc,none | 0.563 | 0.522 | -0.041 | 72b | 0.93 |