Llama3.1-LexiHermes-SuperStorm
A merge of four high-performing Llama 3.1 8B models using the SCE merging method. This model was created with mergekit.
Models Merged
- Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
- NousResearch/Hermes-3-Llama-3.1-8B
- arcee-ai/Llama-3.1-SuperNova-Lite
- akjindal53244/Llama-3.1-Storm-8B
Base model: mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated
Features
- Solid, consistent style even without additional finetuning
- Abliterated to avoid outright request refusals
- Suitable for roleplaying (possibly due to Hermes component)
- Can replace Llama 3.1 8B Instruct for general tasks (see the usage sketch below)
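Since the model is intended as a drop-in replacement for Llama 3.1 8B Instruct, it can be loaded with the standard Transformers chat pipeline. A minimal sketch, assuming a recent transformers release with chat-template support; the prompt and generation settings are illustrative, not from the model card:

```python
import torch
from transformers import pipeline

# Load the merged model; bfloat16 matches the dtype used in the merge.
generator = pipeline(
    "text-generation",
    model="agentlans/Llama3.1-LexiHermes-SuperStorm",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Chat-style input; the pipeline applies the Llama 3.1 chat template.
messages = [
    {"role": "user", "content": "Explain the difference between RAM and ROM in two sentences."},
]
result = generator(messages, max_new_tokens=128)

# The last message in the returned conversation is the model's reply.
print(result[0]["generated_text"][-1]["content"])
```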
Limitations
- Function calling and non-English languages have not been tested
- May struggle with math and logic, as language models of this size commonly do
- Potential for factual errors, especially in specialized fields
- Not intended for public deployment without additional safeguards
Merge Configuration
The following YAML configuration was used:
```yaml
models:
  - model: Hermes-3-Llama-3.1-8B
  - model: Llama-3.1-8B-Lexi-Uncensored-V2
  - model: Llama-3.1-SuperNova-Lite
  - model: Llama-3.1-Storm-8B
merge_method: sce
base_model: Meta-Llama-3.1-8B-Instruct-abliterated
parameters:
  select_topk: 1.5
dtype: bfloat16
```
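To reproduce the merge, the configuration above can be saved to a file and run through mergekit (`pip install mergekit`). Below is a minimal sketch using mergekit's Python API; the config path and output directory are assumptions, and running the `mergekit-yaml` CLI on the same file is an equivalent alternative:

```python
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Assumes the YAML configuration above is saved as config.yaml.
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Output path is illustrative; GPU is used if available.
run_merge(
    merge_config,
    out_path="./Llama3.1-LexiHermes-SuperStorm",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),
        copy_tokenizer=True,
    ),
)
```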
Open LLM Leaderboard Evaluation Results
Detailed and summarized results are available on the Open LLM Leaderboard.
| Metric | Value (%) |
|---|---:|
| Average | 29.43 |
| IFEval (0-shot) | 78.35 |
| BBH (3-shot) | 32.55 |
| MATH Lvl 5 (4-shot) | 16.16 |
| GPQA (0-shot) | 9.73 |
| MuSR (0-shot) | 8.20 |
| MMLU-PRO (5-shot) | 31.60 |