# Quantum-Citrus-9B
This merge is another attempt at producing an intelligent, refined, and unaligned model.
Based on my tests so far, it has accomplished those goals, and I am continuing to experiment with it.
It includes previous merges of Starling, Cerebrum, LemonadeRP, and InfinityRP, and deep down it has a base of Layla v0.1, as I was not happy with the results from using v0.2.
The model is intended for fictional storytelling and roleplaying, and may not be suitable for all audiences.
## Merge Details
This is a merge of pre-trained language models created using mergekit.
### Merge Method
This model was merged using the passthrough merge method, which stacks layer slices from the source models rather than averaging their weights; the extra layers are why two 7B sources produce a 9B result.
### Models Merged
The following models were included in the merge:
- ABX-AI/Starfinite-Laymospice-v2-7B
- ABX-AI/Cerebral-Infinity-7B
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: ABX-AI/Cerebral-Infinity-7B
        layer_range: [0, 20]
  - sources:
      - model: ABX-AI/Starfinite-Laymospice-v2-7B
        layer_range: [12, 32]
merge_method: passthrough
dtype: float16
```
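As a rough sanity check on the "9B" size, the parameter count of the stacked model can be estimated from the config above: each source contributes a 20-layer slice, giving 40 decoder layers in total. The sketch below assumes standard Mistral-7B-architecture dimensions (hidden size 4096, 32 attention heads with 8 KV heads, intermediate size 14336, 32k vocabulary); these dimensions are an assumption, since the card does not list them.

```python
# Approximate parameter count for a passthrough "frankenmerge" of two
# Mistral-7B-architecture models (dimensions assumed, not stated in the card).
hidden = 4096          # model (embedding) dimension
inter = 14336          # MLP intermediate size
vocab = 32000          # vocabulary size
head_dim = 128         # per-head dimension
n_heads, n_kv = 32, 8  # grouped-query attention: 32 query heads, 8 KV heads

# Attention projections: Q and O are hidden x hidden; K and V are smaller
# because of grouped-query attention.
attn = 2 * hidden * (n_heads * head_dim) + 2 * hidden * (n_kv * head_dim)
# SwiGLU MLP: gate and up (hidden -> inter) plus down (inter -> hidden).
mlp = 3 * hidden * inter
per_layer = attn + mlp                  # layer norms are negligible

embeddings = 2 * vocab * hidden         # input embeddings + LM head

total_32 = 32 * per_layer + embeddings  # a single 32-layer 7B source
total_40 = 40 * per_layer + embeddings  # this merge: 20 + 20 stacked layers

print(f"32 layers: {total_32 / 1e9:.2f}B params")  # -> 32 layers: 7.24B params
print(f"40 layers: {total_40 / 1e9:.2f}B params")  # -> 40 layers: 8.99B params
```

Under these assumed dimensions, the 40-layer stack lands at roughly 8.99B parameters, consistent with the 9B name, versus about 7.24B for a 32-layer source.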
## Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 66.74 |
| AI2 Reasoning Challenge (25-Shot) | 65.19 |
| HellaSwag (10-Shot) | 84.75 |
| MMLU (5-Shot) | 64.58 |
| TruthfulQA (0-shot) | 55.96 |
| Winogrande (5-shot) | 79.40 |
| GSM8k (5-shot) | 50.57 |