Merge

This is a merge of pre-trained language models created using mergekit.

Merge Method

This model was merged using the passthrough merge method, with nbeerbower/Llama3.1-Allades-8B + mpasila/Llama-3.1-Literotica-LoRA-8B (the Allades base model with the Literotica LoRA adapter applied) as the base.
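
The `+` notation means the LoRA adapter is applied to the base model's weights before the passthrough merge. A minimal sketch of that step, assuming the Hugging Face transformers and peft libraries (illustrative only, not the exact script used to build this model):

```python
# Illustrative sketch: approximate the "base+LoRA" step with peft.
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model in bfloat16, matching the dtype in the config below.
base = AutoModelForCausalLM.from_pretrained(
    "nbeerbower/Llama3.1-Allades-8B", torch_dtype=torch.bfloat16
)

# Apply the LoRA adapter on top of the base weights.
model = PeftModel.from_pretrained(base, "mpasila/Llama-3.1-Literotica-LoRA-8B")

# Fold the adapter into the base weights, yielding a standalone model.
merged = model.merge_and_unload()
```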

Models Merged

No additional models were merged in; the passthrough merge uses only the base model with its LoRA adapter, as listed above.

Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: nbeerbower/Llama3.1-Allades-8B+mpasila/Llama-3.1-Literotica-LoRA-8B
dtype: bfloat16
merge_method: passthrough
models:
  - model: nbeerbower/Llama3.1-Allades-8B+mpasila/Llama-3.1-Literotica-LoRA-8B
tokenizer_source: unsloth/Meta-Llama-3.1-8B
```
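
The merged model can be loaded like any other Llama 3.1 checkpoint. A standard transformers loading sketch (the prompt and generation settings are placeholders):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Triangle104/Llama3.1-Allades-Lit-8b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Write the opening line of a short story."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```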

Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 11.88 |
| IFEval (0-shot)     | 24.61 |
| BBH (3-shot)        | 17.45 |
| MATH Lvl 5 (4-shot) |  0.23 |
| GPQA (0-shot)       |  4.59 |
| MuSR (0-shot)       |  5.22 |
| MMLU-PRO (5-shot)   | 19.16 |