OpenCrystal-8B-L3
This is a more optimized version of Darkknight535/OpenCrystal-12B-L3 that uses the original 8B models instead of the upscaled 12B models.
This is a merge of pre-trained language models created using mergekit.
Merge Details
Merge Method
This model was merged using the TIES merge method, with unsloth/llama-3-8b-Instruct as the base model.
Models Merged
The following models were included in the merge:
- nothingiisreal/L3-8B-Celeste-V1.2
- Sao10K/L3-8B-Niitama-v1
Configuration
The following YAML configuration was used to produce this model:
models:
  - model: unsloth/llama-3-8b-Instruct
  - model: nothingiisreal/L3-8B-Celeste-V1.2 # Another RP / Storytelling Model
    parameters:
      density: 0.1
      weight: 0.1
  - model: Sao10K/L3-8B-Niitama-v1 # Another RP / Storytelling Model
    parameters:
      density: 0.9
      weight: 0.9
merge_method: ties
base_model: unsloth/llama-3-8b-Instruct # For Base Coherence and prompting
parameters:
  int8_mask: true
  rescale: true
  normalize: false
dtype: bfloat16
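The merge can be reproduced by saving the configuration above to a YAML file and running it with mergekit (for example via its mergekit-yaml command). Once merged, the weights load like any other Llama 3 model. The snippet below is a minimal sketch, assuming the repository id mpasila/OpenCrystal-8B-L3 and enough GPU memory for bfloat16 weights; it uses the Llama 3 Instruct chat template bundled with the tokenizer.

```python
# Minimal sketch: load the merged model with transformers and generate a reply.
# Assumes the repo id mpasila/OpenCrystal-8B-L3 and a CUDA-capable GPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mpasila/OpenCrystal-8B-L3"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a creative storytelling assistant."},
    {"role": "user", "content": "Write the opening paragraph of a mystery set in a lighthouse."},
]

# Build the prompt with the chat template shipped in the tokenizer.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```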