L3-Deluxe-Scrambled-Eggs-On-Toast-8B
L3-Deluxe-Scrambled-Eggs-On-Toast-8B is a role-play model merge of 36 models, built in 23 merging steps.
The goal is to create a model that is both creative and smart by using weight gradients. Each model has its own section of the gradient in which it carries a larger weight to promote intelligence, while the rest of the models in that section keep a small weight to promote creativity.
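As a purely illustrative sketch (not part of the merge pipeline), this is the weight pattern used in the five-model groups later in this card; the helper name is made up, but the 0.33/0.0825 values come straight from the configs below:

```python
# Illustrative only: the gradient weight pattern used in the five-model merge
# groups below (0.33 in a model's own slice of the gradient, 0.0825 elsewhere).
def gradient_weights(num_models: int = 5, high: float = 0.33, low: float = 0.0825):
    """One weight vector per model; the dominant slot walks down the list."""
    return [
        [high if slot == idx else low for slot in range(num_models)]
        for idx in range(num_models)
    ]

for row in gradient_weights():
    print(row)
# [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
# [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
# ...
```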
The following models were used as inspiration:
- grimjim/kunoichi-lemon-royale-v3-32K-7B
- invisietch/EtherealRainbow-v0.3-8B
- PJMixers/LLaMa-3-CursedStock-v2.0-8B
Instruct Format
Llama 3
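For reference, the Llama 3 Instruct prompt layout is sketched below as a plain Python string (the `{system_prompt}`/`{user_message}` placeholders are just illustrative); frontends such as SillyTavern build this automatically when the Llama 3 instruct/context presets are selected.

```python
# Llama 3 Instruct prompt layout. The special tokens are part of the Llama 3
# tokenizer; most frontends assemble this template for you.
prompt = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
    "{system_prompt}<|eot_id|>"
    "<|start_header_id|>user<|end_header_id|>\n\n"
    "{user_message}<|eot_id|>"
    "<|start_header_id|>assistant<|end_header_id|>\n\n"
)
```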
Settings/Presets
Instruct/Context
Virt-io's SillyTavern Presets are recommended.
Sampler Settings
Here are the current recommended settings for more creativity:
- Top K: 60
- Min P: 0.035
- Rep Pen: 1.05
- Rep Pen Range: 2048
- Pres Pen: 0.15
- Smoothing Factor: 0.25
- Dyna Temp:
  - Min Temp: 0.75
  - Max Temp: 1.5
  - Expo: 0.85
Here are the current recommended settings for more adherence. Created by Dunjeon!
temperature: 0.7
top_p: 0.75
top_k: 30
min_p: 0.02
rep_pen: 1.1
rep_pen_range: 2048
presence_pen: 0.03
Smooth Sampling:
  smoothing_factor: 0.25
DRY:
  dry_allowed_length: 2
  dry_multiplier: 0.8
  dry_base: 1.75
  dry_penalty_last_n: 4096
DynaTemp:
  dynatemp: false # found it better without, but you can flip it on and see how it goes
  min_temp: 0.7
  max_temp: 0.9
  dynatemp_exponent: 0.85
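If your backend accepts sampler values over an API, either preset can be sent along with a request. Below is a rough sketch assuming a KoboldCpp-style `/api/v1/generate` endpoint; the field names mirror the keys listed above, but backends differ, so treat the exact names and the URL as assumptions and check your backend's documentation.

```python
# Rough sketch: sending the "adherence" preset to a locally running backend.
# The endpoint URL and field names are assumptions (KoboldCpp-style); adjust
# them to whatever your backend actually exposes.
import requests

payload = {
    "prompt": "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\nHi!<|eot_id|>"
              "<|start_header_id|>assistant<|end_header_id|>\n\n",
    "max_length": 512,
    "temperature": 0.7,
    "top_p": 0.75,
    "top_k": 30,
    "min_p": 0.02,
    "rep_pen": 1.1,
    "rep_pen_range": 2048,
    "presence_penalty": 0.03,
    "smoothing_factor": 0.25,
    "dry_allowed_length": 2,
    "dry_multiplier": 0.8,
    "dry_base": 1.75,
    "dry_penalty_last_n": 4096,
}

response = requests.post("http://localhost:5001/api/v1/generate", json=payload)
print(response.json())
```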
Quants
Weighted quants by:
Static quants by:
Secret Sauce
Models Used
L3-Deluxe-Scrambled-Eggs-On-Toast-8B is a merge of the following models using LazyMergekit (a sketch of how to run such a config with mergekit follows the list):
- Sao10K/L3-8B-Stheno-v3.2
- ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B
- Nitral-AI/Hathor_Stable-v0.2-L3-8B
- NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
- Hastagaras/Jamet-8B-L3-MK.V-Blackroot
- openlynn/Llama-3-Soliloquy-8B-v2
- NousResearch/Meta-Llama-3-8B-Instruct
- turboderp/llama3-turbcat-instruct-8b
- VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct
- TIGER-Lab/MAmmoTH2-8B-Plus
- jondurbin/bagel-8b-v1.0
- abacusai/Llama-3-Smaug-8B
- failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
- AwanLLM/Awanllm-Llama-3-8B-Cumulus-v1.0
- lodrick-the-lafted/Limon-8B
- vicgalle/Configurable-Llama-3-8B-v0.3
- Undi95/Llama3-Unholy-8B-OAS
- Undi95/Unholy-8B-DPO-OAS
- WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0
- migtissera/Tess-2.0-Llama-3-8B
- defog/llama-3-sqlcoder-8b
- HPAI-BSC/Llama3-Aloe-8B-Alpha
- maldv/llama-3-fantasy-writer-8b
- lodrick-the-lafted/Olethros-8B
- Magpie-Align/Llama-3-8B-ShareGPT-112K
- Magpie-Align/Llama-3-8B-WildChat
- Magpie-Align/Llama-3-8B-Tulu-330K
- Magpie-Align/Llama-3-8B-OpenHermes-243K
- Magpie-Align/Llama-3-8B-WizardLM-196K
- Magpie-Align/Llama-3-8B-Ultrachat-200K
- refuelai/Llama-3-Refueled
- Danielbrdz/Barcenas-Llama3-8b-ORPO
- migtissera/Llama-3-8B-Synthia-v3.5
- chujiezheng/Llama-3-Instruct-8B-SimPO-ExPO
- chujiezheng/LLaMA3-iterative-DPO-final-ExPO
- chargoddard/prometheus-2-llama-3-8b
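The configs in the next section are plain mergekit YAML, so each intermediate step can in principle be reproduced by saving a config to disk and running it through mergekit's CLI. This is a sketch under the assumption that mergekit is installed; the config filename and output directory are placeholders, not files shipped with this repo.

```python
# Sketch: running one of the YAML configs below through mergekit.
# Assumes `pip install mergekit`; paths are placeholders.
import subprocess

subprocess.run(
    [
        "mergekit-yaml",
        "eggs-and-bread-rp-pt1.yaml",   # one of the configs listed below, saved locally
        "./Eggs-and-Bread-RP-pt.1",     # output directory for the merged weights
        "--copy-tokenizer",
        "--lazy-unpickle",
    ],
    check=True,
)
```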
YAML Configs Used
The following YAML configs were used to make this model.
Eggs-and-Bread-RP-pt.1

models:
  - model: Sao10K/L3-8B-Stheno-v3.2
  - model: ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B
    parameters:
      density: 0.5
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
  - model: Nitral-AI/Hathor_Stable-v0.2-L3-8B
    parameters:
      density: 0.5
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: Hastagaras/Jamet-8B-L3-MK.V-Blackroot
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: openlynn/Llama-3-Soliloquy-8B-v2
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
merge_method: dare_ties
base_model: Sao10K/L3-8B-Stheno-v3.2
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16

Eggs-and-Bread-RP-pt.2

models:
  - model: Sao10K/L3-8B-Stheno-v3.2
  - model: ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
  - model: Nitral-AI/Hathor_Stable-v0.2-L3-8B
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: Hastagaras/Jamet-8B-L3-MK.V-Blackroot
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: openlynn/Llama-3-Soliloquy-8B-v2
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
merge_method: breadcrumbs_ties
base_model: Sao10K/L3-8B-Stheno-v3.2
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16

Eggs-and-Bread-RP

models:
  - model: Casual-Autopsy/Eggs-and-Bread-RP-pt.1
  - model: Casual-Autopsy/Eggs-and-Bread-RP-pt.2
merge_method: slerp
base_model: Casual-Autopsy/Eggs-and-Bread-RP-pt.1
parameters:
  t:
    - filter: self_attn
      value: [0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5]
    - filter: mlp
      value: [0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5]
    - value: 0.5
dtype: bfloat16
Eggs-and-Bread-IQ-pt.1

models:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
  - model: turboderp/llama3-turbcat-instruct-8b
    parameters:
      density: 0.5
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
  - model: VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct
    parameters:
      density: 0.5
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: TIGER-Lab/MAmmoTH2-8B-Plus
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: jondurbin/bagel-8b-v1.0
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: abacusai/Llama-3-Smaug-8B
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
merge_method: dare_ties
base_model: NousResearch/Meta-Llama-3-8B-Instruct
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16

Eggs-and-Bread-IQ-pt.2

models:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
  - model: turboderp/llama3-turbcat-instruct-8b
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
  - model: VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: TIGER-Lab/MAmmoTH2-8B-Plus
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: jondurbin/bagel-8b-v1.0
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: abacusai/Llama-3-Smaug-8B
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
merge_method: breadcrumbs_ties
base_model: NousResearch/Meta-Llama-3-8B-Instruct
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16

Eggs-and-Bread-IQ

models:
  - model: Casual-Autopsy/Eggs-and-Bread-IQ-pt.1
  - model: Casual-Autopsy/Eggs-and-Bread-IQ-pt.2
merge_method: slerp
base_model: Casual-Autopsy/Eggs-and-Bread-IQ-pt.1
parameters:
  t:
    - filter: self_attn
      value: [0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5]
    - filter: mlp
      value: [0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5]
    - value: 0.5
dtype: bfloat16
Eggs-and-Bread-Uncen-pt.1

models:
  - model: failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
  - model: AwanLLM/Awanllm-Llama-3-8B-Cumulus-v1.0
    parameters:
      density: 0.5
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
  - model: lodrick-the-lafted/Limon-8B
    parameters:
      density: 0.5
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: vicgalle/Configurable-Llama-3-8B-v0.3
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: Undi95/Llama3-Unholy-8B-OAS
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: Undi95/Unholy-8B-DPO-OAS
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
merge_method: dare_ties
base_model: failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16

Eggs-and-Bread-Uncen-pt.2

models:
  - model: failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
  - model: AwanLLM/Awanllm-Llama-3-8B-Cumulus-v1.0
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
  - model: lodrick-the-lafted/Limon-8B
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: vicgalle/Configurable-Llama-3-8B-v0.3
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: Undi95/Llama3-Unholy-8B-OAS
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: Undi95/Unholy-8B-DPO-OAS
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
merge_method: breadcrumbs_ties
base_model: failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16

Eggs-and-Bread-Uncen

models:
  - model: Casual-Autopsy/Eggs-and-Bread-Uncen-pt.1
  - model: Casual-Autopsy/Eggs-and-Bread-Uncen-pt.2
merge_method: slerp
base_model: Casual-Autopsy/Eggs-and-Bread-Uncen-pt.1
parameters:
  t:
    - filter: self_attn
      value: [0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5]
    - filter: mlp
      value: [0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5]
    - value: 0.5
dtype: bfloat16
Scrambled-Eggs-On-Toast-1

models:
  - model: Casual-Autopsy/Eggs-and-Bread-RP
  - model: Casual-Autopsy/Eggs-and-Bread-Uncen
merge_method: slerp
base_model: Casual-Autopsy/Eggs-and-Bread-RP
parameters:
  t:
    - value: [0.1, 0.15, 0.2, 0.4, 0.6, 0.4, 0.2, 0.15, 0.1]
dtype: bfloat16

L3-Scrambled-Eggs-On-Toast-8B

models:
  - model: Casual-Autopsy/Scrambled-Eggs-On-Toast-1
  - model: Casual-Autopsy/Eggs-and-Bread-IQ
merge_method: slerp
base_model: Casual-Autopsy/Scrambled-Eggs-On-Toast-1
parameters:
  t:
    - value: [0.7, 0.5, 0.3, 0.25, 0.2, 0.25, 0.3, 0.5, 0.7]
dtype: bfloat16
Eggs-and-Bread-Misc1-pt.1

models:
  - model: WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0
  - model: migtissera/Tess-2.0-Llama-3-8B
    parameters:
      density: 0.5
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
  - model: defog/llama-3-sqlcoder-8b
    parameters:
      density: 0.5
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: HPAI-BSC/Llama3-Aloe-8B-Alpha
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: maldv/llama-3-fantasy-writer-8b
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: lodrick-the-lafted/Olethros-8B
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
merge_method: dare_ties
base_model: WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16

Eggs-and-Bread-Misc1-pt.2

models:
  - model: WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0
  - model: migtissera/Tess-2.0-Llama-3-8B
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
  - model: defog/llama-3-sqlcoder-8b
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: HPAI-BSC/Llama3-Aloe-8B-Alpha
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: maldv/llama-3-fantasy-writer-8b
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: lodrick-the-lafted/Olethros-8B
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
merge_method: breadcrumbs_ties
base_model: WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16

Eggs-and-Bread-Misc1

models:
  - model: Casual-Autopsy/Eggs-and-Bread-Misc1-pt.1
  - model: Casual-Autopsy/Eggs-and-Bread-Misc1-pt.2
merge_method: slerp
base_model: Casual-Autopsy/Eggs-and-Bread-Misc1-pt.1
parameters:
  t:
    - filter: self_attn
      value: [0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5]
    - filter: mlp
      value: [0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5]
    - value: 0.5
dtype: bfloat16
Eggs-and-Bread-FFT-pt.1

models:
  - model: Magpie-Align/Llama-3-8B-ShareGPT-112K
  - model: Magpie-Align/Llama-3-8B-WildChat
    parameters:
      density: 0.5
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
  - model: Magpie-Align/Llama-3-8B-Tulu-330K
    parameters:
      density: 0.5
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: Magpie-Align/Llama-3-8B-OpenHermes-243K
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: Magpie-Align/Llama-3-8B-WizardLM-196K
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: Magpie-Align/Llama-3-8B-Ultrachat-200K
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
merge_method: dare_ties
base_model: Magpie-Align/Llama-3-8B-ShareGPT-112K
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16

Eggs-and-Bread-FFT-pt.2

models:
  - model: Magpie-Align/Llama-3-8B-ShareGPT-112K
  - model: Magpie-Align/Llama-3-8B-WildChat
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
  - model: Magpie-Align/Llama-3-8B-Tulu-330K
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: Magpie-Align/Llama-3-8B-OpenHermes-243K
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: Magpie-Align/Llama-3-8B-WizardLM-196K
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: Magpie-Align/Llama-3-8B-Ultrachat-200K
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
merge_method: breadcrumbs_ties
base_model: Magpie-Align/Llama-3-8B-ShareGPT-112K
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16

Eggs-and-Bread-FFT

models:
  - model: Casual-Autopsy/Eggs-and-Bread-FFT-pt.1
  - model: Casual-Autopsy/Eggs-and-Bread-FFT-pt.2
merge_method: slerp
base_model: Casual-Autopsy/Eggs-and-Bread-FFT-pt.1
parameters:
  t:
    - filter: self_attn
      value: [0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5]
    - filter: mlp
      value: [0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5]
    - value: 0.5
dtype: bfloat16
Eggs-and-Bread-Misc2-pt.1

models:
  - model: refuelai/Llama-3-Refueled
  - model: Danielbrdz/Barcenas-Llama3-8b-ORPO
    parameters:
      density: 0.5
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
  - model: migtissera/Llama-3-8B-Synthia-v3.5
    parameters:
      density: 0.5
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: chujiezheng/Llama-3-Instruct-8B-SimPO-ExPO
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: chujiezheng/LLaMA3-iterative-DPO-final-ExPO
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: chargoddard/prometheus-2-llama-3-8b
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
merge_method: dare_ties
base_model: refuelai/Llama-3-Refueled
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16

Eggs-and-Bread-Misc2-pt.2

models:
  - model: refuelai/Llama-3-Refueled
  - model: Danielbrdz/Barcenas-Llama3-8b-ORPO
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
  - model: migtissera/Llama-3-8B-Synthia-v3.5
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: chujiezheng/Llama-3-Instruct-8B-SimPO-ExPO
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: chujiezheng/LLaMA3-iterative-DPO-final-ExPO
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: chargoddard/prometheus-2-llama-3-8b
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
merge_method: breadcrumbs_ties
base_model: refuelai/Llama-3-Refueled
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16

Eggs-and-Bread-Misc2

models:
  - model: Casual-Autopsy/Eggs-and-Bread-Misc2-pt.1
  - model: Casual-Autopsy/Eggs-and-Bread-Misc2-pt.2
merge_method: slerp
base_model: Casual-Autopsy/Eggs-and-Bread-Misc2-pt.1
parameters:
  t:
    - filter: self_attn
      value: [0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5]
    - filter: mlp
      value: [0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5]
    - value: 0.5
dtype: bfloat16
Scrambled-Eggs-On-Toast-2

models:
  - model: Casual-Autopsy/Eggs-and-Bread-Misc1
  - model: Casual-Autopsy/Eggs-and-Bread-Misc2
merge_method: slerp
base_model: Casual-Autopsy/Eggs-and-Bread-Misc1
parameters:
  t:
    - value: [0.1, 0.15, 0.2, 0.4, 0.6, 0.4, 0.2, 0.15, 0.1]
dtype: bfloat16

Scrambled-Eggs-On-Toast-3

models:
  - model: Casual-Autopsy/Scrambled-Eggs-On-Toast-2
  - model: Casual-Autopsy/Eggs-and-Bread-FFT
merge_method: slerp
base_model: Casual-Autopsy/Scrambled-Eggs-On-Toast-2
parameters:
  t:
    - value: [0.7, 0.5, 0.3, 0.25, 0.2, 0.25, 0.3, 0.5, 0.7]
dtype: bfloat16

L3-Deluxe-Scrambled-Eggs-On-Toast-8B

models:
  - model: Casual-Autopsy/L3-Scrambled-Eggs-On-Toast-8B
  - model: Casual-Autopsy/Scrambled-Eggs-On-Toast-3
merge_method: slerp
base_model: Casual-Autopsy/L3-Scrambled-Eggs-On-Toast-8B
parameters:
  t:
    - value: [0.2, 0.25, 0.3, 0.4, 0.3, 0.25, 0.2, 0.25, 0.3, 0.4, 0.3, 0.25, 0.2]
dtype: bfloat16