# L3.3-70B-Lycosa-v0.2
This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
## Merge Details
Changes from v0.1:
- Dropped llama-3.3-70b-instruct as a pivot to further reduce positive bias. No noticeable impact on reasoning.
- Added DeepSeek-R1-Distill-Llama-70B as a target model for improved reasoning.
An RP merge with a focus on:
- model intelligence
- removing positive bias
- creativity
This model was merged using the SCE merge method, with deepseek-ai/DeepSeek-R1-Distill-Llama-70B as the base.
The included DeepSeek-R1-Distill-Llama-70B chat template is recommended:

```
<|begin▁of▁sentence|>system prompt here<|User|>user 1st message here<|Assistant|>assistant 1st response here<|end▁of▁sentence|><|User|>user 2nd message here<|Assistant|>
```
The Llama 3 chat template is no longer recommended due to the increased DeepSeek-R1 influence in this v0.2 merge.
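Rather than assembling the prompt string by hand, it can usually be built with transformers' `apply_chat_template` (a minimal sketch, assuming the tokenizer in this repo ships the template shown above):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("divinetaco/L3.3-70B-Lycosa-v0.2")

messages = [
    {"role": "system", "content": "system prompt here"},
    {"role": "user", "content": "user 1st message here"},
]

# tokenize=False returns the formatted string; add_generation_prompt=True
# appends the trailing <|Assistant|> tag so the model completes the next
# assistant turn.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```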
### Models Merged
The following models were included in the merge:
- deepseek-ai/DeepSeek-R1-Distill-Llama-70B
- Sao10K/70B-L3.3-Cirrus-x1
- TheDrummer/Nautilus-70B-v0.1
- Doctor-Shotgun/L3.3-70B-Magnum-v4-SE
- SicariusSicariiStuff/Negative_LLAMA_70B
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  # Pivot model
  - model: SicariusSicariiStuff/Negative_LLAMA_70B
  # Target models
  - model: Sao10K/70B-L3.3-Cirrus-x1
  - model: TheDrummer/Nautilus-70B-v0.1
  - model: Doctor-Shotgun/L3.3-70B-Magnum-v4-SE
  - model: deepseek-ai/DeepSeek-R1-Distill-Llama-70B
merge_method: sce
base_model: deepseek-ai/DeepSeek-R1-Distill-Llama-70B
parameters:
  select_topk: 1.0
dtype: bfloat16
```
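With mergekit installed, the merge should be reproducible by pointing the standard CLI at this config, e.g. `mergekit-yaml lycosa-v0.2.yaml ./L3.3-70B-Lycosa-v0.2 --cuda` (the config filename and output path here are illustrative). Note that `select_topk: 1.0` means SCE's variance-based selection step keeps all elements rather than filtering to a smaller top-k fraction.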