---
base_model:
- Nohobby/L3.3-Prikol-70B-v0.2
- SicariusSicariiStuff/Negative_LLAMA_70B
library_name: transformers
tags:
- mergekit
- merge
---
# Prikol03

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the NuSLERP merge method.

### Models Merged

The following models were included in the merge:
* [Nohobby/L3.3-Prikol-70B-v0.2](https://huggingface.co./Nohobby/L3.3-Prikol-70B-v0.2)
* [SicariusSicariiStuff/Negative_LLAMA_70B](https://huggingface.co./SicariusSicariiStuff/Negative_LLAMB_70B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
dtype: bfloat16
tokenizer_source: base
merge_method: nuslerp
parameters:
  nuslerp_row_wise: true
models:
  - model: SicariusSicariiStuff/Negative_LLAMA_70B
    parameters:
      weight:
        - filter: v_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - filter: o_proj
          value: [1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1]
        - filter: up_proj
          value: [1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1]
        - filter: gate_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - filter: down_proj
          value: [0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0]
        - value: [0.2, 0.35, 0.4, 0.35, 0.2]
  - model: Nohobby/L3.3-Prikol-70B-v0.2
    parameters:
      weight:
        - filter: v_proj
          value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
        - filter: o_proj
          value: [0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0]
        - filter: up_proj
          value: [0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0]
        - filter: gate_proj
          value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
        - filter: down_proj
          value: [1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1]
        - value: [0.8, 0.65, 0.6, 0.65, 0.8]
```
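
NuSLERP interpolates corresponding weights along the sphere rather than along a straight line, which preserves the magnitude structure of the parameters better than plain averaging. As an illustrative sketch only (plain Python, not mergekit's actual implementation), the classic SLERP formula that this family of methods builds on looks like:

```python
import math

def slerp(t, a, b, eps=1e-8):
    """Spherical linear interpolation between two weight vectors.

    Minimal sketch of the interpolation underlying SLERP-style merges;
    mergekit's nuslerp applies a normalized variant of this per row or
    per column of each weight matrix (selected by nuslerp_row_wise).
    """
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    # Cosine of the angle between the two vectors, clamped for safety.
    dot = sum(x * y for x, y in zip(a, b)) / (norm_a * norm_b)
    dot = max(-1.0, min(1.0, dot))
    theta = math.acos(dot)
    if theta < eps:
        # Nearly parallel vectors: fall back to linear interpolation.
        return [(1 - t) * x + t * y for x, y in zip(a, b)]
    s = math.sin(theta)
    w_a = math.sin((1 - t) * theta) / s
    w_b = math.sin(t * theta) / s
    return [w_a * x + w_b * y for x, y in zip(a, b)]

# At t=0 the first model's weights are returned unchanged;
# the midpoint of two orthogonal unit vectors stays on the unit sphere.
print(slerp(0.0, [1.0, 0.0], [0.0, 1.0]))
print(slerp(0.5, [1.0, 0.0], [0.0, 1.0]))
```

In the configuration above, the per-filter weight lists assign each model an interpolation weight that varies across layer groups, so e.g. `v_proj` layers in the middle of the stack are drawn mostly from Negative_LLAMA_70B while the outer layers come from Prikol-70B-v0.2.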