Prikol

I don't even know anymore

I need to be isolated from society

Overview

An RP model with good dialogue flow and some creative input.

I tried to make this thing produce less slop than the previous iteration. It didn't work out too well, but the NSFW parts are a little more elaborate than before. So yeah, it's an improvement.

Prompt format: Llama3, or Llama3 Context paired with ChatML Instruct
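
If you're setting the template manually, the stock Llama 3 instruct layout (which the Llama3 presets in most frontends follow) looks like this; {system prompt} and {user message} are placeholders:

<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{user message}<|eot_id|><|start_header_id|>assistant<|end_header_id|>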

Settings: This kinda works but I'm weird

Quants

Static | Imatrix

Merge Details

dtype: bfloat16
tokenizer_source: base
merge_method: nuslerp
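# nuslerp spherically interpolates (SLERPs) the two models' weights;
# nuslerp_row_wise: true applies the interpolation per row vector rather
# than per flattened tensor. The list-valued weights below are gradients:
# mergekit interpolates them across layer depth, so v_proj, for example,
# leans on Negative_LLAMA_70B in the middle layers and on Prikol v0.2 at
# the ends. The bare `value` entries are the defaults for tensors no
# filter matches. The two models' weights are complementary, summing to
# 1 at every depth.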
parameters:
  nuslerp_row_wise: true
models:
  - model: SicariusSicariiStuff/Negative_LLAMA_70B
    parameters:
      weight:
        - filter: v_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - filter: o_proj
          value: [1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1]
        - filter: up_proj
          value: [1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1]
        - filter: gate_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - filter: down_proj
          value: [0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0]
        - value: [0.2, 0.35, 0.4, 0.35, 0.2]
  - model: Nohobby/L3.3-Prikol-70B-v0.2
    parameters:
      weight:
        - filter: v_proj
          value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
        - filter: o_proj
          value: [0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0]
        - filter: up_proj
          value: [0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0]
        - filter: gate_proj
          value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
        - filter: down_proj
          value: [1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1]
        - value: [0.8, 0.65, 0.6, 0.65, 0.8]
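
To reproduce the merge, save the config above to a file and run it through mergekit (https://github.com/arcee-ai/mergekit); a minimal sketch of the invocation, with a hypothetical config filename, would be:

mergekit-yaml prikol-v0.3.yaml ./L3.3-Prikol-70B-v0.3 --cuda

The --cuda flag runs the tensor math on GPU; omit it to merge on CPU.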