
This model was converted to GGUF format from v000000/Psyonic-Rose-20B-Higher-Quality using llama.cpp. Refer to the original model card for more details on the model.

# Psyonic-Rose 20B Q4_K_M GGUF

Speculative recreation of jebcarter's Psyonic-Rose-20B (Llama 2).


## merge

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the linear merge method.
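
For illustration, a linear merge amounts to a weighted average of corresponding parameter tensors from the source models. The sketch below is a minimal, hypothetical recreation of that idea in PyTorch; it is not how mergekit is actually invoked (mergekit is driven by the YAML configuration shown below), and the function name, `state_dict` inputs, and normalization step are assumptions for the example.

```python
import torch

def linear_merge(state_dicts, weights, normalize=True):
    """Weighted average of matching tensors from several models.

    Conceptual sketch only; mergekit's real implementation also
    handles dtypes, tokenizers, and sharded checkpoints.
    """
    if normalize:
        total = sum(weights)
        weights = [w / total for w in weights]
    merged = {}
    for name in state_dicts[0]:
        merged[name] = sum(w * sd[name].float() for w, sd in zip(weights, state_dicts))
    return merged

# Hypothetical usage with the weights from the configuration below:
# merged = linear_merge([psyonic_sd, rose_sd], weights=[1.0, 0.05])
```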

### Models Merged

The following models were included in the merge:

- DavidAU/Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32
- tavtav/Rose-20B

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: DavidAU/Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32
    parameters:
      weight: 1.0
  - model: tavtav/Rose-20B(fp16)
    parameters:
      weight: 0.05
merge_method: linear
dtype: float32
```
Credits:

- jebcarter
- DavidAU
- tavtav
- NeverSleep
- CalderaAI

Prompt format: Alpaca instruct.
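
As a usage sketch (not part of the original card), the Q4_K_M quant can be loaded with llama-cpp-python and prompted in Alpaca style. The file name, context size, and sampling settings below are assumptions, not values from the model card.

```python
from llama_cpp import Llama

# Hypothetical local path to the downloaded quant.
llm = Llama(
    model_path="psyonic-rose-20b-higher-quality-q4_k_m.gguf",
    n_ctx=4096,
)

# Standard Alpaca instruct template.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a short scene set in a rose garden.\n\n"
    "### Response:\n"
)

out = llm(prompt, max_tokens=256, temperature=0.8, stop=["### Instruction:"])
print(out["choices"][0]["text"])
```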
