---
base_model:
  - cognitivecomputations/dolphin-2.9.3-qwen2-1.5b
  - trollek/Qwen2-1.5B-Instruct-Abliterated
  - M4-ai/Hercules-5.0-Qwen2-1.5B
  - Replete-AI/Replete-Coder-Qwen2-1.5b
tags:
  - mergekit
  - merge
license: apache-2.0
language:
  - en
---

# CleverQwen2-1.5B-GGUF

The repo contains GGUF quants for CleverQwen2-1.5B.

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).

The merged model has grown by about 300M parameters and I don't know why; I would like to, though. It works as expected - amazingly well - I just can't see any reason for the Qwen2 models to gain parameters when merged. One plausible (unverified) culprit: Qwen2-1.5B ties its input embeddings to the LM head, and if the merged checkpoint stores them untied, that alone adds roughly 151,936 (vocab) × 1,536 (hidden) ≈ 233M parameters.

## Merge Details

### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with trollek/Qwen2-1.5B-Instruct-Abliterated as the base.

### Models Merged

The following models were included in the merge:

- Replete-AI/Replete-Coder-Qwen2-1.5b
- M4-ai/Hercules-5.0-Qwen2-1.5B
- cognitivecomputations/dolphin-2.9.3-qwen2-1.5b

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Replete-AI/Replete-Coder-Qwen2-1.5b
  - model: M4-ai/Hercules-5.0-Qwen2-1.5B
  - model: cognitivecomputations/dolphin-2.9.3-qwen2-1.5b
merge_method: model_stock
base_model: trollek/Qwen2-1.5B-Instruct-Abliterated
architecture: qwen2
dtype: bfloat16
```
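
For reference, a config like this is normally applied with mergekit's CLI. A minimal sketch, assuming the config is saved as config.yaml and the output directory name is arbitrary:

```bash
pip install mergekit
# Writes the merged model to ./CleverQwen2-1.5B; --cuda runs the merge on GPU
mergekit-yaml config.yaml ./CleverQwen2-1.5B --cuda
```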

## Quants
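
If you would rather download a GGUF file directly instead of pulling through Ollama, huggingface-cli can do it. A minimal sketch, assuming this repo lives at trollek/CleverQwen2-1.5B-GGUF and using a hypothetical filename (check the repo's file list for the real quant names):

```bash
# The filename below is an assumption - check the repo's file listing for exact names
huggingface-cli download trollek/CleverQwen2-1.5B-GGUF \
  CleverQwen2-1.5B.Q6_K.gguf --local-dir .
```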

### Ollama

```bash
ollama pull trollek/cleverqwen2:1.5b-q4_k_s
ollama pull trollek/cleverqwen2:1.5b-q5_k_s
ollama pull trollek/cleverqwen2:1.5b-q6_k
```
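
Once pulled, any of the tags can be run directly, for example:

```bash
ollama run trollek/cleverqwen2:1.5b-q6_k "Explain model merging in two sentences."
```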