rombomergkitresult

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the TIES merge method, with Rombo-Org/Rombo-LLM-V3.0-Qwen-32b as the base model.
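
At a high level, TIES merges models by (1) trimming each model's delta from the base to its largest-magnitude entries, (2) electing a per-parameter sign by weighted majority, and (3) averaging only the deltas that agree with that sign. A toy NumPy sketch of this idea (not mergekit's actual implementation; function name and signature are illustrative):

```python
import numpy as np

def ties_merge(base, task_params, density=1.0, weights=None):
    """Toy TIES merge of flat parameter arrays onto a shared base."""
    deltas = [p - base for p in task_params]
    if weights is None:
        weights = [1.0] * len(deltas)
    # Trim: keep only the top-`density` fraction of each delta by magnitude
    trimmed = []
    for d in deltas:
        k = int(np.ceil(density * d.size))
        thresh = np.sort(np.abs(d).ravel())[-k] if k > 0 else np.inf
        trimmed.append(np.where(np.abs(d) >= thresh, d, 0.0))
    # Elect sign: per-parameter majority by weighted summed magnitude
    sign = np.sign(sum(w * t for w, t in zip(weights, trimmed)))
    # Merge: weighted mean over deltas whose sign matches the elected sign
    num = sum(w * np.where(np.sign(t) == sign, t, 0.0)
              for w, t in zip(weights, trimmed))
    den = sum(w * (np.sign(t) == sign).astype(float)
              for w, t in zip(weights, trimmed))
    merged = np.where(den != 0, num / np.where(den == 0, 1.0, den), 0.0)
    return base + merged
```

With a single merged model and `weight: 1`, `density: 1` (as in the configuration below), the trim and sign-election steps are no-ops, so the result reduces to the task model's parameters applied on top of the base.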

Models Merged

The following models were included in the merge:

  • ./merged-qwen

Configuration

The following YAML configuration was used to produce this model:

models:
  - model: ./merged-qwen
    parameters:
      weight: 1
      density: 1
merge_method: ties
base_model: Rombo-Org/Rombo-LLM-V3.0-Qwen-32b
parameters:
  weight: 1
  density: 1
  normalize: true
  int8_mask: false
dtype: bfloat16
tokenizer_source: ./merged-qwen
# tokenizer:
#   source: union
#   tokens:
#     <think>:
#       source: ./qwen2.5-coder-7b-lora-checkpoints/final
#       force: true
#     </think>:
#       source: ./qwen2.5-coder-7b-lora-checkpoints/final
#       force: true
#     <fim_pad>:
#       source: ./qwen2.5-coder-7b-lora-checkpoints/final
