---
base_model:
- Qwen/QwQ-32B
library_name: transformers
tags:
- mergekit
- merge
---
# QwQ-32B Kumo
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
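Since the card declares `library_name: transformers`, the merged model loads like any other `transformers` causal LM. A minimal sketch, assuming bfloat16 weights (matching the merge `dtype`); `REPO_ID` is a placeholder, not a real repository id:

```python
# Minimal loading sketch; REPO_ID is a placeholder for this repository's
# actual id on the Hugging Face Hub.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

REPO_ID = "<namespace>/QwQ-32B-Kumo"  # placeholder; replace with the real repo id

tokenizer = AutoTokenizer.from_pretrained(REPO_ID)
model = AutoModelForCausalLM.from_pretrained(
    REPO_ID,
    torch_dtype=torch.bfloat16,  # the merge was performed in bfloat16
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain slerp merging in one paragraph."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```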
### Configuration
This is a multi-stage (multi-document) merge: four intermediate merges (`merge_model_1` through `merge_model_4`) are built first, then combined in a final `model_stock` merge. The following YAML configuration was used to produce this model:
```yaml
merge_method: slerp
base_model: Qwen/QwQ-32B
models:
  - model: Qwen/QwQ-32B
  - model: NovaSky-AI/Sky-T1-32B-Flash
parameters:
  t: 0.4
dtype: bfloat16
name: merge_model_1
---
merge_method: breadcrumbs_ties
base_model: Qwen/QwQ-32B
tokenizer_source: Qwen/QwQ-32B
name: merge_model_2
models:
  - model: Qwen/QwQ-32B
    parameters:
      weight: 1.0
  - model: FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview
    parameters:
      weight: 0.75
dtype: bfloat16
---
merge_method: task_arithmetic
base_model: Qwen/Qwen2.5-32B
tokenizer_source: Qwen/QwQ-32B
name: merge_model_3
models:
  - model: rinna/deepseek-r1-distill-qwen2.5-bakeneko-32b
    parameters:
      weight: 1.0
  - model: cyberagent/DeepSeek-R1-Distill-Qwen-32B-Japanese
    parameters:
      weight: 0.9
dtype: bfloat16
---
merge_method: slerp
base_model: Qwen/QwQ-32B
models:
  - model: Qwen/QwQ-32B
  - model: TeamDelta/ABEJA-Qwen2.5-32B-base-jp-v0.1
parameters:
  t: 0.5
tokenizer_source: base
dtype: bfloat16
name: merge_model_4
---
merge_method: model_stock
base_model: Qwen/QwQ-32B
models:
  - model: Qwen/QwQ-32B
  - model: merge_model_1
  - model: merge_model_2
  - model: merge_model_3
  - model: merge_model_4
dtype: bfloat16
pad_to_multiple_of: 512
tokenizer_source: base
name: QwQ-32B-Kumo
```
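To reproduce the merge, each YAML document above has to run in order, with later stages picking up the intermediate merges by their `name:` field. Recent mergekit releases include a `mergekit-multi` entry point for multi-document configs like this (check your installed version); the sketch below does the equivalent by hand with mergekit's documented Python API (`MergeConfiguration`, `run_merge`). The `WORK_DIR` path and the `resolve` helper are illustrative assumptions, not part of mergekit:

```python
# Hypothetical driver for a multi-document mergekit config: run each stage
# in order, writing intermediates to local directories named after their
# `name:` field so that later stages can reference them by name.
import os

import torch
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

WORK_DIR = "./kumo-stages"  # assumption: where intermediate merges are written

def resolve(doc: dict, built: dict) -> dict:
    """Rewrite references to earlier named merges as on-disk paths."""
    if doc.get("base_model") in built:
        doc["base_model"] = built[doc["base_model"]]
    for entry in doc.get("models", []):
        if entry.get("model") in built:
            entry["model"] = built[entry["model"]]
    return doc

built: dict = {}
with open("qwq-32b-kumo.yaml", encoding="utf-8") as fp:  # the YAML above
    for doc in yaml.safe_load_all(fp):
        name = doc.pop("name")  # stage name, e.g. merge_model_1
        out_path = os.path.join(WORK_DIR, name)
        config = MergeConfiguration.model_validate(resolve(doc, built))
        run_merge(
            config,
            out_path=out_path,
            options=MergeOptions(
                cuda=torch.cuda.is_available(),
                copy_tokenizer=True,
            ),
        )
        built[name] = out_path

print(f"Final model written to {built['QwQ-32B-Kumo']}")
```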