# aqua-qwen-0.1-110B
This model was created by merging two models with mergekit, using the linear DARE (`dare_linear`) merge method. The following models were included in the merge:

- cognitivecomputations/dolphin-2.9.1-qwen-110b
- Qwen/Qwen1.5-110B-Chat
## Configuration
The following YAML configuration was used to produce this model:
```yaml
name: aqua-qwen-0.1-110B
base_model:
  model:
    path: cognitivecomputations/dolphin-2.9.1-qwen-110b
dtype: bfloat16
merge_method: dare_linear
parameters:
  normalize: 1.0
slices:
- sources:
  - model: cognitivecomputations/dolphin-2.9.1-qwen-110b
    layer_range: [0, 80]
    parameters:
      weight: 0.6
  - model: Qwen/Qwen1.5-110B-Chat
    layer_range: [0, 80]
    parameters:
      weight: 0.4
```
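To illustrate what `dare_linear` with `normalize: 1.0` does conceptually, here is a minimal NumPy sketch (not mergekit's actual implementation): each fine-tuned model's delta from the base is randomly sparsified and rescaled (Drop And REscale), then the deltas are combined linearly with normalized weights. The `density` parameter and function names below are assumptions for illustration.

```python
import numpy as np

def dare_delta(delta, density, rng):
    # Drop And REscale: keep each delta entry with probability `density`,
    # rescale the survivors by 1/density so the expected value is preserved.
    mask = rng.random(delta.shape) < density
    return np.where(mask, delta / density, 0.0)

def dare_linear_merge(base, models, weights, density=0.9, seed=0):
    # base: base-model parameters; models: fine-tuned parameter tensors.
    # With normalize enabled, weights are divided by their sum.
    rng = np.random.default_rng(seed)
    total = sum(weights)
    merged = base.copy()
    for params, w in zip(models, weights):
        delta = params - base          # task vector relative to the base
        merged += (w / total) * dare_delta(delta, density, rng)
    return merged
```

With `density=1.0` no entries are dropped and the result reduces to a plain weighted average of the two models, which matches the 0.6/0.4 split in the configuration above.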
## Usage
It is recommended to use the GGUF version of the model, available here.