---
base_model: mlabonne/Chimera-8B
library_name: transformers
tags:
- 4-bit
- AWQ
- text-generation
- autotrain_compatible
- endpoints_compatible
pipeline_tag: text-generation
inference: false
quantized_by: Suparious
---
# mlabonne/Chimera-8B AWQ

- Model creator: [mlabonne](https://huggingface.co./mlabonne)
- Original model: [Chimera-8B](https://huggingface.co./mlabonne/Chimera-8B)

## Model Summary

Built with the DARE-TIES merge method.

A full list of the source models and the merge recipe is coming soon.

The merge combines the strongest weights from Mistral-based models trained with techniques such as Direct Preference Optimization (DPO) and reinforcement learning.
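
The exact recipe has not been published yet; purely for illustration, a DARE-TIES merge is typically described with a mergekit configuration shaped like the hypothetical sketch below. All model names, densities, and weights here are placeholders, not the actual recipe.

```yaml
# Hypothetical mergekit config illustrating the general shape of a DARE-TIES merge.
# Every model name and parameter value is a placeholder.
models:
  - model: placeholder/base-model
    # base model: contributes no task vector of its own
  - model: placeholder/dpo-finetune
    parameters:
      density: 0.5   # fraction of delta weights kept before rescaling
      weight: 0.5    # contribution of this model's task vector
  - model: placeholder/rl-finetune
    parameters:
      density: 0.5
      weight: 0.3
merge_method: dare_ties
base_model: placeholder/base-model
dtype: bfloat16
```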

I have spent countless hours studying the latest research papers, attending conferences, and networking with experts in the field. I experimented with different algorithms and tactics, fine-tuned hyperparameters and optimizers, and optimized code until I achieved the best possible results.

Thank you, OpenChat 3.5, for showing me the way.

Here is my contribution.

## Prompt Template

Replace `{system}` with your system prompt and `{prompt}` with your instruction.

```
### System:
{system}

### User:
{prompt}

### Assistant:
```
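
For reference, here is a minimal sketch of loading this 4-bit AWQ checkpoint and applying the template with `transformers`. It assumes `transformers>=4.35`, `autoawq`, and `accelerate` are installed; the repo id, prompts, and generation settings are placeholders.

```python
# Minimal usage sketch. Assumes `transformers>=4.35`, `autoawq`, and `accelerate`
# are installed; the repo id below is a placeholder for this AWQ repository.
from transformers import AutoModelForCausalLM, AutoTokenizer

quant_path = "path/to/Chimera-8B-AWQ"  # placeholder: replace with this repo's Hub id

# transformers loads AWQ-quantized weights directly when autoawq is available.
model = AutoModelForCausalLM.from_pretrained(quant_path, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(quant_path)

# Fill in the prompt template shown above.
system = "You are a helpful assistant."
prompt = "Summarize what AWQ 4-bit quantization does."
text = f"### System:\n{system}\n\n### User:\n{prompt}\n\n### Assistant:\n"

# Tokenize, generate, and print only the newly generated tokens.
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```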