---
language:
- en
base_model: meta-llama/Meta-Llama-3-8B-Instruct
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- moe
- code
license: other
license_name: llama3
license_link: LICENSE
---


![image/png](https://cdn-uploads.huggingface.co/production/uploads/657eb5b256c9c67605a6e8b5/8JXktjAyUPCWQGnRExiVI.png)

# Aplite-Instruct-4x8B-Llama-3

Aplite-Instruct-4x8B-Llama-3 is an experimental MoE (Mixture of Experts) model based on the Llama-3 architecture, built with Mergekit.
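Because every expert in the merge derives from Llama-3-8B-Instruct, the merged model expects the standard Llama-3 chat format. A minimal sketch of assembling a single-turn prompt by hand (the special-token strings are the standard Llama-3 ones; in practice, the tokenizer's `apply_chat_template` builds this for you, and the function name here is just illustrative):

```python
def build_llama3_prompt(system: str, user: str) -> str:
    """Assemble a single-turn Llama-3 Instruct chat prompt by hand."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        # The prompt ends with an open assistant header so the model
        # generates the assistant turn next.
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt("You are a helpful assistant.", "Explain MoE routing.")
print(prompt)
```

Generation should stop on `<|eot_id|>`, which Llama-3 Instruct emits at the end of each turn.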

## Disclaimer

This model is a research experiment and may generate incorrect or harmful content. The model's outputs should not be taken as factual or representative of the views of the model's creator or any other individual.

The model's creator is not responsible for any harm or damage caused by the model's outputs.

## Merge Details

```yaml
base_model: Meta-Llama-3-8B-Instruct
experts:
  - source_model: Meta-Llama-3-8B-Instruct
    positive_prompts:
    - "explain"
    - "chat"
    - "assistant"
  - source_model: Llama3-8B-OpenHermes-DPO
    positive_prompts:
    - "python"
    - "math"
    - "solve"
    - "code"
  - source_model: Llama-3-SLERP-8B
    positive_prompts:
    - "chat"
    - "assistant"
    - "AI"
  - source_model: hf-llama3-8b-orpo-v0.0
    positive_prompts:
    - "think"
    - "chat"
    - "code"
    - "roleplay"
gate_mode: hidden
dtype: float16
```
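A merge like this can typically be reproduced with Mergekit's MoE entry point. A sketch, assuming Mergekit is installed and the YAML above has been saved to a file (`aplite.yaml` and `./aplite-4x8b` are hypothetical names):

```shell
# Install mergekit, which provides the mergekit-moe script
pip install mergekit

# Build the 4x8B MoE from the config; the positional arguments are
# the config file and the output directory for the merged weights
mergekit-moe aplite.yaml ./aplite-4x8b
```

With `gate_mode: hidden`, Mergekit initializes each expert's router weights from hidden-state representations of the listed positive prompts, so the prompts in the config steer which expert a token is routed to.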

## Join our Discord

If you'd like to discuss potential collaborations or applications, feel free to reach out to me on Discord: [insert Discord link here]