---
language:
- en
- ko
datasets:
- DopeorNope/Robustness_Ko_data-v1
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
---

# **SOLAR-tail-10.7B-instruct-v1.0**  

## Model Details

**Model Developers** Kyujin Han (kyujinpy)

**Method**  
Instruction-tuning of [PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0](https://huggingface.co./PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0); a generic training sketch is given below.  

**Datasets**  
DopeorNope/Robustness_Ko_data-v1 (private).  

**Hyperparameters**  
(To be updated.)  
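
Since the exact training recipe has not been published yet, the following is only a minimal, generic instruction-tuning sketch with the Hugging Face `Trainer`. The prompt template, hyperparameter values, output path, and the stand-in dataset are all illustrative assumptions, not the settings actually used (the real dataset is private).

```python
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base = "PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0"
tokenizer = AutoTokenizer.from_pretrained(base)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # LLaMA-style tokenizers often lack a pad token
model = AutoModelForCausalLM.from_pretrained(base)

# Stand-in for DopeorNope/Robustness_Ko_data-v1, which is private.
# The instruction/response template here is hypothetical.
examples = Dataset.from_dict({
    "text": ["### Instruction:\n...\n\n### Response:\n..."]
})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=2048)

tokenized = examples.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="solar-tail-instruct",   # hypothetical output path
        per_device_train_batch_size=1,      # illustrative values only
        gradient_accumulation_steps=16,
        num_train_epochs=3,
        learning_rate=2e-5,
        bf16=True,
    ),
    train_dataset=tokenized,
    # Causal-LM collator: labels are the input ids, no MLM masking.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```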

# **Model Benchmark**  

## Open leaderboard
- Scores follow the [Open LLM Leaderboard](https://huggingface.co./spaces/HuggingFaceH4/open_llm_leaderboard); NaN entries have not been evaluated yet.  

| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Ko-CommonGenV2 |
| --- | --- | --- | --- | --- | --- | --- | 
| PracticeLLM/SOLAR-tail-10.7B-instruct-v1.0 | NaN | NaN | NaN | NaN | NaN | NaN |
| PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0 | NaN | NaN | NaN | NaN | NaN | NaN |
| jjourney1125/M-SOLAR-10.7B-v1.0 | 55.15 | 49.57 | 60.12 | 54.60 | 49.23 | 62.22 |
| beomi/Yi-Ko-6B | 48.79 | 41.04 | 53.39 | 46.28 | 41.64 | 61.63 |
| mistralai/Mistral-7B-v0.1 | 46.89 | 38.14 | 48.19 | 45.20 | 46.13 | 56.79 |

   
# Implementation Code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "PracticeLLM/SOLAR-tail-10.7B-instruct-v1.0"

# Load the weights in half precision and shard them across available devices.
model = AutoModelForCausalLM.from_pretrained(
        repo,
        return_dict=True,
        torch_dtype=torch.float16,
        device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```
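
As a quick smoke test, the loaded model can generate from a plain prompt. The prompt string and sampling settings below are illustrative; they are not an official chat template for this model.

```python
prompt = "한국의 수도는 어디인가요?"  # "What is the capital of Korea?" (hypothetical prompt)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=128,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```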

---