|
--- |
|
language: |
|
- en |
|
- ko |
|
datasets: |
|
- DopeorNope/Robustness_Ko_data-v1 |
|
pipeline_tag: text-generation |
|
license: cc-by-nc-sa-4.0 |
|
--- |
|
|
|
# **SOLAR-tail-10.7B-instruct-v1.0** |
|
|
|
## Model Details |
|
|
|
**Model Developers** Kyujin Han (kyujinpy) |
|
|
|
**Method** |
|
Instruction tuning on top of [PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0](https://huggingface.co./PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0) as the base model.
|
|
|
**Datasets** |
|
DopeorNope/Robustness_Ko_data-v1 (private).
|
|
|
**Hyperparameters** |
|
(I will update these details soon!)
|
|
|
# **Model Benchmark** |
|
|
|
## Open leaderboard |
|
- Scores are tracked on the Hugging Face [leaderboard](https://huggingface.co./spaces/HuggingFaceH4/open_llm_leaderboard).
|
|
|
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Ko-CommonGenV2 |
| --- | --- | --- | --- | --- | --- | --- |
| PracticeLLM/SOLAR-tail-10.7B-instruct-v1.0 | NaN | NaN | NaN | NaN | NaN | NaN |
| PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0 | NaN | NaN | NaN | NaN | NaN | NaN |
| jjourney1125/M-SOLAR-10.7B-v1.0 | 55.15 | 49.57 | 60.12 | 54.60 | 49.23 | 62.22 |
| beomi/Yi-Ko-6B | 48.79 | 41.04 | 53.39 | 46.28 | 41.64 | 61.63 |
| mistralai/Mistral-7B-v0.1 | 46.89 | 38.14 | 48.19 | 45.20 | 46.13 | 56.79 |
|
|
|
|
|
# Implementation Code |
|
```python
### SOLAR-tail-10.7B-instruct-v1.0
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "PracticeLLM/SOLAR-tail-10.7B-instruct-v1.0"

# Load the model in half precision, placed automatically across
# the available devices.
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```
|
|
|
--- |