wkshin89 committed
Commit 1909454 • 1 Parent(s): 231e126

Update README.md

Files changed (1):
  1. README.md +4 -15
README.md CHANGED
@@ -9,7 +9,7 @@ language:
 base_model: beomi/Yi-Ko-6B
 ---
 
-# Yi-Ko-6B-Instruct-v1.1
+# Yi-Ko-6B-Instruct-v1.1_
 
 ## Model Details
 
@@ -20,18 +20,7 @@ base_model: beomi/Yi-Ko-6B
 1. [kyujinpy/KOR-OpenOrca-Platypus-v3](https://huggingface.co/datasets/kyujinpy/KOR-OpenOrca-Platypus-v3) 🙇
 2. [beomi/KoAlpaca-v1.1a](https://huggingface.co/datasets/beomi/KoAlpaca-v1.1a) 🙇
 3. [maywell/ko_wikidata_QA](https://huggingface.co/datasets/maywell/ko_wikidata_QA) 🙇
-4. AIHub data, selected and then converted to the instruction format before use
-
-## Benchmark Results
-
-### AI-Harness Evaluation
-https://github.com/Beomi/ko-lm-evaluation-harness
-
-| Model | kobest_boolq | kobest_copa | kobest_hellaswag | kobest_sentineg | korunsmile | pawsx_ko |
-| --- | --- | --- | --- | --- | --- | --- |
-| | *Zero-shot* ||||||
-| Yi-Ko-6B-Instruct-v1.1 | | | | | | |
-| Yi-Ko-6B | 0.7070 | 0.7696 | 0.5009 | 0.4044 | 0.3828 | 0.5145 |
+4. AIHub data used
 
 ## Instruction Format
 ```python
@@ -47,9 +36,9 @@ https://github.com/Beomi/ko-lm-evaluation-harness
 import torch
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
-tokenizer = AutoTokenizer.from_pretrained("wkshin89/Yi-Ko-6B-Instruct-v1.1")
+tokenizer = AutoTokenizer.from_pretrained("wkshin89/Yi-Ko-6B-Instruct-v1.1_")
 model = AutoModelForCausalLM.from_pretrained(
-    "wkshin89/Yi-Ko-6B-Instruct-v1.1",
+    "wkshin89/Yi-Ko-6B-Instruct-v1.1_",
     device_map="auto",
     torch_dtype=torch.bfloat16,
 )
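
The body of the README's "## Instruction Format" section is elided by the diff hunks, so the model's actual prompt template cannot be recovered from this page. The sketch below shows only a generic Alpaca-style template as a stand-in: `build_prompt` and the `### Instruction:` / `### Response:` markers are assumptions for illustration, not the format this model was tuned on. The resulting string is what would be fed to the tokenizer and `model.generate` from the loading snippet in the diff.

```python
# Hypothetical instruction-format helper (the real template is elided in
# the diff above; this Alpaca-style layout is an assumption for illustration).
def build_prompt(instruction: str) -> str:
    # Wrap a user instruction in a simple instruction/response scaffold.
    return (
        "### Instruction:\n"
        f"{instruction}\n\n"
        "### Response:\n"
    )

prompt = build_prompt("Explain bfloat16 in one sentence.")
print(prompt)
# With a loaded model, this string would then be tokenized and passed to
# model.generate(), as in the loading snippet shown in the diff.
```

Consult the model card's full "Instruction Format" section for the template the checkpoint actually expects; using a mismatched template typically degrades instruction-following quality.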