Text Generation
Transformers
PyTorch
Safetensors
Japanese
English
llama
Eval Results
text-generation-inference
tianyuz committed on
Commit 0a294b6
1 Parent(s): 96d1690

Update README.md

Files changed (1)
  1. README.md +2 -17
README.md CHANGED
```diff
@@ -51,23 +51,8 @@ The model is the instruction-tuned version of [`rinna/youri-7b`](https://hugging
 
 # Benchmarking
 
-Evaluation experiments suggest that rinna's `youri-7b` series outperforms other open-source Japanese LLMs on Japanese tasks according to our runs.
-
-| Model | Model type | 4-task score | 6-task score | 8-task score |
-| :-- | :-- | :-- | :-- | :-- |
-| rinna/youri-7b-instruction | SFT | 83.88 | 80.93 | 63.63 |
-| **rinna/youri-7b-chat** | **SFT** | **78.29** | **78.47** | **62.18** |
-| matsuo-lab/weblab-10b-instruction-sft | SFT | 78.75 | 75.05 | 59.11 |
-| rinna/youri-7b | pre-trained | 73.32 | 74.58 | 58.87 |
-| stabilityai/japanese-stablelm-instruct-alpha-7b | SFT | 70.10 | 71.32 | 54.71 |
-| elyza/ELYZA-japanese-Llama-2-7b | pre-trained | 71.72 | 69.28 | 53.17 |
-| elyza/ELYZA-japanese-Llama-2-7b-instruct | SFT | 70.57 | 68.12 | 53.14 |
-| stabilityai/japanese-stablelm-base-alpha-7b | pre-trained | 61.03 | 65.83 | 51.05 |
-| matsuo-lab/weblab-10b | pre-trained | 66.33 | 65.58 | 50.74 |
-| meta/llama2-7b | pre-trained | 56.33 | 54.80 | 42.97 |
-| rinna/japanese-gpt-neox-3.6b | pre-trained | 47.20 | 54.68 | 41.80 |
-| rinna/bilingual-gpt-neox-4b | pre-trained | 46.60 | 52.04 | 40.03 |
+Please refer to [rinna's LM benchmark page](https://rinnakk.github.io/research/benchmarks/lm/index.html).
+
 ---
 
 # How to use the model
```
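
The hunk's trailing context points at the card's "How to use the model" section, which this commit leaves untouched. For orientation only, here is a minimal sketch of loading `rinna/youri-7b-chat` (the bolded row in the removed table) through the standard Hugging Face `transformers` API; the dtype, generation settings, and prompt format below are illustrative assumptions, not the card's verbatim example.

```python
# Minimal sketch (assumptions noted inline), not the model card's own snippet.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rinna/youri-7b-chat"  # the bolded row in the removed benchmark table

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # assumption: half precision so a 7B model fits on one GPU
    device_map="auto",          # requires the `accelerate` package
)

# Illustrative chat-style prompt; the card defines the actual prompt template.
# (Japanese: "User: What is the capital of Japan?")
prompt = "ユーザー: 日本の首都はどこですか?\nシステム: "
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=128,
        do_sample=True,
        temperature=0.7,
        pad_token_id=tokenizer.eos_token_id,  # avoid a pad-token warning on Llama-based models
    )

# Decode only the newly generated tokens, dropping the prompt.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```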