Update README.md
--- a/README.md
+++ b/README.md
@@ -1,3 +1,11 @@
+---
+license: gemma
+library_name: transformers
+pipeline_tag: text-generation
+tags:
+- conversational
+---
+
 # EZO model card
 
 **Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent/verify/huggingface?returnModelRepoId=google/gemma-2-9b-it)
@@ -42,7 +50,7 @@ outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
 print(tokenizer.decode(outputs[0]))
 ```
 
-Template
+### Template
 ```
 <bos><start_of_turn>user
 Write a hello world program<end_of_turn>
@@ -52,9 +60,10 @@ XXXXXX<end_of_turn><eos>
 
 ### Model Data
 Information about the data used for model training and how it was processed.
+
 #### Training Dataset
-We extracted high-quality data from Japanese Wikipedia and FineWeb to create instruction data.
-We extracted only high-quality data from the Japanese Wiki data and FineWeb to create instruction
+We extracted high-quality data from Japanese Wikipedia and FineWeb to create instruction data. Our training approach yields performance improvements across a range of languages and domains, making the model suitable for global use despite its focus on Japanese data.
+We extracted only high-quality data from the Japanese Wiki data and FineWeb to create instruction data. While this model is specialized for Japanese, the approach can be applied to use cases anywhere in the world.
 https://huggingface.co/datasets/legacy-datasets/wikipedia
 https://huggingface.co/datasets/HuggingFaceFW/fineweb
 
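For reference, the prompt template shown in this diff is what `transformers` renders for Gemma 2 style models via `apply_chat_template`. A minimal sketch, using the base `google/gemma-2-9b-it` tokenizer from the Terms link above; the EZO repo's own tokenizer is assumed to ship the same template:

```python
from transformers import AutoTokenizer

# Base-model tokenizer from the Terms link above (gated: requires accepting
# the Gemma license). The EZO tokenizer is assumed to behave the same way.
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")

messages = [{"role": "user", "content": "Write a hello world program"}]

# add_generation_prompt=True appends the trailing "<start_of_turn>model" turn
# so the model continues as the assistant.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
# <bos><start_of_turn>user
# Write a hello world program<end_of_turn>
# <start_of_turn>model
```

The `add_generation_prompt=True` flag is what produces the open `<start_of_turn>model` turn in the Template section, so generation picks up exactly where the template leaves off.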
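The two dataset links in the Training Dataset section point at public corpora on the Hub. A minimal sketch of streaming and filtering one of them with the `datasets` library; the `sample-10BT` subset and the length-based filter are illustrative assumptions, since the commit does not document the actual extraction criteria:

```python
from datasets import load_dataset

# Stream FineWeb rather than downloading it; "sample-10BT" is a published
# subset chosen here for illustration, not necessarily what was used.
fineweb = load_dataset(
    "HuggingFaceFW/fineweb", name="sample-10BT", split="train", streaming=True
)

# Stand-in quality filter: the README only says "high-quality data" was
# extracted, so this length threshold is a placeholder for the real criteria.
def looks_high_quality(example):
    return len(example["text"]) > 500

for example in fineweb.filter(looks_high_quality).take(3):
    print(example["text"][:120], "...")
```

The same pattern applies to the Japanese Wikipedia dump, though the legacy script-based `legacy-datasets/wikipedia` repo may require an older `datasets` release to build non-preprocessed language configs.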