tianyuz committed
Commit 2c3f4b2
Parents: 7ca3df4, 2cca4df

Merge branch 'main' of https://huggingface.co./rinna/japanese-gpt2-xsmall into main

Files changed (1)
  1. README.md +46 -0
README.md ADDED
@@ -0,0 +1,46 @@
+ ---
+ language: ja
+ thumbnail: https://github.com/rinnakk/japanese-gpt2/blob/master/rinna.png
+ tags:
+ - ja
+ - japanese
+ - gpt2
+ - text-generation
+ - lm
+ - nlp
+ license: mit
+ datasets:
+ - cc100
+ - wikipedia
+ ---
+
+ # japanese-gpt2-xsmall
+
+ ![rinna-icon](./rinna.png)
+
+ This repository provides an extra-small-sized Japanese GPT-2 model. The model is provided by [rinna](https://corp.rinna.co.jp/).
+
+ # How to use the model
+
+ *NOTE:* Use `T5Tokenizer` to instantiate the tokenizer.
+
+ ~~~~
+ from transformers import T5Tokenizer, GPT2LMHeadModel
+
+ tokenizer = T5Tokenizer.from_pretrained("rinna/japanese-gpt2-xsmall")
+ tokenizer.do_lower_case = True  # due to a bug in the tokenizer config loading
+
+ model = GPT2LMHeadModel.from_pretrained("rinna/japanese-gpt2-xsmall")
+ ~~~~
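+
+ Once loaded, the model can be used for generation through the standard `transformers` API. The snippet below is a minimal sketch and not part of the original card; the prompt and the sampling parameters are illustrative assumptions.
+
+ ~~~~
+ # Encode an (assumed) Japanese prompt and sample a continuation.
+ input_ids = tokenizer.encode("こんにちは、", return_tensors="pt")
+ output_ids = model.generate(
+     input_ids,
+     max_length=50,
+     do_sample=True,
+     top_p=0.95,
+     pad_token_id=tokenizer.pad_token_id,
+ )
+ print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
+ ~~~~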
+
+ # Model architecture
+ A 6-layer, 512-hidden-size transformer-based language model.
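+
+ These sizes can be read back from the hosted config; the following sketch assumes the standard `GPT2Config` field names (`n_layer`, `n_embd`).
+
+ ~~~~
+ from transformers import AutoConfig
+
+ # Expecting 6 layers and a 512-dimensional hidden size, per the description above.
+ config = AutoConfig.from_pretrained("rinna/japanese-gpt2-xsmall")
+ print(config.n_layer, config.n_embd)
+ ~~~~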
+
+ # Training
+ The model was trained on [Japanese CC-100](http://data.statmt.org/cc-100/ja.txt.xz) and [Japanese Wikipedia](https://dumps.wikimedia.org/jawiki/) to optimize a traditional language modelling objective on 8\*V100 GPUs for around 4 days. It reaches around 28 perplexity on a chosen validation set from CC-100.
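+
+ Perplexity here is the exponential of the average token-level cross-entropy loss. A minimal sketch for reproducing the computation on your own held-out text follows; the sample sentence is an assumption, not the validation set referred to above.
+
+ ~~~~
+ import math
+ import torch
+ from transformers import T5Tokenizer, GPT2LMHeadModel
+
+ tokenizer = T5Tokenizer.from_pretrained("rinna/japanese-gpt2-xsmall")
+ tokenizer.do_lower_case = True
+ model = GPT2LMHeadModel.from_pretrained("rinna/japanese-gpt2-xsmall")
+
+ # The model shifts labels internally, so passing input_ids as labels
+ # yields the average next-token cross-entropy loss.
+ input_ids = tokenizer.encode("これはテストの文です。", return_tensors="pt")
+ with torch.no_grad():
+     loss = model(input_ids, labels=input_ids).loss
+ print(math.exp(loss.item()))  # perplexity of this single sentence
+ ~~~~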
+
+ # Tokenization
+ The model uses a [sentencepiece](https://github.com/google/sentencepiece)-based tokenizer; the vocabulary was trained on Japanese Wikipedia using the official sentencepiece training script.
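+
+ To see the resulting subword segmentation, the tokenizer can be inspected directly; the sample sentence below is illustrative, not from the card.
+
+ ~~~~
+ from transformers import T5Tokenizer
+
+ tokenizer = T5Tokenizer.from_pretrained("rinna/japanese-gpt2-xsmall")
+ tokenizer.do_lower_case = True
+ # Prints the sentencepiece pieces the model consumes.
+ print(tokenizer.tokenize("こんにちは、世界。"))
+ ~~~~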
+
+ # License
+ [The MIT license](https://opensource.org/licenses/MIT)