---
language: ja
thumbnail: https://github.com/rinnakk/japanese-gpt2/blob/master/rinna.png
tags:
- ja
- japanese
- gpt2
- text-generation
- lm
- nlp
license: mit
datasets:
- cc100
- wikipedia
widget:
- text: 生命、宇宙、そして万物についての究極の疑問の答えは
---

# japanese-gpt2-xsmall
This repository provides an extra-small-sized Japanese GPT-2 model. The model was trained using code from the GitHub repository [rinnakk/japanese-pretrained-models](https://github.com/rinnakk/japanese-pretrained-models) by rinna Co., Ltd.
# How to use the model

**NOTE:** Use `T5Tokenizer` to instantiate the tokenizer.
```python
from transformers import T5Tokenizer, GPT2LMHeadModel

tokenizer = T5Tokenizer.from_pretrained("rinna/japanese-gpt2-xsmall")
tokenizer.do_lower_case = True  # due to some bug of tokenizer config loading

model = GPT2LMHeadModel.from_pretrained("rinna/japanese-gpt2-xsmall")
```
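A minimal generation example, using the widget prompt above. The sampling settings (`max_new_tokens`, `top_p`, etc.) are illustrative choices, not values prescribed by this card:

```python
import torch

prompt = "生命、宇宙、そして万物についての究極の疑問の答えは"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(
        input_ids,
        max_new_tokens=50,   # illustrative length limit
        do_sample=True,      # sample instead of greedy decoding
        top_p=0.95,          # nucleus sampling threshold (assumed value)
        pad_token_id=tokenizer.pad_token_id,
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```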
# Model architecture
A 6-layer, 512-hidden-size transformer-based language model.
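These dimensions can be read back from the loaded model's configuration; the attribute names below follow the standard `GPT2Config` fields:

```python
print(model.config.n_layer)  # number of transformer layers (6)
print(model.config.n_embd)   # hidden size (512)
```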
# Training
The model was trained on Japanese CC-100 and Japanese Wikipedia to optimize a traditional language modelling objective on 8\*V100 GPUs for around 4 days. It reaches around 28 perplexity on a chosen validation set from CC-100.
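For reference, validation perplexity of a causal language model can be estimated from its cross-entropy loss. A minimal sketch follows; the text and single-batch setup here are placeholders, not the validation set behind the reported figure of ~28:

```python
import math
import torch

# Placeholder validation text; the actual CC-100 validation split is not distributed with this card.
val_text = "日本語のテキストで構成された検証データの一例です。"

input_ids = tokenizer.encode(val_text, return_tensors="pt")
with torch.no_grad():
    # Passing labels=input_ids makes the causal LM return the mean token-level cross-entropy loss.
    loss = model(input_ids, labels=input_ids).loss

print(f"perplexity = {math.exp(loss.item()):.1f}")
```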
# Tokenization

The model uses a [sentencepiece](https://github.com/google/sentencepiece)-based tokenizer; the vocabulary was trained on Japanese Wikipedia using the official sentencepiece training script.
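A sketch of how such a vocabulary could be trained with the sentencepiece Python API; the input file, vocabulary size, and model type below are assumptions for illustration, not the exact settings used for this model:

```python
import sentencepiece as spm

# Assumed input: a plain-text dump of Japanese Wikipedia, one sentence per line.
spm.SentencePieceTrainer.train(
    input="ja_wiki.txt",          # hypothetical corpus file
    model_prefix="ja_gpt2_sp",    # outputs ja_gpt2_sp.model / ja_gpt2_sp.vocab
    vocab_size=32000,             # assumed vocabulary size
    character_coverage=0.9995,    # common setting for Japanese text
    model_type="unigram",         # sentencepiece's default subword algorithm
)
```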