---
language:
- "ja"
tags:
- "japanese"
- "masked-lm"
- "modernbert"
datasets:
- "globis-university/aozorabunko-clean"
- "wikimedia/wikipedia"
license: "apache-2.0"
pipeline_tag: "fill-mask"
mask_token: "[MASK]"
widget:
- text: "日本に着いたら[MASK]を訪ねなさい。"
---
# modernbert-large-japanese-char
## Model Description
This is a ModernBERT model pre-trained on Japanese Wikipedia and Aozora Bunko (青空文庫) texts. Training took 116 hours 46 minutes on eight NVIDIA A100-SXM4-40GB GPUs. You can fine-tune `modernbert-large-japanese-char` for downstream tasks such as [POS-tagging](https://huggingface.co./KoichiYasuoka/modernbert-large-japanese-char-upos) and [dependency-parsing](https://huggingface.co./KoichiYasuoka/modernbert-large-japanese-char-ud-triangular).
## How to Use
```py
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/modernbert-large-japanese-char")
model = AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/modernbert-large-japanese-char", trust_remote_code=True)
```
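To sanity-check the model, you can score the masked widget example above. The following is a minimal sketch assuming a PyTorch backend; the choice of five candidates is arbitrary:

```py
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/modernbert-large-japanese-char")
model = AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/modernbert-large-japanese-char", trust_remote_code=True)

text = "日本に着いたら[MASK]を訪ねなさい。"  # the widget example ("When you arrive in Japan, visit [MASK].")
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Locate the [MASK] position and list the five highest-scoring fillers
mask_index = (inputs.input_ids[0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
top5 = logits[0, mask_index[0]].topk(5).indices.tolist()
print([tokenizer.decode([i]) for i in top5])
```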