KoichiYasuoka committed
Commit cf467b5
Parent(s): a9eec7b

see also SuPar-Kanbun

README.md CHANGED
@@ -24,3 +24,8 @@ from transformers import AutoTokenizer,AutoModelForMaskedLM
 tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-classical-chinese-large-char")
 model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-classical-chinese-large-char")
 ```
+
+## See Also
+
+[SuPar-Kanbun](https://github.com/KoichiYasuoka/SuPar-Kanbun): Tokenizer POS-tagger and Dependency-parser for Classical Chinese
+
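The context lines of the diff load the model for masked language modeling with `AutoModelForMaskedLM`. A minimal usage sketch of what that enables, predicting a masked character in a Classical Chinese sentence (the example sentence and top-5 handling are illustrative and not taken from the model card):

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-classical-chinese-large-char")
model = AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-classical-chinese-large-char")

# Mask one character in a Classical Chinese sentence (sentence chosen for illustration)
text = "孟子見梁惠王"
masked = text.replace("見", tokenizer.mask_token)

inputs = tokenizer(masked, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Locate the mask position, then list its top-5 predicted characters
mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
top5 = logits[0, mask_pos].topk(5).indices[0]
print(tokenizer.convert_ids_to_tokens(top5.tolist()))
```

Using `tokenizer.mask_token` rather than a hard-coded `[MASK]` keeps the sketch independent of the tokenizer's exact special-token strings.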