---
language:
  - zh
thumbnail: https://ckip.iis.sinica.edu.tw/files/ckip_logo.png
tags:
  - pytorch
  - token-classification
  - bert
  - zh
license: gpl-3.0
---

# CKIP Oldhan BERT Base Chinese WS

This model provides word segmentation for ancient (oldhan) Chinese. Our training corpus spans four eras of the Chinese language.

## Homepage
* [ckiplab/han-transformers](https://github.com/ckiplab/han-transformers)

## Training Datasets
The copyright of the datasets belongs to the Institute of Linguistics, Academia Sinica.
* [Academia Sinica Tagged Corpus of Old Chinese (中央研究院上古漢語標記語料庫)](http://lingcorpus.iis.sinica.edu.tw/cgi-bin/kiwi/akiwi/kiwi.sh?ukey=-406192123&qtype=-1)
* [Academia Sinica Corpus of Middle Chinese (中央研究院中古漢語語料庫)](http://lingcorpus.iis.sinica.edu.tw/cgi-bin/kiwi/dkiwi/kiwi.sh?ukey=852967425&qtype=-1)
* [Academia Sinica Corpus of Early Modern Chinese (中央研究院近代漢語語料庫)](http://lingcorpus.iis.sinica.edu.tw/cgi-bin/kiwi/pkiwi/kiwi.sh?ukey=-299696128&qtype=-1)
* [Academia Sinica Corpus of Modern Chinese (中央研究院現代漢語語料庫)](http://lingcorpus.iis.sinica.edu.tw/cgi-bin/kiwi/mkiwi/kiwi.sh)

## Contributors
* Chin-Tung Lin at [CKIP](https://ckip.iis.sinica.edu.tw/)

## Usage

* Using our model in your script
    ```python
    from transformers import (
      AutoTokenizer,
      AutoModelForTokenClassification,
    )

    # Word segmentation is a token-classification task, so load the model
    # with its token-classification head.
    tokenizer = AutoTokenizer.from_pretrained("ckiplab/oldhan-bert-base-chinese-ws")
    model = AutoModelForTokenClassification.from_pretrained("ckiplab/oldhan-bert-base-chinese-ws")
    ```

* Using our model for inference
    ```python
    >>> from transformers import pipeline
    >>> classifier = pipeline("token-classification", model="ckiplab/oldhan-bert-base-chinese-ws")
    >>> classifier("帝堯曰放勳")
    ```
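The pipeline returns one label per character rather than segmented words. A minimal sketch of joining those labels into words, assuming the model uses the usual B/I scheme for word segmentation (`B` marks a word-initial character, `I` a word-internal one; the exact label names are an assumption, not confirmed by the source):

    ```python
    # Hedged sketch: merge per-character token-classification output into words.
    # Assumes "B" = word-initial and "I" = word-internal labels (hypothetical).
    def merge_ws(results):
        """results: list of dicts with 'word' and 'entity' keys, shaped like
        the output of a transformers token-classification pipeline."""
        words = []
        for item in results:
            char, label = item["word"], item["entity"]
            if label == "B" or not words:
                words.append(char)       # start a new word
            else:
                words[-1] += char        # extend the current word
        return words

    # Hypothetical pipeline output for the input 帝堯曰放勳:
    sample = [
        {"word": "帝", "entity": "B"},
        {"word": "堯", "entity": "I"},
        {"word": "曰", "entity": "B"},
        {"word": "放", "entity": "B"},
        {"word": "勳", "entity": "I"},
    ]
    print(merge_ws(sample))  # ['帝堯', '曰', '放勳']
    ```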