---
license: cc-by-4.0
language:
- pl
library_name: transformers
tags:
- tokenizer
- fast-tokenizer
- polish
datasets:
- radlab/legal-mc4-pl
- radlab/wikipedia-pl
- radlab/kgr10
- clarin-knext/msmarco-pl
- clarin-knext/fiqa-pl
- clarin-knext/scifact-pl
- clarin-knext/nfcorpus-pl
---

This is a Polish fast tokenizer.

Number of documents used to train the tokenizer:
 - 25 088 398


Sample usage with `transformers`:

```python
from transformers import AutoTokenizer

# Load the tokenizer from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained('radlab/polish-fast-tokenizer')

# Encode a sample sentence and decode it back to text
tokenizer.decode(tokenizer("Ala ma kota i psa").input_ids)
```
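
As an additional, purely illustrative sketch, you can also inspect the subword tokens and their vocabulary ids with the standard `transformers` tokenizer API (the exact splits and ids depend on the trained vocabulary):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('radlab/polish-fast-tokenizer')

# Split the sentence into subword tokens and map them to vocabulary ids
tokens = tokenizer.tokenize("Ala ma kota i psa")
ids = tokenizer.convert_tokens_to_ids(tokens)
print(tokens)
print(ids)
```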