
This is only a tokenizer.

  • This tokenizer is a PreTrainedTokenizerFast trained on the raygx/Nepali-Extended-Corpus dataset.
  • It was trained from scratch with the Tokenizers library.
  • Its pipeline components are:
    • Model: Tokenizer(WordPiece(unk_token="[UNK]"))
    • Normalizer: normalizers.Sequence([NFD(), Strip()])
    • Pre-tokenizer: pre_tokenizers.Sequence([Whitespace(), Digits(individual_digits=True), Punctuation()])
    • Post-processor: BertProcessing
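
The components above can be assembled into a pipeline roughly like this. This is a sketch, not the exact training script: the [SEP]/[CLS] token ids passed to BertProcessing are placeholders, and the real tokenizer would additionally be trained on the corpus and wrapped in PreTrainedTokenizerFast.

```python
# Sketch: reassembling the pipeline described above with the Tokenizers library.
from tokenizers import Tokenizer, normalizers, pre_tokenizers
from tokenizers.models import WordPiece
from tokenizers.normalizers import NFD, Strip
from tokenizers.pre_tokenizers import Whitespace, Digits, Punctuation
from tokenizers.processors import BertProcessing

# Model: WordPiece with an explicit unknown token
tokenizer = Tokenizer(WordPiece(unk_token="[UNK]"))

# Normalizer: Unicode NFD decomposition, then strip surrounding whitespace
tokenizer.normalizer = normalizers.Sequence([NFD(), Strip()])

# Pre-tokenizer: split on whitespace, isolate each digit, split punctuation
tokenizer.pre_tokenizer = pre_tokenizers.Sequence(
    [Whitespace(), Digits(individual_digits=True), Punctuation()]
)

# Post-processor: BERT-style [CLS] ... [SEP] framing.
# The (token, id) pairs are placeholder assumptions; the real ids come
# from the trained vocabulary.
tokenizer.post_processor = BertProcessing(("[SEP]", 1), ("[CLS]", 0))
```

Training would then be a matter of building a `WordPieceTrainer` with the special tokens and calling `tokenizer.train()` (or `train_from_iterator()`) over the corpus.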

Code is available here.