KoELECTRA v3 (Base Discriminator)

Pretrained ELECTRA Language Model for Korean (koelectra-base-v3-discriminator)

For more details, please see the original repository.

Usage

Load model and tokenizer

>>> from transformers import ElectraModel, ElectraTokenizer

>>> model = ElectraModel.from_pretrained("monologg/koelectra-base-v3-discriminator")
>>> tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-base-v3-discriminator")

Tokenizer example

>>> from transformers import ElectraTokenizer
>>> tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-base-v3-discriminator")
>>> tokenizer.tokenize("[CLS] ํ•œ๊ตญ์–ด ELECTRA๋ฅผ ๊ณต์œ ํ•ฉ๋‹ˆ๋‹ค. [SEP]")
['[CLS]', 'ํ•œ๊ตญ์–ด', 'EL', '##EC', '##TRA', '##๋ฅผ', '๊ณต์œ ', '##ํ•ฉ๋‹ˆ๋‹ค', '.', '[SEP]']
>>> tokenizer.convert_tokens_to_ids(['[CLS]', 'ํ•œ๊ตญ์–ด', 'EL', '##EC', '##TRA', '##๋ฅผ', '๊ณต์œ ', '##ํ•ฉ๋‹ˆ๋‹ค', '.', '[SEP]'])
[2, 11229, 29173, 13352, 25541, 4110, 7824, 17788, 18, 3]
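
The same result can be obtained in one step; the sketch below (my addition, assuming the standard transformers tokenizer API) relies on encode() inserting [CLS] and [SEP] automatically, so the plain sentence should reproduce the ids shown above for the manually marked-up string:

from transformers import ElectraTokenizer

tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-base-v3-discriminator")

# encode() adds [CLS] and [SEP] itself.
ids = tokenizer.encode("ํ•œ๊ตญ์–ด ELECTRA๋ฅผ ๊ณต์œ ํ•ฉ๋‹ˆ๋‹ค.")
print(ids)  # expected: [2, 11229, 29173, 13352, 25541, 4110, 7824, 17788, 18, 3]

# decode() reverses the mapping and merges the WordPiece pieces back together.
print(tokenizer.decode(ids, skip_special_tokens=True))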

Example using ElectraForPreTraining

import torch
from transformers import ElectraForPreTraining, ElectraTokenizer

discriminator = ElectraForPreTraining.from_pretrained("monologg/koelectra-base-v3-discriminator")
tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-base-v3-discriminator")

# Original sentence: "I just ate a meal."
sentence = "๋‚˜๋Š” ๋ฐฉ๊ธˆ ๋ฐฅ์„ ๋จน์—ˆ๋‹ค."

# Corrupted sentence: "๋ฐฉ๊ธˆ" (just now) is replaced with "๋‚ด์ผ" (tomorrow).
fake_sentence = "๋‚˜๋Š” ๋‚ด์ผ ๋ฐฅ์„ ๋จน์—ˆ๋‹ค."

fake_tokens = tokenizer.tokenize(fake_sentence)
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")

discriminator_outputs = discriminator(fake_inputs)
# The discriminator emits one logit per token; a positive logit means the
# token is predicted to be a replacement. Map the sign to {0, 1}.
predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2)

# predictions has shape (1, seq_len); drop the batch dimension and strip the
# [CLS]/[SEP] positions so the list lines up with fake_tokens.
print(list(zip(fake_tokens, predictions[0].tolist()[1:-1])))
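
Assuming the model detects the replacement, the printed pairs should all read (token, 0.0) except ('๋‚ด์ผ', 1.0), since that is the only token that differs from the original sentence.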