---
language:
- ko
pipeline_tag: text2text-generation
---
## ํ•œ๊ตญ์–ด ๋งž์ถค๋ฒ• ๊ต์ •๊ธฐ(Korean Typos Corrector)
- ETRI-et5 ๋ชจ๋ธ์„ ๊ธฐ๋ฐ˜์œผ๋กœ fine-tuningํ•œ ํ•œ๊ตญ์–ด ๊ตฌ์–ด์ฒด ์ „์šฉ ๋งž์ถค๋ฒ• ๊ต์ •๊ธฐ ์ž…๋‹ˆ๋‹ค.
## Base PLM (ET5)
- ETRI ET5 (https://aiopen.etri.re.kr/et5Model)
## Dataset
- Spelling-correction data from the Modu Corpus (모두의 말뭉치) by the National Institute of Korean Language (https://corpus.korean.go.kr/request/reausetMain.do?lang=ko)
## Data Preprocessing
1. Removed special characters: commas (,) and periods (.)
2. Removed null values ("")
3. Removed sentences that are too short (length ≤ 2)
4. Removed words containing name tags such as &name& or name1 (only the word is removed; the sentence is kept)
- Total: 318,882 pairs
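The steps above can be sketched roughly as follows. This is a minimal illustration, not the authors' actual preprocessing script; the function name and the tag pattern are assumptions.

```python
import re

# Hypothetical tag pattern (the actual corpus script is not published):
# matches whole words containing &name&-style or name1-style tags.
NAME_TAG = re.compile(r"\S*(?:&name\d*&|name\d+)\S*")

def clean_pair(src: str, tgt: str):
    """Apply the four preprocessing steps to one (typo, corrected) pair."""
    # 2. drop null values ("")
    if not src or not tgt:
        return None
    # 1. remove commas and periods
    src = src.replace(",", "").replace(".", "")
    tgt = tgt.replace(",", "").replace(".", "")
    # 4. drop words containing name tags, keep the rest of the sentence
    src = " ".join(NAME_TAG.sub("", src).split())
    tgt = " ".join(NAME_TAG.sub("", tgt).split())
    # 3. drop pairs with sentences that are too short (length <= 2)
    if len(src) <= 2 or len(tgt) <= 2:
        return None
    return src, tgt

print(clean_pair("안뇽, 방가워 name1", "안녕, 반가워"))  # ('안뇽 방가워', '안녕 반가워')
```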
***
## How to use
```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Load the fine-tuned T5 model and tokenizer
model = T5ForConditionalGeneration.from_pretrained("j5ng/et5-typos-corrector")
tokenizer = T5Tokenizer.from_pretrained("j5ng/et5-typos-corrector")

device = "cuda:0" if torch.cuda.is_available() else "cpu"
# device = "mps:0" if torch.backends.mps.is_available() else "cpu"  # for Apple Silicon
model = model.to(device)

# Example input sentence
input_text = "아늬 진짜 무ㅓ하냐고"

# Encode the input with the task prefix
input_encoding = tokenizer("맞춤법을 고쳐주세요: " + input_text, return_tensors="pt")
input_ids = input_encoding.input_ids.to(device)
attention_mask = input_encoding.attention_mask.to(device)

# Generate the corrected sentence
output_encoding = model.generate(
    input_ids=input_ids,
    attention_mask=attention_mask,
    max_length=128,
    num_beams=5,
    early_stopping=True,
)

# Decode the output
output_text = tokenizer.decode(output_encoding[0], skip_special_tokens=True)

print(output_text)  # 아니 진짜 뭐 하냐고.
```
***
## With Transformers Pipeline
```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer, pipeline

model = T5ForConditionalGeneration.from_pretrained('j5ng/et5-typos-corrector')
tokenizer = T5Tokenizer.from_pretrained('j5ng/et5-typos-corrector')

typos_corrector = pipeline(
    "text2text-generation",
    model=model,
    tokenizer=tokenizer,
    device=0 if torch.cuda.is_available() else -1,
    framework="pt",
)

input_text = "완죤 어이업ㅅ네진쨬ㅋㅋㅋ"
output_text = typos_corrector(
    "맞춤법을 고쳐주세요: " + input_text,
    max_length=128,
    num_beams=5,
    early_stopping=True,
)[0]['generated_text']

print(output_text)  # 완전 어이없네 진짜 ᄏᄏᄏᄏ.
```
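Both snippets prepend the same task prefix, "맞춤법을 고쳐주세요: ", which the model was fine-tuned with; forgetting it degrades results. A small helper (hypothetical, not part of the released model) keeps prompt construction and decoding in one place:

```python
PREFIX = "맞춤법을 고쳐주세요: "  # task prefix used during fine-tuning

def build_prompt(text: str) -> str:
    """Prepend the task prefix to a raw sentence."""
    return PREFIX + text

def correct(corrector, text: str, **gen_kwargs) -> str:
    """Run a text2text-generation pipeline (like `typos_corrector` above)
    on one sentence and return the corrected string."""
    params = dict(max_length=128, num_beams=5, early_stopping=True)
    params.update(gen_kwargs)
    return corrector(build_prompt(text), **params)[0]["generated_text"]
```

Usage then reduces to `correct(typos_corrector, "완죤 어이업ㅅ네진쨬ㅋㅋㅋ")`.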