---
language: vi
tags:
- vi
- vietnamese
- gpt2
- text-generation
- lm
- nlp
datasets:
- oscar
widget:
- text: "The answer to the ultimate question of life, the universe, and everything is"
---
|
|
|
# GPT-2
|
|
|
Pretrained model on the Vietnamese language using a causal language modeling (CLM) objective. It was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
|
|
|
# How to use the model
|
|
|
~~~~python
from transformers import GPT2Tokenizer, AutoModelForCausalLM

tokenizer = GPT2Tokenizer.from_pretrained("nhanv/vi-gpt2")
model = AutoModelForCausalLM.from_pretrained("nhanv/vi-gpt2")
~~~~
|
|
|
# Model architecture

A 12-layer, 768-hidden-size transformer-based language model.
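These dimensions match GPT-2 small, whose parameter count can be estimated from the architecture alone. A sketch assuming the standard GPT-2 layout (1024-token context, two LayerNorms per block, biased projections); the vocabulary size of 50257 is the original GPT-2 value, and this model's Vietnamese tokenizer may use a different vocabulary:

```python
def gpt2_param_count(n_layer=12, d_model=768, n_ctx=1024, vocab=50257):
    """Estimate parameter count of a GPT-2-style transformer."""
    # Token embeddings (shared with the output head) + position embeddings.
    emb = vocab * d_model + n_ctx * d_model
    # Attention: fused QKV projection (d -> 3d) + output projection, with biases.
    attn = 4 * d_model * d_model + 4 * d_model
    # MLP: 4x expansion (c_fc) and contraction (c_proj), with biases.
    mlp = 2 * 4 * d_model * d_model + 4 * d_model + d_model
    # Two LayerNorms per block, each with scale and bias vectors.
    ln = 2 * 2 * d_model
    block = attn + mlp + ln
    final_ln = 2 * d_model
    return emb + n_layer * block + final_ln

print(gpt2_param_count())  # 12-layer, 768-hidden configuration
```

With the default arguments this yields 124,439,808 parameters, the published size of GPT-2 small.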
|
|