Update README.md
README.md CHANGED
````diff
@@ -25,7 +25,6 @@ license: mit
 - **Model Type:** Transformer-based language model
 - **Language(s):** English
 - **License:** [MIT License](https://github.com/openai/finetune-transformer-lm/blob/master/LICENSE)
-- **Related Models:** [GPT2](https://huggingface.co/gpt2), [GPT2-Medium](https://huggingface.co/gpt2-medium), [GPT2-Large](https://huggingface.co/gpt2-large) and [GPT2-XL](https://huggingface.co/gpt2-xl)
 - **Resources for more information:**
   - [Research Paper](https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf)
   - [OpenAI Blog Post](https://openai.com/blog/language-unsupervised/)
````
````diff
@@ -39,7 +38,7 @@ set a seed for reproducibility:
 
 ```python
 >>> from transformers import pipeline, set_seed
->>> generator = pipeline('text-generation', model='
+>>> generator = pipeline('text-generation', model='lgaalves/gpt1')
 >>> set_seed(42)
 >>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
 
````
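With this hunk applied, the quickstart snippet points at the `lgaalves/gpt1` checkpoint. A minimal sketch of running it end to end, assuming that checkpoint is publicly available on the Hub (the pipeline returns a list of dicts, each with a `generated_text` key):

```python
from transformers import pipeline, set_seed

# Text-generation pipeline built from the checkpoint referenced in the diff
# (assumed to be publicly downloadable).
generator = pipeline("text-generation", model="lgaalves/gpt1")
set_seed(42)  # fix the sampling seed so runs are reproducible

results = generator(
    "Hello, I'm a language model,",
    max_length=30,
    num_return_sequences=5,
)
# Each element is a dict with a 'generated_text' entry.
for i, result in enumerate(results):
    print(f"[{i}] {result['generated_text']}")
```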
````diff
@@ -56,8 +55,8 @@ Here is how to use this model in PyTorch:
 from transformers import OpenAIGPTTokenizer, OpenAIGPTModel
 import torch
 
-tokenizer = OpenAIGPTTokenizer.from_pretrained("
-model = OpenAIGPTModel.from_pretrained("
+tokenizer = OpenAIGPTTokenizer.from_pretrained("lgaalves/gpt1")
+model = OpenAIGPTModel.from_pretrained("lgaalves/gpt1")
 
 inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
 outputs = model(**inputs)
````
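The PyTorch feature-extraction snippet now loads the same checkpoint. A short sketch of what the updated lines return, again assuming `lgaalves/gpt1` resolves on the Hub; `OpenAIGPTModel` exposes per-token hidden states via `last_hidden_state`:

```python
import torch
from transformers import OpenAIGPTTokenizer, OpenAIGPTModel

# Assumed-public checkpoint from the diff.
tokenizer = OpenAIGPTTokenizer.from_pretrained("lgaalves/gpt1")
model = OpenAIGPTModel.from_pretrained("lgaalves/gpt1")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():  # inference only, no gradients needed
    outputs = model(**inputs)

# Per-token features: (batch_size, sequence_length, hidden_size)
print(outputs.last_hidden_state.shape)
```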
````diff
@@ -70,8 +69,8 @@ and in TensorFlow:
 ```python
 from transformers import OpenAIGPTTokenizer, TFOpenAIGPTModel
 
-tokenizer = OpenAIGPTTokenizer.from_pretrained("
-model = TFOpenAIGPTModel.from_pretrained("
+tokenizer = OpenAIGPTTokenizer.from_pretrained("lgaalves/gpt1")
+model = TFOpenAIGPTModel.from_pretrained("lgaalves/gpt1")
 
 inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
 outputs = model(inputs)
````
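The TensorFlow hunk mirrors the PyTorch one, and the same assumption about the checkpoint applies. `TFOpenAIGPTModel` returns an equivalent `last_hidden_state`, converted to NumPy here for inspection:

```python
from transformers import OpenAIGPTTokenizer, TFOpenAIGPTModel

# Assumed-public checkpoint from the diff.
tokenizer = OpenAIGPTTokenizer.from_pretrained("lgaalves/gpt1")
model = TFOpenAIGPTModel.from_pretrained("lgaalves/gpt1")

inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)

# Same (batch_size, sequence_length, hidden_size) features as the PyTorch path.
print(outputs.last_hidden_state.numpy().shape)
```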
````diff
@@ -104,7 +103,7 @@ Predictions generated by this model can include disturbing and harmful stereotyp
 
 ```python
 >>> from transformers import pipeline, set_seed
->>> generator = pipeline('text-generation', model='
+>>> generator = pipeline('text-generation', model='lgaalves/gpt1')
 >>> set_seed(42)
 >>> generator("The man worked as a", max_length=10, num_return_sequences=5)
 
````
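The limitations example gets the same checkpoint swap. A sketch that extends it with a paired prompt to make the stereotype comparison concrete; the second prompt is an illustration here, not part of the diff:

```python
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="lgaalves/gpt1")  # assumed-public checkpoint
set_seed(42)

# Compare completions for paired prompts to surface occupation stereotypes.
# The "woman" prompt is added for illustration; the diff only touches the "man" prompt line.
for prompt in ("The man worked as a", "The woman worked as a"):
    print(prompt)
    for result in generator(prompt, max_length=10, num_return_sequences=5):
        print("  ", result["generated_text"])
```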