---
library_name: keras-hub
license: mit
language:
- en
tags:
- text-generation
---
## Model Overview

GPT-2 is a language model published by OpenAI. Models are trained on WebText, and range in size from 124 million to 1.5 billion parameters. See the model card below for benchmarks, data sources, and intended use cases.

Weights are released under the [MIT License](https://opensource.org/license/mit). Keras model code is released under the [Apache 2 License](https://github.com/keras-team/keras-hub/blob/master/LICENSE).

## Links

* [GPT-2 Quickstart Notebook](https://www.kaggle.com/code/gabrielrasskin/gpt-2-quickstart)
* [GPT-2 API Documentation](https://keras.io/api/keras_hub/models/gpt2/)
* [GPT-2 Model Card](https://github.com/openai/gpt-2/blob/master/model_card.md)
* [KerasHub Beginner Guide](https://keras.io/guides/keras_hub/getting_started/)
* [KerasHub Model Publishing Guide](https://keras.io/guides/keras_hub/upload/)

## Installation

Keras and KerasHub can be installed with:

```
pip install -U -q keras-hub
pip install -U -q "keras>=3"
```

JAX, TensorFlow, and Torch come preinstalled in Kaggle Notebooks. For instructions on installing them in another environment, see the [Keras Getting Started](https://keras.io/getting_started/) page.

## Presets

The following model checkpoints are provided by the Keras team. Full code examples for each are available below.

| Preset name                  | Parameters | Description                                                                                           |
|------------------------------|------------|-------------------------------------------------------------------------------------------------------|
| `gpt2_base_en`               | 124.44M    | 12-layer GPT-2 model where case is maintained. Trained on WebText.                                    |
| `gpt2_medium_en`             | 354.82M    | 24-layer GPT-2 model where case is maintained. Trained on WebText.                                    |
| `gpt2_large_en`              | 774.03M    | 36-layer GPT-2 model where case is maintained. Trained on WebText.                                    |
| `gpt2_extra_large_en`        | 1.56B      | 48-layer GPT-2 model where case is maintained. Trained on WebText.                                    |
| `gpt2_base_en_cnn_dailymail` | 124.44M    | 12-layer GPT-2 model where case is maintained. Fine-tuned on the CNN/DailyMail summarization dataset. |

## Prompts

GPT-2 models are trained on WebText, so prompts should be formatted as text to complete. For example, given

```python
prompt = "Keras is a "
```

GPT-2 will aim to complete the sentence.

## Example Usage

```python
import keras
import keras_hub
import numpy as np
```

Use `generate()` to do text generation.

```python
gpt2_lm = keras_hub.models.GPT2CausalLM.from_preset("gpt2_extra_large_en")
gpt2_lm.generate("I want to say", max_length=30)

# Generate with batched prompts.
gpt2_lm.generate(["This is a", "Where are you"], max_length=30)
```

Compile the `generate()` function with a custom sampler.

```python
gpt2_lm = keras_hub.models.GPT2CausalLM.from_preset("gpt2_extra_large_en")
gpt2_lm.compile(sampler="greedy")
gpt2_lm.generate("I want to say", max_length=30)

gpt2_lm.compile(sampler=keras_hub.samplers.BeamSampler(num_beams=2))
gpt2_lm.generate("I want to say", max_length=30)
```

Use `generate()` without preprocessing.

```python
# Prompt the model with `5338, 318` (the token ids for `"Who is"`).
# Use `"padding_mask"` to indicate values that should not be overridden.
prompt = {
    "token_ids": np.array([[5338, 318, 0, 0, 0]] * 2),
    "padding_mask": np.array([[1, 1, 0, 0, 0]] * 2),
}

gpt2_lm = keras_hub.models.GPT2CausalLM.from_preset(
    "gpt2_extra_large_en",
    preprocessor=None,
)
gpt2_lm.generate(prompt)
```
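The hard-coded ids above come from the GPT-2 tokenizer. To build such a prompt for other text, here is a minimal sketch; it assumes `keras_hub.tokenizers.GPT2Tokenizer` shares preset names with the model, and the exact array type the tokenizer returns can vary with the backend.

```python
import numpy as np
import keras_hub

# Tokenize the prompt text by hand instead of using the built-in preprocessor.
tokenizer = keras_hub.tokenizers.GPT2Tokenizer.from_preset("gpt2_extra_large_en")
ids = np.array(tokenizer("Who is"))  # -> [5338, 318]

# Pad out to the total generation buffer length.
length = 5
token_ids = np.zeros(length, dtype="int32")
token_ids[: len(ids)] = ids
padding_mask = np.zeros(length, dtype="int32")
padding_mask[: len(ids)] = 1  # 1 marks real prompt tokens, 0 marks positions to fill.

prompt = {
    "token_ids": np.array([token_ids] * 2),
    "padding_mask": np.array([padding_mask] * 2),
}
```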
Call `fit()` on a single batch.

```python
features = ["The quick brown fox jumped.", "I forgot my homework."]

gpt2_lm = keras_hub.models.GPT2CausalLM.from_preset("gpt2_extra_large_en")
gpt2_lm.fit(x=features, batch_size=2)
```

Call `fit()` without preprocessing.

```python
x = {
    "token_ids": np.array([[50256, 1, 2, 3, 4]] * 2),
    "padding_mask": np.array([[1, 1, 1, 1, 1]] * 2),
}
y = np.array([[1, 2, 3, 4, 50256]] * 2)
sw = np.array([[1, 1, 1, 1, 1]] * 2)

gpt2_lm = keras_hub.models.GPT2CausalLM.from_preset(
    "gpt2_extra_large_en",
    preprocessor=None,
)
gpt2_lm.fit(x=x, y=y, sample_weight=sw, batch_size=2)
```

## Example Usage with Hugging Face URI

```python
import keras
import keras_hub
import numpy as np
```

Use `generate()` to do text generation.

```python
gpt2_lm = keras_hub.models.GPT2CausalLM.from_preset("hf://keras/gpt2_extra_large_en")
gpt2_lm.generate("I want to say", max_length=30)

# Generate with batched prompts.
gpt2_lm.generate(["This is a", "Where are you"], max_length=30)
```

Compile the `generate()` function with a custom sampler.

```python
gpt2_lm = keras_hub.models.GPT2CausalLM.from_preset("hf://keras/gpt2_extra_large_en")
gpt2_lm.compile(sampler="greedy")
gpt2_lm.generate("I want to say", max_length=30)

gpt2_lm.compile(sampler=keras_hub.samplers.BeamSampler(num_beams=2))
gpt2_lm.generate("I want to say", max_length=30)
```

Use `generate()` without preprocessing.

```python
# Prompt the model with `5338, 318` (the token ids for `"Who is"`).
# Use `"padding_mask"` to indicate values that should not be overridden.
prompt = {
    "token_ids": np.array([[5338, 318, 0, 0, 0]] * 2),
    "padding_mask": np.array([[1, 1, 0, 0, 0]] * 2),
}

gpt2_lm = keras_hub.models.GPT2CausalLM.from_preset(
    "hf://keras/gpt2_extra_large_en",
    preprocessor=None,
)
gpt2_lm.generate(prompt)
```

Call `fit()` on a single batch.

```python
features = ["The quick brown fox jumped.", "I forgot my homework."]

gpt2_lm = keras_hub.models.GPT2CausalLM.from_preset("hf://keras/gpt2_extra_large_en")
gpt2_lm.fit(x=features, batch_size=2)
```

Call `fit()` without preprocessing.

```python
x = {
    "token_ids": np.array([[50256, 1, 2, 3, 4]] * 2),
    "padding_mask": np.array([[1, 1, 1, 1, 1]] * 2),
}
y = np.array([[1, 2, 3, 4, 50256]] * 2)
sw = np.array([[1, 1, 1, 1, 1]] * 2)

gpt2_lm = keras_hub.models.GPT2CausalLM.from_preset(
    "hf://keras/gpt2_extra_large_en",
    preprocessor=None,
)
gpt2_lm.fit(x=x, y=y, sample_weight=sw, batch_size=2)
```
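A fine-tuned model can be written back out as a local preset and reloaded or shared. This is a minimal sketch, not from the model card itself: `./gpt2_finetuned` is an arbitrary output directory, and the Kaggle handle in the commented-out upload call is a placeholder (see the KerasHub Model Publishing Guide linked above).

```python
# Save the fine-tuned task as a local preset directory.
gpt2_lm.save_to_preset("./gpt2_finetuned")

# A local path works with `from_preset()` just like a built-in preset name.
restored = keras_hub.models.GPT2CausalLM.from_preset("./gpt2_finetuned")

# To share the preset, upload it (the handle below is a placeholder):
# keras_hub.upload_preset("kaggle://<username>/gpt2/keras/gpt2_finetuned", "./gpt2_finetuned")
```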