DistilBERT
DistilBERT is a set of language models published by HuggingFace. They are efficient, distilled versions of BERT, intended for text classification and embedding rather than text generation. See the model card below for benchmarks, data sources, and intended use cases.
Weights and Keras model code are released under the Apache 2 License.
Keras and KerasHub can be installed with:
```
pip install -U -q keras-hub
pip install -U -q "keras>=3"
```
JAX, TensorFlow, and Torch come preinstalled in Kaggle Notebooks. For instructions on installing them in another environment, see the Keras Getting Started page.
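Keras 3 runs on any of these backends; the backend is selected with the `KERAS_BACKEND` environment variable, which must be set before `keras` is imported. A minimal sketch (the `"jax"` choice here is just an example):

```python
import os

# Must be set before the first `import keras`; one of "jax", "tensorflow", "torch".
os.environ["KERAS_BACKEND"] = "jax"

import keras
```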
The following model checkpoints are provided by the Keras team. Full code examples for each are available below.
| Preset name | Parameters | Description |
|---|---|---|
| distil_bert_base_en_uncased | 66.36M | 6-layer model where all input is lowercased. |
| distil_bert_base_en | 65.19M | 6-layer model where case is maintained. |
| distil_bert_base_multi | 134.73M | 6-layer multilingual model where case is maintained. |
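As a quick sanity check, any preset name from the table can also be loaded as a bare encoder, without a classification head, via `DistilBertBackbone`. A minimal sketch:

```python
import keras_hub

# Load only the 6-layer encoder from a preset.
backbone = keras_hub.models.DistilBertBackbone.from_preset(
    "distil_bert_base_en_uncased"
)
backbone.summary()
```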
```python
import keras
import keras_hub
import numpy as np
```
Raw string data.
```python
features = ["The quick brown fox jumped.", "I forgot my homework."]
labels = [0, 3]

# Use a shorter sequence length.
preprocessor = keras_hub.models.DistilBertPreprocessor.from_preset(
    "distil_bert_base_multi",
    sequence_length=128,
)
# Pretrained classifier.
classifier = keras_hub.models.DistilBertClassifier.from_preset(
    "distil_bert_base_multi",
    num_classes=4,
    preprocessor=preprocessor,
)
classifier.fit(x=features, y=labels, batch_size=2)

# Re-compile (e.g., with a new learning rate).
classifier.compile(
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer=keras.optimizers.Adam(5e-5),
    jit_compile=True,
)
# Access backbone programmatically (e.g., to change `trainable`).
classifier.backbone.trainable = False
# Fit again.
classifier.fit(x=features, y=labels, batch_size=2)
```
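Because the preprocessor is attached to the task, the fine-tuned classifier can score raw strings directly through the standard Keras `predict` API. A minimal sketch (the input sentence is just an example):

```python
# Logits with shape (batch, num_classes); argmax gives the predicted class.
logits = classifier.predict(["A brand new sentence to classify."])
print(logits.argmax(axis=-1))
```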
Preprocessed integer data.
```python
features = {
    "token_ids": np.ones(shape=(2, 12), dtype="int32"),
    "padding_mask": np.array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0]] * 2),
}
labels = [0, 3]

# Pretrained classifier without preprocessing.
classifier = keras_hub.models.DistilBertClassifier.from_preset(
    "distil_bert_base_multi",
    num_classes=4,
    preprocessor=None,
)
classifier.fit(x=features, y=labels, batch_size=2)
```
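The `token_ids`/`padding_mask` dictionary above mirrors what the preprocessor produces, so real inputs in this format can be generated the same way. A minimal sketch:

```python
preprocessor = keras_hub.models.DistilBertPreprocessor.from_preset(
    "distil_bert_base_multi",
    sequence_length=12,
)
x = preprocessor(["The quick brown fox jumped."])
print(x["token_ids"].shape, x["padding_mask"].shape)  # (1, 12) (1, 12)
```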
The same checkpoints can also be loaded directly from the Hugging Face Hub by prefixing the preset handle with `hf://`, as the following examples show.

```python
import keras
import keras_hub
import numpy as np
```
Raw string data.
```python
features = ["The quick brown fox jumped.", "I forgot my homework."]
labels = [0, 3]

# Use a shorter sequence length.
preprocessor = keras_hub.models.DistilBertPreprocessor.from_preset(
    "hf://keras/distil_bert_base_multi",
    sequence_length=128,
)
# Pretrained classifier.
classifier = keras_hub.models.DistilBertClassifier.from_preset(
    "hf://keras/distil_bert_base_multi",
    num_classes=4,
    preprocessor=preprocessor,
)
classifier.fit(x=features, y=labels, batch_size=2)

# Re-compile (e.g., with a new learning rate).
classifier.compile(
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer=keras.optimizers.Adam(5e-5),
    jit_compile=True,
)
# Access backbone programmatically (e.g., to change `trainable`).
classifier.backbone.trainable = False
# Fit again.
classifier.fit(x=features, y=labels, batch_size=2)
```
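The fine-tuned task is an ordinary Keras model, so it should be possible to save and restore it with the standard `.keras` format (a minimal sketch; the file name is arbitrary):

```python
classifier.save("finetuned_distilbert.keras")
restored = keras.models.load_model("finetuned_distilbert.keras")
```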
Preprocessed integer data.
```python
features = {
    "token_ids": np.ones(shape=(2, 12), dtype="int32"),
    "padding_mask": np.array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0]] * 2),
}
labels = [0, 3]

# Pretrained classifier without preprocessing.
classifier = keras_hub.models.DistilBertClassifier.from_preset(
    "hf://keras/distil_bert_base_multi",
    num_classes=4,
    preprocessor=None,
)
classifier.fit(x=features, y=labels, batch_size=2)
```
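Since the collection is also intended for text embedding, the backbone's sequence output can be used as contextual embeddings. A minimal sketch, where the mean-pooling step is an assumption (pooling the first token is a common alternative), not part of the examples above:

```python
import keras
import keras_hub

backbone = keras_hub.models.DistilBertBackbone.from_preset(
    "distil_bert_base_multi"
)
preprocessor = keras_hub.models.DistilBertPreprocessor.from_preset(
    "distil_bert_base_multi"
)

x = preprocessor(["The quick brown fox jumped."])
sequence_output = backbone(x)  # (batch, sequence_length, hidden_dim)

# Mean-pool over the token axis for a fixed-size sentence embedding.
embedding = keras.ops.mean(sequence_output, axis=1)
```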