GPT2Preprocessor class
keras_nlp.models.GPT2Preprocessor(
    tokenizer, sequence_length=1024, add_start_token=True, add_end_token=True, **kwargs
)
GPT2 preprocessing layer which tokenizes and packs inputs.
This preprocessing layer will do two things:

- Tokenize the inputs using the tokenizer.
- Construct a dictionary with keys "token_ids" and "padding_mask" that can be passed directly to a keras_nlp.models.GPT2Backbone.

This layer can be used directly with tf.data.Dataset.map to preprocess string data in the (x, y, sample_weight) format used by keras.Model.fit.
The call method of this layer accepts three arguments, x, y, and
sample_weight. x can be a python string or tensor representing a single
segment, a list of python strings representing a batch of single segments,
or a list of tensors representing multiple segments to be packed together.
y and sample_weight are both optional, can have any format, and will be
passed through unaltered.
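For instance, labels and sample weights are carried along with the preprocessed features. A minimal sketch of this pass-through behavior (the "gpt2_base_en" preset and the label values are illustrative):
preprocessor = keras_nlp.models.GPT2Preprocessor.from_preset("gpt2_base_en")
# Only x is tokenized and packed; y and sample_weight are returned as given.
x, y, sample_weight = preprocessor(
    x="The quick brown fox jumped.",
    y=1,
    sample_weight=1.0,
)
# x is a dict with "token_ids" and "padding_mask"; y and sample_weight are unaltered.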
GPT2Preprocessor forces the input to have only one segment, as GPT2 is
mainly used for generation tasks. For tasks having multi-segment inputs
like "glue/mnli", please use a model designed for classification purposes
such as BERT or RoBERTa.
Arguments

- tokenizer: A keras_nlp.models.GPT2Tokenizer instance.
- sequence_length: The length of the packed inputs.
- add_start_token: If True, the preprocessor will prepend the tokenizer start token to each input sequence.
- add_end_token: If True, the preprocessor will append the tokenizer end token to each input sequence.

Call arguments

- x: A string, tf.Tensor or list of python strings.
- y: Any label data. Will be passed through unaltered.
- sample_weight: Any label weight data. Will be passed through unaltered.
- sequence_length: Pass to override the configured sequence_length of the layer.

Examples
Directly calling the layer on data.
import keras_nlp

preprocessor = keras_nlp.models.GPT2Preprocessor.from_preset("gpt2_base_en")
# Tokenize and pack a single sentence.
preprocessor("The quick brown fox jumped.")
# Tokenize a batch of single sentences.
preprocessor(["The quick brown fox jumped.", "Call me Ishmael."])
# Custom vocabulary.
features = ["a quick fox.", "a fox quick."]
vocab = {"<|endoftext|>": 0, "a": 4, "Ġquick": 5, "Ġfox": 6}
merges = ["Ġ q", "u i", "c k", "ui ck", "Ġq uick"]
merges += ["Ġ f", "o x", "Ġf ox"]
tokenizer = keras_nlp.models.GPT2Tokenizer(
vocabulary=vocab,
merges=merges,
)
preprocessor = keras_nlp.models.GPT2Preprocessor(tokenizer=tokenizer)
preprocessor("The quick brown fox jumped.")
Mapping with tf.data.Dataset.
import keras_nlp
import tensorflow as tf

preprocessor = keras_nlp.models.GPT2Preprocessor.from_preset("gpt2_base_en")
text = tf.constant(["The quick brown fox jumped.", "Call me Ishmael."])
label = tf.constant([1, 1])
# Map labeled single sentences.
ds = tf.data.Dataset.from_tensor_slices((text, label))
ds = ds.map(preprocessor, num_parallel_calls=tf.data.AUTOTUNE)
# Map unlabeled single sentences.
ds = tf.data.Dataset.from_tensor_slices(text)
ds = ds.map(preprocessor, num_parallel_calls=tf.data.AUTOTUNE)
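Feeding the preprocessed output into a backbone. A minimal sketch (assuming the "gpt2_base_en" preset for both the preprocessor and the backbone):
preprocessor = keras_nlp.models.GPT2Preprocessor.from_preset("gpt2_base_en")
backbone = keras_nlp.models.GPT2Backbone.from_preset("gpt2_base_en")
# The output dict has "token_ids" and "padding_mask" keys, which is exactly
# the input format the backbone expects.
features = preprocessor(["The quick brown fox jumped.", "Call me Ishmael."])
hidden_states = backbone(features)  # (batch_size, sequence_length, hidden_dim)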
from_preset method
GPT2Preprocessor.from_preset(preset, **kwargs)
Instantiate a keras_nlp.models.Preprocessor from a model preset.
A preset is a directory of configs, weights and other file assets used
to save and load a pre-trained model. The preset can be passed as
one of:

1. a built-in preset identifier like 'bert_base_en'
2. a Kaggle Models handle like 'kaggle://user/bert/keras/bert_base_en'
3. a Hugging Face handle like 'hf://user/bert_base_en'
4. a path to a local preset directory like './bert_base_en'

For any Preprocessor subclass, you can run cls.presets.keys() to
list all built-in presets available on the class.
As there are usually multiple preprocessing classes for a given model,
this method should be called on a specific subclass like
keras_nlp.models.BertPreprocessor.from_preset().
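For example, to see which GPT-2 presets are available before loading one (a minimal sketch; the printed names correspond to the preset table below):
# List every built-in preset registered on the class.
print(keras_nlp.models.GPT2Preprocessor.presets.keys())
# Then load one of them by name.
preprocessor = keras_nlp.models.GPT2Preprocessor.from_preset("gpt2_base_en")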
Arguments

- preset: string. A built-in preset identifier, a Kaggle Models handle, a Hugging Face handle, or a path to a local directory.
Examples
# Load a preprocessor for Gemma generation.
preprocessor = keras_nlp.models.GemmaCausalLMPreprocessor.from_preset(
"gemma_2b_en",
)
# Load a preprocessor for Bert classification.
preprocessor = keras_nlp.models.BertPreprocessor.from_preset(
"bert_base_en",
)
| Preset name | Parameters | Description |
|---|---|---|
| gpt2_base_en | 124.44M | 12-layer GPT-2 model where case is maintained. Trained on WebText. |
| gpt2_medium_en | 354.82M | 24-layer GPT-2 model where case is maintained. Trained on WebText. |
| gpt2_large_en | 774.03M | 36-layer GPT-2 model where case is maintained. Trained on WebText. |
| gpt2_extra_large_en | 1.56B | 48-layer GPT-2 model where case is maintained. Trained on WebText. |
| gpt2_base_en_cnn_dailymail | 124.44M | 12-layer GPT-2 model where case is maintained. Finetuned on the CNN/DailyMail summarization dataset. |
tokenizer property
keras_nlp.models.GPT2Preprocessor.tokenizer
The tokenizer used to tokenize strings.
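A minimal sketch of using the property (the "gpt2_base_en" preset is illustrative); the attached tokenizer can also be used on its own, e.g. to round-trip text:
preprocessor = keras_nlp.models.GPT2Preprocessor.from_preset("gpt2_base_en")
# The underlying GPT2Tokenizer is exposed directly on the preprocessor.
tokenizer = preprocessor.tokenizer
token_ids = tokenizer("The quick brown fox jumped.")
tokenizer.detokenize(token_ids)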