XLMRobertaMaskedLM class

```python
keras_nlp.models.XLMRobertaMaskedLM(backbone, preprocessor=None, **kwargs)
```
An end-to-end XLM-RoBERTa model for the masked language modeling task.
This model will train XLM-RoBERTa on a masked language modeling task.
The model will predict labels for a number of masked tokens in the
input data. For usage of this model with pre-trained weights, see the
from_preset() method.
This model can optionally be configured with a preprocessor layer, in
which case inputs can be raw string features during fit(), predict(),
and evaluate(). Inputs will be tokenized and dynamically masked during
training and evaluation. This is done by default when creating the model
with from_preset().
Disclaimer: Pre-trained models are provided on an "as is" basis, without warranties or conditions of any kind. The underlying model is provided by a third party and subject to a separate license.
Arguments

- backbone: A keras_nlp.models.XLMRobertaBackbone instance.
- preprocessor: A keras_nlp.models.XLMRobertaMaskedLMPreprocessor or None. If None, this model will not apply preprocessing, and inputs should be preprocessed before calling the model.

Examples
Raw string inputs and pretrained backbone.
```python
import keras
import keras_nlp

# Create a dataset with raw string features. Labels are inferred.
features = ["The quick brown fox jumped.", "I forgot my homework."]

# Pretrained language model on an MLM task.
masked_lm = keras_nlp.models.XLMRobertaMaskedLM.from_preset(
    "xlm_roberta_base_multi",
)
masked_lm.fit(x=features, batch_size=2)

# Re-compile (e.g., with a new learning rate).
masked_lm.compile(
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer=keras.optimizers.Adam(5e-5),
    jit_compile=True,
)
# Access backbone programmatically (e.g., to change trainable).
masked_lm.backbone.trainable = False
# Fit again.
masked_lm.fit(x=features, batch_size=2)
```
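Because the preprocessor is attached, the same raw strings can also be passed to predict() and evaluate(), as noted above. A minimal sketch of inference (the output shape comment is an assumption, not taken from the source):

```python
# Predict on raw strings; the attached preprocessor tokenizes and masks.
# Output holds per-mask-position logits over the vocabulary.
preds = masked_lm.predict(features)
```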
Preprocessed integer data.
```python
import numpy as np

# Create a preprocessed dataset where 0 is the mask token.
features = {
"token_ids": np.array([[1, 2, 0, 4, 0, 6, 7, 8]] * 2),
"padding_mask": np.array([[1, 1, 1, 1, 1, 1, 1, 1]] * 2),
"mask_positions": np.array([[2, 4]] * 2)
}
# Labels are the original masked values.
labels = [[3, 5]] * 2
masked_lm = keras_nlp.models.XLMRobertaMaskedLM.from_preset(
"xlm_roberta_base_multi",
preprocessor=None,
)
masked_lm.fit(x=features, y=labels, batch_size=2)
```
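A model can also be assembled from a custom backbone and preprocessor rather than a preset. The following is a minimal sketch, assuming a SentencePiece proto file `proto.spm` and illustrative (not prescriptive) architecture hyperparameters:

```python
import keras_nlp

# Tokenizer built from a SentencePiece proto (hypothetical file path).
tokenizer = keras_nlp.models.XLMRobertaTokenizer(proto="proto.spm")
preprocessor = keras_nlp.models.XLMRobertaMaskedLMPreprocessor(
    tokenizer,
    sequence_length=128,
)
# Randomly initialized backbone; hyperparameters here are illustrative.
backbone = keras_nlp.models.XLMRobertaBackbone(
    vocabulary_size=250002,
    num_layers=4,
    num_heads=4,
    hidden_dim=256,
    intermediate_dim=512,
    max_sequence_length=128,
)
masked_lm = keras_nlp.models.XLMRobertaMaskedLM(
    backbone=backbone,
    preprocessor=preprocessor,
)
```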
from_preset method

```python
XLMRobertaMaskedLM.from_preset(preset, load_weights=True, **kwargs)
```
Instantiate a keras_nlp.models.Task from a model preset.
A preset is a directory of configs, weights and other file assets used
to save and load a pre-trained model. The preset can be passed as
one of:

1. a built-in preset identifier like 'bert_base_en'
2. a Kaggle Models handle like 'kaggle://user/bert/keras/bert_base_en'
3. a Hugging Face handle like 'hf://user/bert_base_en'
4. a path to a local preset directory like './bert_base_en'

For any Task subclass, you can run cls.presets.keys() to list all
built-in presets available on the class.
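For instance, a quick usage sketch of the cls.presets.keys() call mentioned above:

```python
import keras_nlp

# List every built-in preset registered on this task class.
print(keras_nlp.models.XLMRobertaMaskedLM.presets.keys())
```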
This constructor can be called in one of two ways. Either from a task
specific base class like keras_nlp.models.CausalLM.from_preset(), or
from a model class like keras_nlp.models.BertClassifier.from_preset().
If calling from a base class, the subclass of the returned object
will be inferred from the config in the preset directory.
Arguments

- preset: string. A built-in preset identifier, a Kaggle Models handle, a Hugging Face handle, or a path to a local directory.
- load_weights: bool. If True, the weights will be loaded into the model architecture. If False, the weights will be randomly initialized.

Examples
```python
# Load a Gemma generative task.
causal_lm = keras_nlp.models.CausalLM.from_preset(
    "gemma_2b_en",
)

# Load a Bert classification task.
model = keras_nlp.models.Classifier.from_preset(
    "bert_base_en",
    num_classes=2,
)
```
| Preset name | Parameters | Description |
|---|---|---|
| xlm_roberta_base_multi | 277.45M | 12-layer XLM-RoBERTa model where case is maintained. Trained on CommonCrawl in 100 languages. |
| xlm_roberta_large_multi | 558.84M | 24-layer XLM-RoBERTa model where case is maintained. Trained on CommonCrawl in 100 languages. |
backbone property

```python
keras_nlp.models.XLMRobertaMaskedLM.backbone
```
A keras_nlp.models.Backbone model with the core architecture.
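For example, the backbone can be pulled out of the task and run on its own to produce dense hidden states; a minimal sketch, assuming already tokenized and padded inputs (the token ids below are illustrative placeholders):

```python
import numpy as np
import keras_nlp

masked_lm = keras_nlp.models.XLMRobertaMaskedLM.from_preset(
    "xlm_roberta_base_multi",
)
# Placeholder token ids and padding mask for a batch of 2.
features = {
    "token_ids": np.array([[0, 581, 63773, 2, 1, 1]] * 2),
    "padding_mask": np.array([[1, 1, 1, 1, 0, 0]] * 2),
}
# Shape: (batch_size, sequence_length, hidden_dim).
hidden_states = masked_lm.backbone.predict(features)
```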
preprocessor property

```python
keras_nlp.models.XLMRobertaMaskedLM.preprocessor
```
A keras_nlp.models.Preprocessor layer used to preprocess input.
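The attached preprocessor can also be called directly on raw strings. A sketch of standalone use, assuming the MaskedLM preprocessor's usual (x, y, sample_weight) return convention:

```python
import keras_nlp

masked_lm = keras_nlp.models.XLMRobertaMaskedLM.from_preset(
    "xlm_roberta_base_multi",
)
# Tokenize and dynamically mask a raw string batch. x holds token_ids,
# padding_mask, and mask_positions; y holds the original masked ids.
x, y, sample_weight = masked_lm.preprocessor(
    ["The quick brown fox jumped."]
)
```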