DebertaV3Backbone
```python
keras_nlp.models.DebertaV3Backbone(
    vocabulary_size,
    num_layers,
    num_heads,
    hidden_dim,
    intermediate_dim,
    dropout=0.1,
    max_sequence_length=512,
    bucket_size=256,
    dtype=None,
    **kwargs
)
```
DeBERTa encoder network.
This network implements a bi-directional Transformer-based encoder as described in "DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing". It includes the embedding lookups and transformer layers, but does not include the enhanced masked decoding head used during pretraining.
The default constructor gives a fully customizable, randomly initialized DeBERTa encoder with any number of layers, heads, and embedding dimensions. To load preset architectures and weights, use the `from_preset` constructor.
Note: `DebertaV3Backbone` has a performance issue on TPUs, and we recommend other models for TPU training and inference.
Disclaimer: Pre-trained models are provided on an "as is" basis, without warranties or conditions of any kind. The underlying model is provided by a third party and subject to a separate license, available here.
Arguments

- `vocabulary_size`: int. The size of the token vocabulary.
- `num_layers`: int. The number of transformer layers.
- `num_heads`: int. The number of attention heads for each transformer layer.
- `hidden_dim`: int. The hidden size of the transformer encoding layers.
- `intermediate_dim`: int. The output dimension of the first Dense layer in a two-layer feedforward network for each transformer.
- `dropout`: float. The dropout probability for the DeBERTa model.
- `max_sequence_length`: int. The maximum sequence length this encoder can consume. The sequence length of the input must be less than `max_sequence_length`.
- `bucket_size`: int. The size of the relative position buckets. Generally should be `max_sequence_length // 2`.
- `dtype`: string or `keras.mixed_precision.DTypePolicy`. The dtype to use for model computations and weights. Note that some computations, such as softmax and layer normalization, will always be done at float32 precision regardless of dtype.

Example
```python
import numpy as np

import keras_nlp

input_data = {
    "token_ids": np.ones(shape=(1, 12), dtype="int32"),
    "padding_mask": np.array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0]]),
}

# Pretrained DeBERTa encoder.
model = keras_nlp.models.DebertaV3Backbone.from_preset(
    "deberta_v3_base_en",
)
model(input_data)

# Randomly initialized DeBERTa encoder with custom config.
model = keras_nlp.models.DebertaV3Backbone(
    vocabulary_size=128100,
    num_layers=12,
    num_heads=6,
    hidden_dim=384,
    intermediate_dim=1536,
    max_sequence_length=512,
    bucket_size=256,
)
# Call the model on the input data.
model(input_data)
```
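The backbone returns the final transformer sequence output with shape `(batch_size, sequence_length, hidden_dim)`, so it can serve as a feature extractor for downstream tasks. Below is a minimal sketch of attaching a pooled classification head with the Keras functional API; the `num_classes` value and head layers are illustrative assumptions, not part of the library, and a real head should also account for `padding_mask` when pooling:

```python
import keras  # Keras 3; older setups may use `from tensorflow import keras`.

backbone = keras_nlp.models.DebertaV3Backbone.from_preset(
    "deberta_v3_base_en",
)

# Symbolic sequence output: (batch_size, sequence_length, hidden_dim).
sequence_output = backbone.output

# Hypothetical head: mean-pool over the sequence, then classify.
num_classes = 2  # Illustrative; not part of the backbone.
pooled = keras.layers.GlobalAveragePooling1D()(sequence_output)
outputs = keras.layers.Dense(num_classes, activation="softmax")(pooled)

classifier = keras.Model(backbone.input, outputs)
classifier.summary()
```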
from_preset method

```python
DebertaV3Backbone.from_preset(preset, load_weights=True, **kwargs)
```
Instantiate a `keras_nlp.models.Backbone` from a model preset.

A preset is a directory of configs, weights and other file assets used to save and load a pre-trained model. The `preset` can be passed as one of:

- a built-in preset identifier like `'bert_base_en'`
- a Kaggle Models handle like `'kaggle://user/bert/keras/bert_base_en'`
- a Hugging Face handle like `'hf://user/bert_base_en'`
- a path to a local preset directory like `'./bert_base_en'`
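For instance, each of these is a valid `preset` argument; the `kaggle://` and `hf://` handles reuse the placeholder names from the list above and would need real user and model names to resolve:

```python
# Built-in preset identifier.
keras_nlp.models.Backbone.from_preset("bert_base_en")
# Kaggle Models handle (placeholder names).
keras_nlp.models.Backbone.from_preset("kaggle://user/bert/keras/bert_base_en")
# Hugging Face handle (placeholder names).
keras_nlp.models.Backbone.from_preset("hf://user/bert_base_en")
# Local preset directory.
keras_nlp.models.Backbone.from_preset("./bert_base_en")
```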
This constructor can be called in one of two ways: either from the base class, like `keras_nlp.models.Backbone.from_preset()`, or from a model class, like `keras_nlp.models.GemmaBackbone.from_preset()`.
If calling from the base class, the subclass of the returned object will be inferred from the config in the preset directory.
For any `Backbone` subclass, you can run `cls.presets.keys()` to list all built-in presets available on the class.
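As a quick illustration, this prints the built-in preset names for this class (the `deberta_v3_*` identifiers from the table below):

```python
# List every built-in preset registered on DebertaV3Backbone.
print(keras_nlp.models.DebertaV3Backbone.presets.keys())
```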
Arguments

- `preset`: string. A built-in preset identifier, a Kaggle Models handle, a Hugging Face handle, or a path to a local directory.
- `load_weights`: bool. If `True`, the weights will be loaded into the model architecture. If `False`, the weights will be randomly initialized.

Examples
```python
# Load a Gemma backbone with pre-trained weights.
model = keras_nlp.models.Backbone.from_preset(
    "gemma_2b_en",
)

# Load a Bert backbone with a pre-trained config and random weights.
model = keras_nlp.models.Backbone.from_preset(
    "bert_base_en",
    load_weights=False,
)
```
| Preset name | Parameters | Description |
| --- | --- | --- |
| deberta_v3_extra_small_en | 70.68M | 12-layer DeBERTaV3 model where case is maintained. Trained on English Wikipedia, BookCorpus and OpenWebText. |
| deberta_v3_small_en | 141.30M | 6-layer DeBERTaV3 model where case is maintained. Trained on English Wikipedia, BookCorpus and OpenWebText. |
| deberta_v3_base_en | 183.83M | 12-layer DeBERTaV3 model where case is maintained. Trained on English Wikipedia, BookCorpus and OpenWebText. |
| deberta_v3_large_en | 434.01M | 24-layer DeBERTaV3 model where case is maintained. Trained on English Wikipedia, BookCorpus and OpenWebText. |
| deberta_v3_base_multi | 278.22M | 12-layer DeBERTaV3 model where case is maintained. Trained on the 2.5TB multilingual CC100 dataset. |
token_embedding property

```python
keras_nlp.models.DebertaV3Backbone.token_embedding
```

A `keras.layers.Embedding` instance for embedding token ids.

This layer embeds integer token ids to the hidden dim of the model.
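A minimal sketch of using this property directly, assuming `keras_nlp` and `numpy` are imported as in the examples above:

```python
# Reuse the backbone's token embedding to embed a batch of token ids.
backbone = keras_nlp.models.DebertaV3Backbone.from_preset(
    "deberta_v3_base_en",
)
token_ids = np.ones(shape=(1, 12), dtype="int32")
embedded = backbone.token_embedding(token_ids)
# Output shape: (1, 12, hidden_dim), where hidden_dim is 768 for this preset.
print(embedded.shape)
```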