DistributedGradientBoostedTreesLearner
DistributedGradientBoostedTreesLearner
DistributedGradientBoostedTreesLearner(label: str, task: Task = CLASSIFICATION, *, weights: Optional[str] = None, ranking_group: Optional[str] = None, uplift_treatment: Optional[str] = None, features: Optional[ColumnDefs] = None, include_all_columns: bool = False, max_vocab_count: int = 2000, min_vocab_frequency: int = 5, discretize_numerical_columns: bool = False, num_discretized_numerical_bins: int = 255, max_num_scanned_rows_to_infer_semantic: int = 100000, max_num_scanned_rows_to_compute_statistics: int = 100000, data_spec: Optional[DataSpecification] = None, apply_link_function: bool = True, force_numerical_discretization: bool = False, max_depth: int = 6, max_unique_values_for_discretized_numerical: int = 16000, maximum_model_size_in_memory_in_bytes: float = -1.0, maximum_training_duration_seconds: float = -1.0, min_examples: int = 5, num_candidate_attributes: Optional[int] = -1, num_candidate_attributes_ratio: Optional[float] = None, num_trees: int = 300, pure_serving_model: bool = False, random_seed: int = 123456, shrinkage: float = 0.1, use_hessian_gain: bool = False, worker_logs: bool = True, workers: Optional[Sequence[str]] = None, resume_training: bool = False, resume_training_snapshot_interval_seconds: int = 1800, working_dir: Optional[str] = None, num_threads: Optional[int] = None, tuner: Optional[AbstractTuner] = None, explicit_args: Optional[Set[str]] = None)
Bases: GenericLearner
Distributed Gradient Boosted Trees learning algorithm.
Exact distributed version of the Gradient Boosted Tree learning algorithm. See the documentation of the non-distributed Gradient Boosted Tree learning algorithm for an introduction to GBTs.
Usage example:
import ydf
import pandas as pd
dataset = pd.read_csv("project/dataset.csv")
model = ydf.DistributedGradientBoostedTreesLearner(label="label").train(dataset)
print(model.describe())
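To actually run distributed training, the learner also needs a working_dir and a list of workers (see the attribute descriptions below). The following is a minimal sketch, assuming worker processes were started beforehand with ydf.start_worker and that the dataset is given as a file path; the addresses, port, paths and label column name are placeholders:
import ydf

# On each worker machine, a worker process must already be running, e.g.:
#   import ydf; ydf.start_worker(2001)

learner = ydf.DistributedGradientBoostedTreesLearner(
    label="label",  # placeholder label column name
    working_dir="/tmp/ydf_working_dir",  # required for distributed training
    workers=["10.0.0.1:2001", "10.0.0.2:2001"],  # placeholder worker addresses
)
model = learner.train("csv:project/dataset.csv")  # path-based dataset (assumed here)
print(model.describe())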
Hyperparameters are configured to give reasonable results for typical
datasets. Hyperparameters can also be modified manually (see the descriptions
below) or by applying the hyperparameter templates available with
DistributedGradientBoostedTreesLearner.hyperparameter_templates()
(see this function's documentation for details).
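For example, a template returned by hyperparameter_templates() can be unpacked into the constructor. A minimal sketch, assuming at least one template is defined for this learner; the template key and label column name are placeholders:
import ydf

# List the hyperparameter templates available for this learner.
templates = ydf.DistributedGradientBoostedTreesLearner.hyperparameter_templates()
print(templates)

# Apply one of them ("some_template" is a placeholder; use a key returned above).
learner = ydf.DistributedGradientBoostedTreesLearner(
    label="label", **templates["some_template"]
)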
Attributes:
label: Label of the dataset. The label column
should not be identified as a feature in the features
parameter.
task: Task to solve (e.g. Task.CLASSIFICATION, Task.REGRESSION,
Task.RANKING, Task.CATEGORICAL_UPLIFT, Task.NUMERICAL_UPLIFT).
weights: Name of a feature that identifies the weight of each example. If
weights are not specified, unit weights are assumed. The weight column
should not be identified as a feature in the features
parameter.
ranking_group: Only for task=Task.RANKING. Name of a feature that identifies
queries in a query/document ranking task. The ranking group should not be
identified as a feature in the features parameter.
uplift_treatment: Only for task=Task.CATEGORICAL_UPLIFT and
task=Task.NUMERICAL_UPLIFT. Name of a numerical feature that identifies the
treatment in an uplift problem. The value 0 is reserved for the control
treatment. Currently, only 0/1 binary treatments are supported.
features: If None, all columns are used as features. The semantic of the
features is determined automatically. Otherwise, if
include_all_columns=False (default), only the columns listed in features
are imported. If include_all_columns=True, all the columns are imported as
features and only the semantic of the columns NOT listed in features is
determined automatically. If specified, defines the order of the features
- any non-listed features are appended in-order after the specified
features (if include_all_columns=True).
The label, weights, uplift treatment and ranking_group columns should not
be specified as features.
include_all_columns: See features.
max_vocab_count: Maximum size of the vocabulary of CATEGORICAL and
CATEGORICAL_SET columns stored as strings. If more unique values exist,
only the most frequent values are kept, and the remaining values are
considered as out-of-vocabulary.
min_vocab_frequency: Minimum number of occurrences of a value for CATEGORICAL
and CATEGORICAL_SET columns. Values observed fewer than min_vocab_frequency
times are considered out-of-vocabulary.
discretize_numerical_columns: If true, discretize all the numerical columns
before training. Discretized numerical columns are faster to train with,
but they can have a negative impact on the model quality. Using
discretize_numerical_columns=True is equivalent to setting the column
semantic to DISCRETIZED_NUMERICAL in the features argument. See the
definition of DISCRETIZED_NUMERICAL for more details.
num_discretized_numerical_bins: Number of bins used when discretizing
numerical columns.
max_num_scanned_rows_to_infer_semantic: Number of rows to scan when
inferring the column's semantic if it is not explicitly specified. Only
used when reading from file, in-memory datasets are always read in full.
Setting this to a lower number will speed up dataset reading, but might
result in incorrect column semantics. Set to -1 to scan the entire
dataset.
max_num_scanned_rows_to_compute_statistics: Number of rows to scan when
computing a column's statistics. Only used when reading from file,
in-memory datasets are always read in full. A column's statistics include
the dictionary for categorical features and the mean / min / max for
numerical features. Setting this to a lower number will speed up dataset
reading, but skew statistics in the dataspec, which can hurt model quality
(e.g. if an important category of a categorical feature is considered
OOV). Set to -1 to scan the entire dataset.
data_spec: Dataspec to be used (advanced). If a data spec is given,
features, include_all_columns, max_vocab_count, min_vocab_frequency,
discretize_numerical_columns and num_discretized_numerical_bins
will be ignored.
apply_link_function: If true, applies the link function (a.k.a. activation
function), if any, before returning the model prediction. If false,
returns the pre-link function model output.
For example, in the case of binary classification, the pre-link function
output is a logit while the post-link function output is a probability.
Default: True.
force_numerical_discretization: If false, only the numerical columns
satisfying "max_unique_values_for_discretized_numerical" will be
discretized. If true, all the numerical columns will be discretized.
Columns with more than "max_unique_values_for_discretized_numerical"
unique values will be approximated with
"max_unique_values_for_discretized_numerical" bins. This parameter will
impact the model training. Default: False.
max_depth: Maximum depth of the tree. max_depth=1 means that all trees
will be roots. max_depth=-1 means that tree depth is not restricted by
this parameter. Values <= -2 will be ignored. Default: 6.
max_unique_values_for_discretized_numerical: Maximum number of unique values
of a numerical feature to allow its pre-discretization. In case of large
datasets, discretized numerical features with a small number of unique
values are more efficient to learn than classical / non-discretized
numerical features. This parameter does not impact the final model.
However, it can speed up or slow down the training. Default: 16000.
maximum_model_size_in_memory_in_bytes: Limit the size of the model when
stored in RAM. Different algorithms can enforce this limit differently.
Note that when models are compiled into an inference engine, the size of
the engine is generally much smaller than the original model.
Default: -1.0.
maximum_training_duration_seconds: Maximum training duration of the model
expressed in seconds. Each learning algorithm is free to use this
parameter as it sees fit. Enabling maximum training duration makes the
model training non-deterministic. Default: -1.0.
min_examples: Minimum number of examples in a node. Default: 5.
num_candidate_attributes: Number of unique valid attributes tested for each
node. An attribute is valid if it has at least one valid split. If
num_candidate_attributes=0, the value is set to the classical default
value for Random Forest: sqrt(number of input attributes) in case of
classification and number_of_input_attributes / 3 in case of
regression. If num_candidate_attributes=-1, all the attributes are
tested. Default: -1.
num_candidate_attributes_ratio: Ratio of attributes tested at each node. If
set, it is equivalent to num_candidate_attributes =
number_of_input_features x num_candidate_attributes_ratio. The possible
values are in ]0, 1] as well as -1. If not set or equal to -1,
num_candidate_attributes is used. Default: None.
num_trees: Maximum number of decision trees. The effective number of
trained trees can be smaller if early stopping is enabled. Default: 300.
pure_serving_model: Clear the model from any information that is not
required for model serving. This includes debugging, model interpretation
and other metadata. The size of the serialized model can be reduced
significantly (a 50% model size reduction is common). This parameter has
no impact on the quality, serving speed or RAM usage of model serving.
Default: False.
random_seed: Random seed for the training of the model. Learners are
expected to be deterministic given the random seed. Default: 123456.
shrinkage: Coefficient applied to each tree prediction. A small value
(0.02) tends to give more accurate results (assuming enough trees are
trained), but results in larger models. Analogous to neural network
learning rate. Fixed to 1.0 for DART models. Default: 0.1.
use_hessian_gain: If true, uses a formulation of split gain with a hessian
term, i.e. optimizes the splits to minimize the variance of "gradient /
hessian". Available for all losses except regression. Default: False.
worker_logs: If true, workers will print training logs. Default: True.
workers: If set, enable distributed training. "workers" is the list of IP
addresses of the workers. A worker is a process running
ydf.start_worker(port).
resume_training: If true, the model training resumes from the checkpoint
stored in the working_dir directory. If working_dir does not contain any
model checkpoint, the training starts from the beginning.
Resuming training is useful in the following situations: (1) The training
was interrupted by the user (e.g. ctrl+c or "stop" button in a notebook)
or rescheduled, or (2) the hyper-parameters of the learner were changed,
e.g. increasing the number of trees (see the configuration sketch after
this attribute list).
resume_training_snapshot_interval_seconds: Indicative number of seconds in
between snapshots when resume_training=True. Might be ignored by some
learners.
working_dir: Path to a directory available for the learning algorithm to
store intermediate computation results. Depending on the learning
algorithm and parameters, the working_dir might be optional, required, or
ignored. For instance, distributed training algorithms always need a
"working_dir", and the gradient boosted tree and hyper-parameter tuners
will export artefacts to the "working_dir" if provided.
num_threads: Number of threads used to train the model. Different learning
algorithms use multi-threading differently and with different degrees of
efficiency. If None, num_threads will be automatically set to the
number of processors (up to a maximum of 32; or set to 6 if the number of
processors is not available). Making num_threads significantly larger
than the number of processors can slow down the training speed. The
default value logic might change in the future.
tuner: If set, automatically select the best hyperparameters using the
provided tuner. When using distributed training, the tuning is
distributed.
explicit_args: Helper argument for internal use. Throws if supplied
explicitly by the user.
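A short configuration sketch combining several of the attributes above; all values, the label column name and the directory path are illustrative placeholders:
import ydf

learner = ydf.DistributedGradientBoostedTreesLearner(
    label="label",                  # placeholder label column
    task=ydf.Task.CLASSIFICATION,
    num_trees=500,                  # train up to 500 trees
    shrinkage=0.05,                 # smaller shrinkage; usually needs more trees
    discretize_numerical_columns=True,  # faster training, possibly lower quality
    working_dir="/tmp/ydf_working_dir",
    resume_training=True,           # resume from checkpoints found in working_dir
    resume_training_snapshot_interval_seconds=600,
    num_threads=16,
)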
hyperparameters
property
A (mutable) dictionary of this learner's hyperparameters.
This object can be used to inspect or modify hyperparameters after creating
the learner. Modifying hyperparameters after constructing the learner is
suitable for some advanced use cases. Since this approach bypasses some
feasibility checks for the given set of hyperparameters, it is generally
better to re-create the learner for each model. The current set of
hyperparameters can be validated manually with validate_hyperparameters().
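A short sketch of this workflow; the hyperparameter key used below is one of the constructor arguments documented above:
import ydf

learner = ydf.DistributedGradientBoostedTreesLearner(label="label")

# Inspect the learner's current hyperparameters.
print(learner.hyperparameters)

# Advanced usage: modify one of them in place. This bypasses some checks,
# so re-creating the learner is usually preferable.
learner.hyperparameters["num_trees"] = 500

# Validate the modified set manually.
learner.validate_hyperparameters()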
cross_validation
cross_validation(ds: InputDataset, folds: int = 10, bootstrapping: Union[bool, int] = False, parallel_evaluations: int = 1) -> Evaluation
Cross-validates the learner and returns the evaluation.
Usage example:
import pandas as pd
import ydf
dataset = pd.read_csv("my_dataset.csv")
learner = ydf.RandomForestLearner(label="label")
evaluation = learner.cross_validation(dataset)
# In a notebook, display an interactive evaluation
evaluation
# Print the evaluation
print(evaluation)
# Look at specific metrics
print(evaluation.accuracy)
Parameters:
Name | Type | Description | Default |
---|---|---|---|
ds | InputDataset | Dataset for the cross-validation. | required |
folds | int | Number of cross-validation folds. | 10 |
bootstrapping | Union[bool, int] | Controls whether bootstrapping is used to evaluate the confidence intervals and statistical tests (i.e., all the metrics ending with "[B]"). If set to false, bootstrapping is disabled. If set to true, bootstrapping is enabled and 2000 bootstrapping samples are used. If set to an integer, it specifies the number of bootstrapping samples to use. In this case, if the number is less than 100, an error is raised as bootstrapping will not yield useful results. | False |
parallel_evaluations | int | Number of models to train and evaluate in parallel using multi-threading. Note that each model is potentially already trained with multithreading (see num_threads). | 1 |
Returns:
Type | Description |
---|---|
Evaluation | The cross-validation evaluation. |
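For example, the number of bootstrapping samples and the evaluation parallelism can be adjusted; a sketch with illustrative values:
import pandas as pd
import ydf

dataset = pd.read_csv("my_dataset.csv")
learner = ydf.RandomForestLearner(label="label")

# Use 5000 bootstrapping samples for the confidence intervals and evaluate
# two folds in parallel.
evaluation = learner.cross_validation(
    dataset, folds=10, bootstrapping=5000, parallel_evaluations=2
)
print(evaluation)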
hyperparameter_templates
classmethod
train
train(ds: InputDataset, valid: Optional[InputDataset] = None, verbose: Optional[Union[int, bool]] = None) -> GradientBoostedTreesModel
Trains a model on the given dataset.
Options for dataset reading are given on the learner. Consult the documentation of the learner or ydf.create_vertical_dataset() for additional information on dataset reading in YDF.
Usage example:
import ydf
import pandas as pd
train_ds = pd.read_csv(...)
learner = ydf.DistributedGradientBoostedTreesLearner(label="label")
model = learner.train(train_ds)
print(model.summary())
If training is interrupted (for example, by interrupting the cell execution in Colab), the model will be returned to the state it was in at the moment of interruption.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
ds | InputDataset | Training dataset. | required |
valid | Optional[InputDataset] | Optional validation dataset. Some learners, such as Random Forest, do not need a validation dataset. Some learners, such as GradientBoostedTrees, automatically extract a validation dataset from the training dataset if the validation dataset is not provided. | None |
verbose | Optional[Union[int, bool]] | Verbose level during training. If None, uses the global verbose level of ydf.verbose(). | None |
Returns:
Type | Description |
---|---|
GradientBoostedTreesModel | A trained model. |
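For example, an explicit validation dataset can be passed alongside the training dataset; a sketch with placeholder file names:
import pandas as pd
import ydf

train_ds = pd.read_csv("train.csv")
valid_ds = pd.read_csv("valid.csv")

learner = ydf.DistributedGradientBoostedTreesLearner(label="label")

# Provide the validation dataset explicitly instead of relying on an
# automatically extracted one.
model = learner.train(train_ds, valid=valid_ds)
print(model.describe())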
validate_hyperparameters
Returns None if the hyperparameters are valid, raises otherwise.
This method is called automatically before training, but users may call it to fail early. It makes sense to call this method when manually changing the hyper-parameters of the learner. This is a relatively advanced approach that is not recommended (it is better to re-create the learner in most cases).
Usage example:
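A minimal sketch of failing early on a manual hyperparameter change; the value below is only illustrative:
import ydf

learner = ydf.DistributedGradientBoostedTreesLearner(label="label")

# Manually change a hyperparameter after construction (advanced usage).
learner.hyperparameters["shrinkage"] = 0.05

# Fail early, before launching a potentially long distributed training.
learner.validate_hyperparameters()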