
IsolationForestLearner

IsolationForestLearner(label: Optional[str] = None, task: Task = ANOMALY_DETECTION, *, weights: Optional[str] = None, ranking_group: Optional[str] = None, uplift_treatment: Optional[str] = None, features: Optional[ColumnDefs] = None, include_all_columns: bool = False, max_vocab_count: int = 2000, min_vocab_frequency: int = 5, discretize_numerical_columns: bool = False, num_discretized_numerical_bins: int = 255, max_num_scanned_rows_to_infer_semantic: int = 100000, max_num_scanned_rows_to_compute_statistics: int = 100000, data_spec: Optional[DataSpecification] = None, max_depth: int = -2, min_examples: int = 5, num_trees: int = 300, pure_serving_model: bool = False, random_seed: int = 123456, sparse_oblique_normalization: Optional[str] = None, sparse_oblique_projection_density_factor: Optional[float] = None, sparse_oblique_weights: Optional[str] = None, split_axis: str = 'AXIS_ALIGNED', subsample_count: Optional[int] = 256, subsample_ratio: Optional[float] = None, working_dir: Optional[str] = None, num_threads: Optional[int] = None, tuner: Optional[AbstractTuner] = None, explicit_args: Optional[Set[str]] = None)

Bases: GenericLearner

Isolation Forest learning algorithm.

An Isolation Forest is a collection of decision trees trained without labels and independently of each other to partition the feature space. The Isolation Forest prediction is an anomaly score that indicates whether an example originates from the same distribution as the training examples. We refer to Isolation Forest as both the original algorithm by Liu et al. and its extensions.

Usage example:

import ydf
import pandas as pd

dataset = pd.read_csv("project/dataset.csv")

model = ydf.IsolationForestLearner().train(dataset)

print(model.describe())

Hyperparameters are configured to give reasonable results for typical datasets. Hyperparameters can also be modified manually (see the descriptions below) or by applying the hyperparameter templates available with IsolationForestLearner.hyperparameter_templates() (see this function's documentation for details).
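For example, a few of the hyperparameters from the constructor signature above can be overridden directly; a minimal sketch, continuing the usage example above (the values are purely illustrative, not tuned recommendations):

import ydf

# Illustrative overrides only; the defaults are usually reasonable.
learner = ydf.IsolationForestLearner(
    num_trees=500,        # default: 300
    subsample_count=512,  # examples sampled per tree; default: 256
)
model = learner.train(dataset)  # "dataset" as loaded in the example above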

Attributes:

label: Label of the dataset. The label column should not be identified as a feature in the features parameter.

task: Task to solve (e.g. Task.CLASSIFICATION, Task.REGRESSION, Task.RANKING, Task.CATEGORICAL_UPLIFT, Task.NUMERICAL_UPLIFT).

weights: Name of a feature that identifies the weight of each example. If weights are not specified, unit weights are assumed. The weight column should not be identified as a feature in the features parameter.

ranking_group: Only for task=Task.RANKING. Name of a feature that identifies queries in a query/document ranking task. The ranking group should not be identified as a feature in the features parameter.

uplift_treatment: Only for task=Task.CATEGORICAL_UPLIFT and task=Task.NUMERICAL_UPLIFT. Name of a numerical feature that identifies the treatment in an uplift problem. The value 0 is reserved for the control treatment. Currently, only 0/1 binary treatments are supported.

features: If None, all columns are used as features. The semantic of the features is determined automatically. Otherwise, if include_all_columns=False (default), only the columns listed in features are imported. If include_all_columns=True, all the columns are imported as features and only the semantics of the columns not listed in features are determined automatically. If specified, defines the order of the features: any non-listed features are appended in order after the specified features (if include_all_columns=True). The label, weights, uplift_treatment and ranking_group columns should not be specified as features.

include_all_columns: See features.

max_vocab_count: Maximum size of the vocabulary of CATEGORICAL and CATEGORICAL_SET columns stored as strings. If more unique values exist, only the most frequent values are kept, and the remaining values are considered out-of-vocabulary.

min_vocab_frequency: Minimum number of occurrences of a value for CATEGORICAL and CATEGORICAL_SET columns. Values observed fewer than min_vocab_frequency times are considered out-of-vocabulary.

discretize_numerical_columns: If true, discretize all the numerical columns before training. Discretized numerical columns are faster to train with, but they can have a negative impact on the model quality. Using discretize_numerical_columns=True is equivalent to setting the column semantic DISCRETIZED_NUMERICAL in the features argument. See the definition of DISCRETIZED_NUMERICAL for more details.

num_discretized_numerical_bins: Number of bins used when discretizing numerical columns.

max_num_scanned_rows_to_infer_semantic: Number of rows to scan when inferring a column's semantic if it is not explicitly specified. Only used when reading from file; in-memory datasets are always read in full. Setting this to a lower number will speed up dataset reading, but might result in incorrect column semantics. Set to -1 to scan the entire dataset.

max_num_scanned_rows_to_compute_statistics: Number of rows to scan when computing a column's statistics. Only used when reading from file; in-memory datasets are always read in full. A column's statistics include the dictionary for categorical features and the mean / min / max for numerical features. Setting this to a lower number will speed up dataset reading, but skew statistics in the dataspec, which can hurt model quality (e.g. if an important category of a categorical feature is considered OOV). Set to -1 to scan the entire dataset.

data_spec: Dataspec to be used (advanced). If a data spec is given, columns, include_all_columns, max_vocab_count, min_vocab_frequency, discretize_numerical_columns and num_discretized_numerical_bins will be ignored.

max_depth: Maximum depth of the tree. max_depth=1 means that all trees will be roots. max_depth=-1 means that the tree depth is unconstrained by this parameter. max_depth=-2 means that the maximum depth is log2(number of sampled examples per tree) (default). Default: -2.

min_examples: Minimum number of examples in a node. Default: 5.

num_trees: Number of individual decision trees. Increasing the number of trees can increase the quality of the model at the expense of size, training speed, and inference latency. Default: 300.

pure_serving_model: Clear the model from any information that is not required for model serving. This includes debugging, model interpretation and other meta-data. The size of the serialized model can be reduced significantly (a 50% model size reduction is common). This parameter has no impact on the quality, serving speed or RAM usage of model serving. Default: False.

random_seed: Random seed for the training of the model. Learners are expected to be deterministic given the random seed. Default: 123456.

sparse_oblique_normalization: For sparse oblique splits, i.e. split_axis=SPARSE_OBLIQUE. Normalization applied on the features before applying the sparse oblique projections. NONE: No normalization. STANDARD_DEVIATION: Normalize the feature by the estimated standard deviation on the entire train dataset, also known as Z-Score normalization. MIN_MAX: Normalize the feature by the range (i.e. max-min) estimated on the entire train dataset. Default: None.

sparse_oblique_projection_density_factor: Density of the projections as an exponent of the number of features. Independently for each projection, each feature has a probability "projection_density_factor / num_features" to be considered in the projection. The paper "Sparse Projection Oblique Random Forests" (Tomita et al., 2020) calls this parameter lambda and recommends values in [1, 5]. Increasing this value increases training and inference time (on average). This value is best tuned for each dataset. Default: None.

sparse_oblique_weights: For sparse oblique splits, i.e. split_axis=SPARSE_OBLIQUE. Possible values: BINARY: The oblique weights are sampled in {-1,1} (default). CONTINUOUS: The oblique weights are sampled in [-1,1]. Default: None.

split_axis: What structure of split to consider for numerical features. AXIS_ALIGNED: Axis-aligned splits (i.e. one condition at a time); this is the "classical" way to train a tree and the default value. SPARSE_OBLIQUE: Sparse oblique splits (i.e. random splits on a small number of features) from "Sparse Projection Oblique Random Forests", Tomita et al., 2020; this includes the splits described in "Extended Isolation Forests" (Sahand Hariri et al., 2018). Default: "AXIS_ALIGNED".

subsample_count: Number of examples used to grow each tree. Only one of "subsample_ratio" and "subsample_count" can be set. By default, 256 examples are sampled per tree. Note that this parameter also restricts the tree's maximum depth to log2(examples used per tree) unless max_depth is set explicitly. Default: 256.

subsample_ratio: Ratio of the number of training examples used to grow each tree. Only one of "subsample_ratio" and "subsample_count" can be set. By default, 256 examples are sampled per tree. Note that this parameter also restricts the tree's maximum depth to log2(examples used per tree) unless max_depth is set explicitly. Default: None.

working_dir: Path to a directory available for the learning algorithm to store intermediate computation results. Depending on the learning algorithm and parameters, the working_dir might be optional, required, or ignored. For instance, distributed training algorithms always need a "working_dir", and the gradient boosted trees and hyper-parameter tuners will export artefacts to the "working_dir" if provided.

num_threads: Number of threads used to train the model. Different learning algorithms use multi-threading differently and with different degrees of efficiency. If None, num_threads will be automatically set to the number of processors (up to a maximum of 32; or set to 6 if the number of processors is not available). Making num_threads significantly larger than the number of processors can slow down training. The default value logic might change in the future.

tuner: If set, automatically select the best hyperparameters using the provided tuner. When using distributed training, the tuning is distributed.

explicit_args: Helper argument for internal use. Throws if supplied explicitly by the user.
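As a sketch of how the oblique-split attributes described above fit together (the chosen values are illustrative only; see the attribute descriptions for guidance):

import ydf

# Enable oblique splits as in "Extended Isolation Forests".
learner = ydf.IsolationForestLearner(
    split_axis="SPARSE_OBLIQUE",
    sparse_oblique_normalization="STANDARD_DEVIATION",
    sparse_oblique_weights="BINARY",
)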

hyperparameters property

hyperparameters: HyperParameters

A (mutable) dictionary of this learner's hyperparameters.

This object can be used to inspect or modify hyperparameters after creating the learner. Modifying hyperparameters after constructing the learner is suitable for some advanced use cases. Since this approach bypasses some feasibility checks for the given set of hyperparameters, it is generally better to re-create the learner for each model. The current set of hyperparameters can be validated manually with validate_hyperparameters().
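For instance, a learner's hyperparameters can be inspected and adjusted in place before training; a minimal sketch of this advanced workflow (num_trees is one of the documented hyperparameters, the value is illustrative):

import ydf

learner = ydf.IsolationForestLearner()
print(learner.hyperparameters)              # inspect the current values
learner.hyperparameters["num_trees"] = 500  # modify in place (advanced)
learner.validate_hyperparameters()          # check the modified configuration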

cross_validation

cross_validation(ds: InputDataset, folds: int = 10, bootstrapping: Union[bool, int] = False, parallel_evaluations: int = 1) -> Evaluation

Cross-validates the learner and returns the evaluation.

Usage example:

import pandas as pd
import ydf

dataset = pd.read_csv("my_dataset.csv")
learner = ydf.RandomForestLearner(label="label")
evaluation = learner.cross_validation(dataset)

# In a notebook, display an interactive evaluation
evaluation

# Print the evaluation
print(evaluation)

# Look at specific metrics
print(evaluation.accuracy)

Parameters:

ds (InputDataset): Dataset for the cross-validation. Required.

folds (int): Number of cross-validation folds. Default: 10.

bootstrapping (Union[bool, int]): Controls whether bootstrapping is used to evaluate the confidence intervals and statistical tests (i.e., all the metrics ending with "[B]"). If set to false, bootstrapping is disabled. If set to true, bootstrapping is enabled and 2000 bootstrapping samples are used. If set to an integer, it specifies the number of bootstrapping samples to use. In this case, if the number is less than 100, an error is raised as bootstrapping will not yield useful results. Default: False.

parallel_evaluations (int): Number of models to train and evaluate in parallel using multi-threading. Note that each model is potentially already trained with multi-threading (see the num_threads argument of the learner constructor). Default: 1.

Returns:

Evaluation: The cross-validation evaluation.
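As a sketch of the bootstrapping and parallelism options described above, mirroring the usage example (the fold and sample counts are illustrative, not recommendations):

import pandas as pd
import ydf

dataset = pd.read_csv("my_dataset.csv")
learner = ydf.RandomForestLearner(label="label")

# 5 folds, 2000 bootstrapping samples, 2 folds evaluated in parallel.
evaluation = learner.cross_validation(
    dataset, folds=5, bootstrapping=2000, parallel_evaluations=2
)
print(evaluation)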

hyperparameter_templates classmethod

hyperparameter_templates() -> Dict[str, HyperparameterTemplate]

Hyperparameter templates for this Learner.

This learner currently does not provide any hyperparameter templates; this method is provided for consistency with other learners.

Returns:

Dict[str, HyperparameterTemplate]: Empty dictionary.
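Since no templates are currently defined for this learner, the call simply returns an empty dictionary:

import ydf

templates = ydf.IsolationForestLearner.hyperparameter_templates()
print(templates)  # {} — no templates are defined for this learner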

train

train(ds: InputDataset, valid: Optional[InputDataset] = None, verbose: Optional[Union[int, bool]] = None) -> IsolationForestModel

Trains a model on the given dataset.

Options for dataset reading are given on the learner. Consult the documentation of the learner or ydf.create_vertical_dataset() for additional information on dataset reading in YDF.

Usage example:

import ydf
import pandas as pd

train_ds = pd.read_csv(...)

learner = ydf.IsolationForestLearner(label="label")
model = learner.train(train_ds)
print(model.summary())
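In-memory datasets can also be passed in other formats; for example, as a dictionary of NumPy arrays. A minimal sketch with made-up feature names, and no label since the default task is anomaly detection:

import numpy as np
import ydf

# Hypothetical numerical features, for illustration only.
train_ds = {
    "f1": np.random.uniform(size=100),
    "f2": np.random.uniform(size=100),
}
model = ydf.IsolationForestLearner().train(train_ds)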

If training is interrupted (for example, by interrupting the cell execution in Colab), the model will be returned to the state it was in at the moment of interruption.

Parameters:

ds (InputDataset): Training dataset. Required.

valid (Optional[InputDataset]): Optional validation dataset. Some learners, such as Random Forest, do not need a validation dataset. Some learners, such as GradientBoostedTrees, automatically extract a validation dataset from the training dataset if the validation dataset is not provided. Default: None.

verbose (Optional[Union[int, bool]]): Verbosity level during training. If None, uses the global verbosity level of ydf.verbose. Levels are: 0 or False: no logs; 1 or True: print a few logs in a notebook, print all the logs in a terminal; 2: print all the logs on all surfaces. Default: None.

Returns:

IsolationForestModel: A trained model.
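The trained model's predictions are anomaly scores, as described at the top of this page. Continuing the usage example above, a minimal inference sketch (model.predict returns one score per example):

scores = model.predict(train_ds)  # one anomaly score per example
print(scores[:5])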

validate_hyperparameters

validate_hyperparameters()

Returns None if the hyperparameters are valid, raises otherwise.

This method is called automatically before training, but users may call it to fail early. It makes sense to call this method when manually changing the hyperparameters of the learner. This is a relatively advanced approach that is not recommended (it is better to re-create the learner in most cases).

Usage example:

import ydf
import pandas as pd

train_ds = pd.read_csv(...)
test_ds = pd.read_csv(...)

learner = ydf.GradientBoostedTreesLearner(label="label")
learner.hyperparameters["max_depth"] = 20
learner.validate_hyperparameters()
model = learner.train(train_ds)
evaluation = model.evaluate(test_ds)