{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Scale XGBoost\n",
"=============\n",
"\n",
"Dask and XGBoost can work together to train gradient boosted trees in parallel. This notebook shows how to use Dask and XGBoost together.\n",
"\n",
"XGBoost provides a powerful prediction framework, and it works well in practice. It wins Kaggle contests and is popular in industry because it has good performance and can be easily interpreted (i.e., it's easy to find the important features from a XGBoost model).\n",
"\n",
" "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup Dask\n",
"We setup a Dask client, which provides performance and progress metrics via the dashboard.\n",
"\n",
"You can view the dashboard by clicking the link after running the cell."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from dask.distributed import Client\n",
"\n",
"client = Client(n_workers=4, threads_per_worker=1)\n",
"client"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"First we create a bunch of synthetic data, with 100,000 examples and 20 features."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from dask_ml.datasets import make_classification\n",
"\n",
"X, y = make_classification(n_samples=100000, n_features=20,\n",
" chunks=1000, n_informative=4,\n",
" random_state=0)\n",
"X"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Dask-XGBoost works with both arrays and dataframes. For more information on creating dask arrays and dataframes from real data, see documentation on [Dask arrays](https://dask.pydata.org/en/latest/array-creation.html) or [Dask dataframes](https://dask.pydata.org/en/latest/dataframe-create.html)."
]
},
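{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, the same Dask array can be wrapped in a Dask dataframe and used instead (a minimal sketch; the `f0`-style column names are just illustrative):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import dask.dataframe as dd\n",
"\n",
"# Wrap the existing Dask array in a Dask dataframe;\n",
"# dask-xgboost can train on either container.\n",
"df = dd.from_dask_array(X, columns=['f%d' % i for i in range(20)])\n",
"df.head()"
]
},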
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Split data for training and testing\n",
"We split our dataset into training and testing data to aid evaluation by making sure we have a fair test:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from dask_ml.model_selection import train_test_split\n",
"\n",
"X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.15)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, let's try to do something with this data using [dask-xgboost][dxgb].\n",
"\n",
"[dxgb]:https://github.com/dask/dask-xgboost"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train Dask-XGBoost"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import dask\n",
"import xgboost\n",
"import dask_xgboost"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"dask-xgboost is a small wrapper around xgboost. Dask sets XGBoost up, gives XGBoost data and lets XGBoost do it's training in the background using all the workers Dask has available."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's do some training:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"params = {'objective': 'binary:logistic',\n",
" 'max_depth': 4, 'eta': 0.01, 'subsample': 0.5, \n",
" 'min_child_weight': 0.5}\n",
"\n",
"bst = dask_xgboost.train(client, params, X_train, y_train, num_boost_round=10)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Visualize results"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `bst` object is a regular `xgboost.Booster` object. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"bst"
]
},
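{
"cell_type": "markdown",
"metadata": {},
"source": [
"For instance, standard XGBoost operations such as saving the model to disk and reloading it work directly on `bst` (a minimal sketch; the filename here is just illustrative):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Persist the trained booster with plain xgboost (no Dask involved);\n",
"# 'model.json' is only an example name (older xgboost versions may\n",
"# expect the binary '.bin' format instead).\n",
"bst.save_model('model.json')\n",
"\n",
"# Reload it into a fresh Booster object.\n",
"bst2 = xgboost.Booster()\n",
"bst2.load_model('model.json')"
]
},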
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This means all the methods mentioned in the [XGBoost documentation][2] are available. We show two examples to expand on this, but these examples are of XGBoost instead of Dask.\n",
"\n",
"[2]:https://xgboost.readthedocs.io/en/latest/python/python_intro.html#"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Plot feature importance"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%matplotlib inline\n",
"import matplotlib.pyplot as plt\n",
"\n",
"ax = xgboost.plot_importance(bst, height=0.8, max_num_features=9)\n",
"ax.grid(False, axis=\"y\")\n",
"ax.set_title('Estimated feature importance')\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We specified that only 4 features were informative while creating our data, and only 3 features show up as important."
]
},
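{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can cross-check this numerically with plain XGBoost: `Booster.get_score` returns a dict of per-feature importance scores, and features the model never split on are simply absent (a quick sketch using the standard API):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Average gain per split for each feature the model actually used.\n",
"bst.get_score(importance_type='gain')"
]
},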
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Plot the Receiver Operating Characteristic curve\n",
"We can use a fancier metric to determine how well our classifier is doing by plotting the [Receiver Operating Characteristic (ROC) curve](https://en.wikipedia.org/wiki/Receiver_operating_characteristic):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"y_hat = dask_xgboost.predict(client, bst, X_test).persist()\n",
"y_hat"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.metrics import roc_curve\n",
"\n",
"y_test, y_hat = dask.compute(y_test, y_hat)\n",
"fpr, tpr, _ = roc_curve(y_test, y_hat)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.metrics import auc\n",
"\n",
"fig, ax = plt.subplots(figsize=(5, 5))\n",
"ax.plot(fpr, tpr, lw=3,\n",
" label='ROC Curve (area = {:.2f})'.format(auc(fpr, tpr)))\n",
"ax.plot([0, 1], [0, 1], 'k--', lw=2)\n",
"ax.set(\n",
" xlim=(0, 1),\n",
" ylim=(0, 1),\n",
" title=\"ROC Curve\",\n",
" xlabel=\"False Positive Rate\",\n",
" ylabel=\"True Positive Rate\",\n",
")\n",
"ax.legend();\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This Receiver Operating Characteristic (ROC) curve tells how well our classifier is doing. We can tell it's doing well by how far it bends the upper-left. A perfect classifier would be in the upper-left corner, and a random classifier would follow the diagonal line.\n",
"\n",
"The area under this curve is `area = 0.76`. This tells us the probability that our classifier will predict correctly for a randomly chosen instance."
]
},
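{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a sanity check (a small sketch; `roc_auc_score` is standard scikit-learn), the same area can be computed directly from the test labels and predictions we already materialized:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.metrics import roc_auc_score\n",
"\n",
"# Should match the area reported in the ROC plot legend above.\n",
"roc_auc_score(y_test, y_hat)"
]
},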
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Learn more\n",
"* Recorded screencast stepping through the real world example above:\n",
"* A blogpost on dask-xgboost http://matthewrocklin.com/blog/work/2017/03/28/dask-xgboost\n",
"* XGBoost documentation: https://xgboost.readthedocs.io/en/latest/python/python_intro.html#\n",
"* Dask-XGBoost documentation: http://ml.dask.org/xgboost.html"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.12"
}
},
"nbformat": 4,
"nbformat_minor": 4
}