Explain an Intermediate Layer of VGG16 on ImageNet
Explaining a prediction in terms of the original input image is harder than explaining it in terms of a higher convolutional layer (because the higher convolutional layer is closer to the output). This notebook gives a simple example of how to use GradientExplainer to explain the model output with respect to the 7th layer of the pretrained VGG16 network.
Note that by default 200 samples are taken to compute the expectation. To run faster, you can lower the number of samples per explanation.
[1]:
import json
import keras.backend as K
import numpy as np
from keras.applications.vgg16 import VGG16, preprocess_input
import shap
# load pre-trained model and choose two images to explain
model = VGG16(weights="imagenet", include_top=True)
X, y = shap.datasets.imagenet50()
to_explain = X[[39, 41]]
# load the ImageNet class names
url = "https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json"
fname = shap.datasets.cache(url)
with open(fname) as f:
    class_names = json.load(f)
# explain how the input to the 7th layer of the model explains the top two classes
def map2layer(x, layer):
    feed_dict = dict(zip([model.layers[0].input], [preprocess_input(x.copy())]))
    return K.get_session().run(model.layers[layer].input, feed_dict)


e = shap.GradientExplainer(
    (model.layers[7].input, model.layers[-1].output),
    map2layer(preprocess_input(X.copy()), 7),
)
shap_values, indexes = e.shap_values(map2layer(to_explain, 7), ranked_outputs=2)
# get the names for the classes
index_names = np.vectorize(lambda x: class_names[str(x)][1])(indexes)
# plot the explanations
shap.image_plot(shap_values, to_explain, index_names)
Using TensorFlow backend.
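The expected-gradients estimate behind GradientExplainer can be illustrated with a toy model. The sketch below is not from the notebook: the linear model `f`, its gradient `grad_f`, and the sampling loop are simplified stand-ins that show the core idea, i.e. averaging `(x - x') * gradient` over random baselines `x'` and interpolation points `alpha`.

```python
# Illustrative sketch of the expected-gradients estimator (toy model, not VGG16).
import numpy as np

rng = np.random.default_rng(0)

# Toy linear model f(x) = w . x; its gradient is constant and equal to w.
w = np.array([2.0, -1.0, 0.5])


def f(x):
    return x @ w


def grad_f(x):
    return w  # gradient of a linear model does not depend on x


background = rng.normal(size=(200, 3))  # stands in for the 200 background samples
x = np.array([1.0, 2.0, 3.0])  # the input being explained

# attribution_i ~ E over baselines x' and alpha ~ U(0,1) of
#   (x_i - x'_i) * df/dx_i evaluated at x' + alpha * (x - x')
nsamples = 200
attr = np.zeros(3)
for _ in range(nsamples):
    xp = background[rng.integers(len(background))]
    alpha = rng.uniform()
    attr += (x - xp) * grad_f(xp + alpha * (x - xp))
attr /= nsamples

# Completeness check: attributions should roughly sum to f(x) - E[f(background)].
print(attr.sum(), f(x) - f(background).mean())
```

Lowering `nsamples` makes each explanation faster, at the cost of a noisier Monte Carlo estimate of this expectation.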
Explain with local smoothing
Gradient explainer uses expected gradients, which merges ideas from integrated gradients, SHAP, and SmoothGrad into a single expectation equation. To use smoothing like SmoothGrad, just set the local_smoothing parameter to something non-zero. This will add normally distributed noise with that standard deviation to the input during the expectation calculation. It can create smoother feature attributions that better capture correlated regions of the image.
[2]:
# explain how the input to the 7th layer of the model explains the top two classes
explainer = shap.GradientExplainer(
    (model.layers[7].input, model.layers[-1].output),
    map2layer(preprocess_input(X.copy()), 7),
    local_smoothing=100,
)
shap_values, indexes = explainer.shap_values(map2layer(to_explain, 7), ranked_outputs=2)
# get the names for the classes
index_names = np.vectorize(lambda x: class_names[str(x)][1])(indexes)
# plot the explanations
shap.image_plot(shap_values, to_explain, index_names)
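What local_smoothing does can be sketched in isolation. The toy function below (`f(x) = x**3`) and the perturbation loop are illustrative assumptions, not the VGG16 pipeline: gradients are averaged over inputs perturbed with N(0, sigma^2) noise, exactly the SmoothGrad-style averaging that a non-zero local_smoothing adds inside the expectation.

```python
# Illustrative sketch of SmoothGrad-style local smoothing on a toy function.
import numpy as np

rng = np.random.default_rng(0)


def grad(x):
    return 3 * x**2  # gradient of f(x) = x**3


x, sigma = 2.0, 0.5  # sigma plays the role of local_smoothing
noisy = x + sigma * rng.normal(size=10000)  # noise added during the expectation
smoothed = grad(noisy).mean()  # average gradient over the noisy neighborhood

# Analytically, E[3 (x + eps)^2] = 3 x^2 + 3 sigma^2: the smoothed gradient
# reflects the local neighborhood rather than the single point x.
print(grad(x), smoothed)
```

The larger the standard deviation, the more the attribution reflects behavior over a neighborhood of the input rather than at the exact pixel values, which is why smoothed maps tend to highlight coherent regions.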