# Using gradients to interpret neural networks

The most common approach so far has been to consider the gradients of the model's predictions with respect to its inputs. In this post, I will cover the intuition behind using these gradients, as well as two specific techniques that have come out of this: Integrated Gradients and DeepLift.

Possibly the most interpretable model - and therefore the one we will use as inspiration - is a regression.

*(Figure: feature attributions in an ice cream regression model. Inspired by the diagrams in the SHAP values paper.)*

There are two things to note from this ice cream model: firstly, the effect of the features is being compared to a baseline of what the model would predict when it can't see the features. Secondly, the sum of the feature importances (the red and blue arrows) is the difference between this baseline and the model's actual prediction for Bob.

Unfortunately, while certain machine learning algorithms (such as XGBoost) can handle null feature values (i.e. not seeing a feature), neural networks can't, so a slightly different approach will be needed to interpret them.
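To make the baseline idea concrete, here is a minimal sketch of Integrated Gradients on a toy model. The model, its weights, and the zero baseline are all illustrative assumptions (not from a real network): attributions are the input-minus-baseline difference scaled by the average gradient along the straight-line path between them, and they sum to the difference between the model's output at the input and at the baseline, just like the arrows in the ice cream diagram.

```python
import numpy as np

# Hypothetical toy model: f(x) = ReLU(w.x + b), with a hand-written gradient.
w = np.array([2.0, -1.0, 0.5])
b = -0.5

def f(x):
    return max(0.0, float(w @ x + b))

def grad_f(x):
    # (Sub)gradient of ReLU(w.x + b) with respect to x.
    return w if (w @ x + b) > 0 else np.zeros_like(w)

def integrated_gradients(x, baseline, steps=200):
    # Midpoint Riemann sum approximating the average gradient
    # along the straight-line path from baseline to x.
    alphas = (np.arange(steps) + 0.5) / steps
    avg_grad = np.mean(
        [grad_f(baseline + a * (x - baseline)) for a in alphas], axis=0
    )
    return (x - baseline) * avg_grad

x = np.array([1.0, 0.5, 2.0])
baseline = np.zeros(3)          # "can't see the features" stand-in
attr = integrated_gradients(x, baseline)
# Completeness: attributions sum to f(x) - f(baseline).
print(attr, attr.sum(), f(x) - f(baseline))
```

The zero vector plays the role of "not seeing a feature" here; in practice the choice of baseline is a modelling decision (e.g. a black image for vision models), and the completeness property above is what ties the attributions back to the baseline-versus-prediction gap in the regression picture.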