Explaining regression for images
Hello,
tl;dr I would like to use LIME for explaining regression for images.
I’m doing a self-driving car project where my model predicts the steering angle needed for the car to take a turn (for each frame independently, with no history taken into account). Every once in a while, the model inexplicably veers off the road, and I asked myself: “Can LIME help me understand what made it do that?”
In your “Roadmap” section of the contributing guidelines, point 4., I found:
Thinking through what is needed to use LIME in regression problems. An obvious problem is that features with different scales make it really hard to interpret.
On the one hand, I’ve noticed there’s a “Using lime for regression” notebook (for LimeTabularExplainer), but on the other, the LimeImageExplainer seems to work with classification only (at least judging by the API; I haven’t gone through the details yet).
How big a problem do you think this is? What would need to be done?
Issue Analytics
- Created: 6 years ago
- Reactions: 1
- Comments: 8 (2 by maintainers)
Top GitHub Comments
I had the same doubt, and what I found is that it does work by passing the regressor directly; just make sure to use explanation.top_labels[0], as there is only one label. Though, as mentioned in this thread, the explanation might not be as meaningful as in the classification case.
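To illustrate the idea, here is a minimal sketch of how a regressor can be made to fit the interface LimeImageExplainer expects from a classifier: a prediction function returning a 2-D array of shape (n_samples, n_outputs). The model here (predict_angles) is a hypothetical stand-in, not part of lime; you would substitute your own model's predict.

```python
import numpy as np

# Hypothetical stand-in for a steering-angle regressor: one scalar
# prediction per image. Replace with your own model's predict.
def predict_angles(images):
    return np.array([float(img.mean()) for img in images])

# LimeImageExplainer's classifier_fn is expected to return a 2-D array
# of shape (n_samples, n_outputs), like classifier probabilities.
# Reshaping a regressor's 1-D output into a single column makes it fit
# that interface; the explanation then lives under label 0 (which is
# also what explanation.top_labels[0] resolves to).
def regression_predict_fn(images):
    return predict_angles(images).reshape(-1, 1)

batch = np.zeros((4, 96, 96, 1))
print(regression_predict_fn(batch).shape)  # (4, 1)
```

With a wrapper like this, you would pass regression_predict_fn in place of a classifier to explain_instance and read the explanation for label 0.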
What do you mean by passing a regressor?
I’m predicting facial keypoints for images, and I tried this (my model has the input shape (96, 96, 1)):
explanation = explainer.explain_instance(faces[0], model.predict, top_labels=5, hide_color=0, num_samples=1000)
Would this work with “model.predict”, or do I have to pass something else?
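For a multi-output regressor like a keypoint model, model.predict may already fit the expected interface, since its output is 2-D. The sketch below uses a hypothetical stand-in for such a model (30 coordinates, i.e. 15 x/y pairs) to show the shape involved; the comments about labels and top_labels describe lime's explain_instance parameters as I understand them, so treat them as an assumption to verify against the docs.

```python
import numpy as np

# Hypothetical stand-in for a facial-keypoint model: maps
# (n, 96, 96, 1) images to (n, 30) coordinates (15 x/y pairs).
# Replace with your own model.predict.
def model_predict(images):
    n = images.shape[0]
    return np.tile(np.linspace(0.0, 1.0, 30), (n, 1))

# Because the output is already 2-D, each of the 30 coordinates plays
# the role of a "class" for LimeImageExplainer. Note that top_labels=5
# would then explain the 5 coordinates with the *highest predicted
# values*, which is probably not what you want for keypoints; passing
# labels=(k,) (and top_labels=None) to explain_instance should target
# coordinate k directly instead.
out = model_predict(np.zeros((2, 96, 96, 1)))
print(out.shape)  # (2, 30)
```

So model.predict itself can likely be passed as-is; the main thing to rethink is top_labels=5, which selects outputs by magnitude rather than by which keypoint you care about.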