
LimeTextExplainer returns many zeros-only explanations

See original GitHub issue

OS: Manjaro x64
Python: 3.6.11
Lime: 0.1.1.32

I am using LimeTextExplainer to explain instances of predictions from different models (Logistic Regression, SVM, XGBoost, etc.). Explaining roughly 800 instances consistently results in 300–400 explanations that contain only zeros.

For example:

Instance: ['This', 'ball', 'is', 'red', 'and', 'gold']
Explanation: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0]

I thought it might be the length of the data instance, but it isn't. There is no obvious pattern: I have very short instances with "real" explanations and very short ones with only zeros, and the same holds for long ones.

Code:

from lime.lime_text import LimeTextExplainer

# clf is a fitted classifier; text is the raw document string
explainer = LimeTextExplainer()
exp = explainer.explain_instance(text, clf.predict, num_features=len(text.split()), num_samples=1000)

Any idea what I'm missing?
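A relevant detail here: lime documents classifier_fn as a function that takes a list of d strings and outputs a (d, k) array of prediction probabilities, whereas predict returns hard labels. A quick shape check (a sketch, assuming numpy and that clf also exposes predict_proba):

import numpy as np

# clf.predict returns hard labels, shape (d,); lime's classifier_fn is
# documented to return class probabilities, shape (d, k)
print(np.asarray(clf.predict([text, text])).shape)        # e.g. (2,)
print(np.asarray(clf.predict_proba([text, text])).shape)  # e.g. (2, 2) for binary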

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 5 (3 by maintainers)

Top GitHub Comments

1 reaction
marcotcr commented, Oct 9, 2020

Since you’re using default parameters, LIME will remove words at random. Is your model’s prediction constant across all variations of ‘This ball is red and gold’? That is, what output do you get if you run the following?

clf.predict(['This ball', 'This', 'This ball is red', 'This ball is red gold', 'This ball is red and gold'])

I suggest using predict_proba to get a bit of signal from the prediction probabilities.
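Concretely, that change looks like this (a sketch, assuming clf is a fitted scikit-learn-style pipeline that accepts raw strings):

from lime.lime_text import LimeTextExplainer

explainer = LimeTextExplainer()
exp = explainer.explain_instance(
    text,
    clf.predict_proba,  # probabilities give the local linear model a continuous target
    num_features=len(text.split()),
    num_samples=1000,
)
print(exp.as_list())  # (word, weight) pairs

With hard labels from clf.predict, most perturbed samples keep the same label, so LIME's weighted linear fit sees a nearly constant target and returns all-zero weights.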

0 reactions
marcotcr commented, Dec 18, 2020

It could be that your model is not changing prediction probabilities much around those perturbations, and/or that the explanation is just the intercept (see #304, for example).
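Both possibilities can be checked directly (a sketch, assuming clf accepts raw strings, the task is binary, and exp was produced with predict_proba as above):

import numpy as np

# 1) How much do the probabilities actually move under word-dropping perturbations?
variants = ['This ball', 'This', 'This ball is red', 'This ball is red and gold']
print(np.ptp(clf.predict_proba(variants)[:, 1]))  # near 0 => locally flat model

# 2) Is the local linear fit dominated by its intercept?
print(exp.intercept)   # per-label intercept of the fitted local model
print(exp.local_exp)   # per-label list of (feature_id, weight) pairs
print(exp.score)       # goodness of the local fit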

Read more comments on GitHub

Top Results From Across the Web

lime package — lime 0.1 documentation
Returns the explanation as an HTML page. Parameters: labels – desired labels to show explanations for (as bar charts). …

Explaining text classification predictions with LIME
This post looks at a particular technique called LIME that aims to make individual predictions interpretable. The algorithm does so by fitting …

Interpreting Machine Learning Models Using LIME and SHAP
As shown in the table, removing the word “lazy” decreased the predicted negative score to zero, while all other perturbations had a negligible …

Chapter 4. Model Explainability and Interpretability
Some models are easy to explain because their individual parameters correspond to decision points that humans can easily understand, such as linear …

Understanding lime
The simple model can then be used to explain the predictions of the … as this is the only thing establishing the locality …
