Stuck on an issue?

Lightrun Answers was designed to reduce the constant googling that comes with debugging third-party libraries. It collects links to all the places you might be looking while hunting down a tough bug.

And, if you’re still stuck at the end, we’re happy to hop on a call to see how we can help out.

Interpret explainer module question

See original GitHub issue

I ran a forecasting experiment in Azure ML and it chose the SeasonalAverage algorithm as the best model. In the explanation, only the target column ‘WeeklySales’ had any importance; none of the other input columns (CustomerType, CustomerClass) seem to matter. I wanted to force the model to use the other input columns, but I couldn’t find a way to do that in my Python script, so I opened an MS support case. They came back and said there is no way to force the model to use the other input columns; instead, I could use the interpret package explainers to understand why. I tried the TabularExplainer, MimicExplainer and PFIExplainer modules, but I get the error “For a forecasting model, predict is not supported. Please use forecast instead.” I do use the forecasting task in my automl_config. MS support suggested that I get in touch with the developers, and they also noted that the explainer modules might not have been designed for time series.

I would appreciate your suggestion.

Thank you, Kannan
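For anyone hitting the same error: the explainers call the model’s predict method, which AutoML forecasting pipelines block in favour of forecast. One commonly suggested workaround is a thin wrapper that exposes forecast as predict before handing the model to an explainer. The sketch below is illustrative only; ForecastWrapper, fitted_model, X_train and X_test are assumed names, not part of the original script, and the explainer may pass plain numpy arrays where the forecasting pipeline expects a DataFrame with the time column, so some reshaping may still be needed.

```python
# Illustrative sketch only: route the explainer's predict() calls to forecast().
# fitted_model, X_train and X_test are placeholder names, not from the script.
from interpret.ext.blackbox import TabularExplainer


class ForecastWrapper:
    """Expose a forecasting model's forecast() as predict() for explainers."""

    def __init__(self, forecasting_model):
        self.model = forecasting_model

    def predict(self, X):
        # AutoML forecasting pipelines return (predictions, transformed_X);
        # keep only the predictions so the explainer sees a 1-D output.
        y_pred, _ = self.model.forecast(X)
        return y_pred


wrapped_model = ForecastWrapper(fitted_model)   # fitted_model: best run's pipeline

explainer = TabularExplainer(
    wrapped_model,
    X_train,                                    # background / initialization data
    features=list(X_train.columns),
)
global_explanation = explainer.explain_global(X_test)
print(global_explanation.get_feature_importance_dict())
```

If the wrapper approach proves brittle, the helpers in azureml.train.automl.runtime.automl_explain_utilities (for example automl_setup_model_explanations) may be the more supported route for explaining AutoML forecasting models.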

Issue Analytics

  • State: open
  • Created: a year ago
  • Comments: 7

Top GitHub Comments

1 reaction
kannanthiru1 commented, Apr 7, 2022

Hello Ilya,

In the attached script ‘SKU Forecasting 12’,

  1. for the TabularExplainer, I am getting the error at line 155
  2. for the MimicExplainer, the error is at line 172
  3. for the PFIExplainer, the error is at line 185

I would appreciate your help. Kannan
SKU Forecasting 12.txt
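The same wrapped model can, in principle, be handed to the other two explainers that fail at lines 172 and 185 of the attached script. The snippet below is a hedged sketch against the interpret-community API; wrapped_model, X_train, X_test, y_test and the LightGBM surrogate are assumptions, not taken from the attached script.

```python
# Illustrative sketch only; wrapped_model, X_train, X_test and y_test are
# placeholders, and the LightGBM surrogate is just one possible choice.
from interpret.ext.blackbox import MimicExplainer, PFIExplainer
from interpret.ext.glassbox import LGBMExplainableModel

# MimicExplainer trains an interpretable surrogate on the wrapped model's output.
mimic_explainer = MimicExplainer(
    wrapped_model,
    X_train,
    LGBMExplainableModel,
    features=list(X_train.columns),
)
mimic_global = mimic_explainer.explain_global(X_test)

# PFIExplainer permutes each feature and measures the drop in model performance,
# so it needs the true labels for the evaluation data.
pfi_explainer = PFIExplainer(
    wrapped_model,
    features=list(X_train.columns),
)
pfi_global = pfi_explainer.explain_global(X_test, true_labels=y_test)
```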

0 reactions
kannanthiru1 commented, Apr 27, 2022

Thank you, Ilya, for following up. I deeply appreciate it. Kannan

Read more comments on GitHub >

Top Results From Across the Web

Model interpretability - Azure Machine Learning
Learn how your machine learning model makes predictions during training ... Interpretability helps answer questions in scenarios such as:.
Read more >
About mimic explainer: · Issue #311 - GitHub
My question is, is it appropriate to use Mimic's LightGBM to explain my LightGBM model?(does it make sense?) Or another way to put...
Read more >
Local Model Interpretation: An Introduction - Gilbert Tanner
Local model interpretation is a set of techniques aimed at answering questions like: Why did the model make this specific prediction? What effect...
Read more >
ML Model Interpretation Tools: What, Why, and How to Interpret
Already, we can ask a couple of follow-up questions: How trustworthy are these predictions? Are they reliable enough to make big decisions? Model...
Read more >
How to interpret and explain your machine learning models ...
We explain what SHAP values are, walk you through a real life example, and outline how you can use them to interpret &...
Read more >
