Online predictions with LIME (or how to pickle an Explainer object?)


Hi,

I’m considering deploying LIME in a production environment: say, the user submits the relevant columns, and a server is listening, runs LIME, and returns the LIME explanation.

I’m not sure how to go about this, but I thought the best approach would be to somehow pickle the Explainer object, which could then be loaded on demand and fed the user input. Would something like that work? I’m also not sure whether it would still require LIME and scikit-learn to be installed and loaded each time.
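
For concreteness, here is a minimal sketch of the pattern described above, using lime’s LimeTabularExplainer with a scikit-learn model; the data, feature names, and model are placeholders, and serializing the explainer is what the comments below address:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from lime.lime_tabular import LimeTabularExplainer

    # Train a model offline (placeholder data standing in for the real training set)
    X_train = np.random.rand(100, 4)
    y_train = np.random.randint(0, 2, 100)
    model = RandomForestClassifier().fit(X_train, y_train)

    # Build the explainer once, offline, from the training data
    explainer = LimeTabularExplainer(
        X_train,
        feature_names=['f0', 'f1', 'f2', 'f3'],
        class_names=['no', 'yes'],
        mode='classification',
    )

    # At serving time, feed a single user-supplied row to the explainer
    user_row = np.random.rand(4)
    explanation = explainer.explain_instance(user_row, model.predict_proba, num_features=4)
    print(explanation.as_list())

Whether this explainer object can be serialized with the standard pickle module is exactly what the discussion below is about.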

Issue Analytics

  • State: closed
  • Created: 6 years ago
  • Comments: 9 (5 by maintainers)

Top GitHub Comments

marcotcr commented on Jan 19, 2018 (3 reactions)

No, no reason. I think the only place we store lambda functions is here, and they could easily be removed from there.
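
As an aside (not from the thread), a small illustration of why stored lambdas matter here: the standard pickle module cannot serialize an object that holds a lambda attribute, whereas dill, suggested below, serializes it by value:

    import pickle
    import dill

    class Holder:
        def __init__(self):
            # A lambda attribute is enough to break standard pickling
            self.fn = lambda x: x + 1

    obj = Holder()

    try:
        pickle.dumps(obj)
    except Exception as e:
        print("pickle failed:", e)   # can't pickle the local lambda

    data = dill.dumps(obj)           # dill serializes the lambda by value
    restored = dill.loads(data)
    print(restored.fn(1))            # 2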

gregross commented on Apr 11, 2019 (1 reaction)

Dill is working well for me in the context of the OP’s question.

To save:

    import dill

    with open(explainer_filename, 'wb') as f:
        dill.dump(explainer, f)

To load:

    with open(explainer_filename, 'rb') as f:
        explainer = dill.load(f)
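
To close the loop on the original question: dill (like pickle) only serializes the explainer object itself, so lime, scikit-learn, and the model’s prediction function still need to be importable in the process that calls dill.load; once loaded, the explainer is used exactly as before via explainer.explain_instance(user_row, predict_fn).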


