Online predictions with LIME (or how to pickle an Explainer object?)
Hi,
I’m considering deploying LIME in a production environment: say, a user submits the relevant columns, and a server that is listening uses LIME to return an explanation. I’m not sure how to go about this, but I thought the best approach would be to pickle the Explainer object, which could then be read on demand and fed the user input. Would something like that work? I’m also not sure whether it would still require LIME and scikit-learn to be installed and loaded each time.
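A minimal sketch of that workflow, assuming a tabular scikit-learn classifier (the model, feature names, and file path below are illustrative, not from the issue). It uses dill rather than the standard pickle module because, as the comments below note, the explainer has stored lambda functions that plain pickle cannot serialize:

```python
import dill
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# --- offline: train a model and build the explainer once ---
X_train = np.random.rand(200, 4)
y_train = np.random.randint(0, 2, size=200)
model = RandomForestClassifier().fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=['f0', 'f1', 'f2', 'f3'],  # illustrative names
    class_names=['negative', 'positive'],
    mode='classification',
)

with open('explainer.dill', 'wb') as f:
    dill.dump(explainer, f)

# --- online: the serving process loads the explainer on demand ---
# Note: deserializing still requires lime (and scikit-learn, for the
# model) to be installed in the serving environment; serialization
# stores references to classes, not their source code.
with open('explainer.dill', 'rb') as f:
    explainer = dill.load(f)

user_row = np.random.rand(4)  # stand-in for the user's input columns
explanation = explainer.explain_instance(
    user_row, model.predict_proba, num_features=4)
print(explanation.as_list())
```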
Issue Analytics
- State:
- Created 6 years ago
- Comments: 9 (5 by maintainers)
Top GitHub Comments
No, no reason. I think the only place we store lambda functions is here, and they could easily be removed from there.
Dill is working well for me in the context of the OP’s question.
To save:
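```python
import dill

# Presumed counterpart of the load snippet below: dill.dump serializes
# the fitted explainer; explainer_filename is a path string.
with open(explainer_filename, 'wb') as f:
    dill.dump(explainer, f)
```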
To load:

```python
# Deserialize the explainer on demand.
with open(explainer_filename, 'rb') as f:
    explainer = dill.load(f)
```
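As a follow-up to the comments above, one pattern for the OP’s server setting is to deserialize the explainer once at startup and reuse it for every request. Note that dill can serialize the lambda functions mentioned above, which the standard pickle module rejects, but the serving environment still needs lime and scikit-learn installed for deserialization to succeed. A sketch under those assumptions (the names `explainer_filename` and `explain` are illustrative):

```python
import dill
import numpy as np

explainer_filename = 'explainer.dill'  # assumed path, as in the snippets above

# Load once at startup, not per request.
with open(explainer_filename, 'rb') as f:
    explainer = dill.load(f)

def explain(row, predict_fn, num_features=5):
    """Return LIME (feature, weight) pairs for one user-supplied row."""
    explanation = explainer.explain_instance(
        np.asarray(row, dtype=float), predict_fn, num_features=num_features)
    return explanation.as_list()
```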