Is it possible to do incremental training on LimeTabularExplainer?
Hi, I have some data, I fit a model and store it. Later I get new data, and I don't want to retrain on the full dataset, so I fit only the new data. Is it possible to create the explainer as an incremental fit on the new data as well?
import lime.lime_tabular
from sklearn.preprocessing import MinMaxScaler
from pyod.models.iforest import IForest

feature_names = ['f1', 'f2']   # placeholder names for the two columns (not in the original snippet)

data = [[1, 2], [0.5, 6], [0, 10], [1, 18]]
scaler = MinMaxScaler()
scaler.partial_fit(data)
sc_data = scaler.transform(data)

model1 = IForest(contamination=0.1).fit(sc_data)

explainer = lime.lime_tabular.LimeTabularExplainer(sc_data,
                                                   mode='classification',
                                                   feature_names=feature_names,
                                                   kernel_width=5,
                                                   random_state=42,
                                                   discretize_continuous=False)
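(For context, serving with the stored objects would look roughly like the sketch below. The incoming row is made up, and it assumes PyOD detectors expose predict_proba and that LIME's standard explain_instance call is used.)

# Hedged serving sketch: explain one incoming sample with the persisted objects.
row = scaler.transform([[0.7, 9]])[0]        # hypothetical incoming sample
exp = explainer.explain_instance(
    row,
    model1.predict_proba,                    # PyOD detectors expose predict_proba
    num_features=2)
print(exp.as_list())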
I store the model, scaler, and explainer for serving purposes. After some time I get more data, so I fit the new data to the same model. Is something similar possible for the explainer as well?
data2 = [[15, 12], [15, 16], [0, 11], [1, 18]]

scaler = load(scaler)               # load the persisted scaler (e.g. via joblib/pickle)
loaded_model = load(model1)         # load the persisted model
scaler.partial_fit(data2)
sc_data2 = scaler.transform(data2)

model2 = loaded_model.fit(sc_data2)

explainer = lime.lime_tabular.LimeTabularExplainer(????????????????)
Thanks in advance for the inputs.
Issue Analytics
- Created 3 years ago
- Comments: 12 (5 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
You will have to have a separate object keeping track of the running averages for feature frequencies. I assume you’re not updating the discretizer every time, so everything else should stay the same. What you can do is apply the discretizer from the original explainer on the new data and update the frequencies.
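(A minimal sketch of that idea, assuming the explainer was built with discretize_continuous=True, so that explainer.discretizer exists and its discretize() maps raw rows to integer bin ids as in lime.discretize.BaseDiscretizer. The tracker class below is illustrative and not part of the LIME API.)

import numpy as np

class RunningFrequencies:
    """Hypothetical helper: keep running per-feature bin counts across batches."""
    def __init__(self, n_features, n_bins=4):            # 4 bins for the default quartile discretizer
        self.counts = np.zeros((n_features, n_bins))      # running per-feature bin counts

    def update(self, discretized_batch):
        # discretized_batch: output of explainer.discretizer.discretize(new_rows)
        for f in range(discretized_batch.shape[1]):
            bins, c = np.unique(discretized_batch[:, f].astype(int), return_counts=True)
            self.counts[f, bins] += c

    def feature_frequencies(self):
        # One frequency vector per feature, mirroring what LIME keeps internally.
        return {f: (self.counts[f] / self.counts[f].sum()).tolist()
                for f in range(self.counts.shape[0])}

# Reuse the ORIGINAL explainer's discretizer on every new batch and only
# update the counts; the discretizer itself is left untouched.
freqs = RunningFrequencies(n_features=2)
freqs.update(explainer.discretizer.discretize(sc_data))
freqs.update(explainer.discretizer.discretize(sc_data2))   # later batch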
You can use the training_data_stats parameter (description here). Best,
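(And a hedged sketch of the training_data_stats route. The dict keys below — means, mins, maxs, stds, feature_values, feature_frequencies — are the ones the LimeTabularExplainer docstring asks for, but everything else is illustrative: the stats are computed by stacking the batches only to keep the example short (in practice you would keep running statistics, e.g. the tracker sketched above), discretize_continuous is flipped to True because the parameter only matters with discretization enabled, and depending on your LIME version the stats-backed discretizer may expect additional entries, so check validate_training_data_stats for your install.)

import numpy as np
import lime.lime_tabular

feature_names = ['f1', 'f2']                 # placeholder names, as above

# In practice these would come from running trackers, not from stacking the raw history.
seen = np.vstack([sc_data, sc_data2])

feature_values, feature_frequencies = {}, {}
for f in range(seen.shape[1]):
    vals, counts = np.unique(seen[:, f], return_counts=True)
    feature_values[f] = vals.tolist()
    feature_frequencies[f] = (counts / counts.sum()).tolist()

training_data_stats = {
    'means': seen.mean(axis=0).tolist(),
    'mins':  seen.min(axis=0).tolist(),
    'maxs':  seen.max(axis=0).tolist(),
    'stds':  seen.std(axis=0).tolist(),
    'feature_values': feature_values,
    'feature_frequencies': feature_frequencies,
}

explainer2 = lime.lime_tabular.LimeTabularExplainer(
    sc_data2,                                # some representative data is still required
    mode='classification',
    feature_names=feature_names,
    kernel_width=5,
    random_state=42,
    discretize_continuous=True,              # training_data_stats is only used with discretization on
    training_data_stats=training_data_stats)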