Save training and validation loss in `loss_curve_` in MLPClassifier and MLPRegressor with early_stopping
Let's take MLPClassifier as an example. MLPClassifier exposes a `loss_curve_` attribute. If `early_stopping` is enabled, then part of the training data is held out as a validation set. Can we save the loss on both the training and validation data in `loss_curve_` as well?
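For reference, a minimal sketch of the current behaviour (assuming a recent scikit-learn with `early_stopping=True`): `loss_curve_` records only the training loss, while `validation_scores_` records the validation *score* (accuracy for MLPClassifier), not the validation loss:

```python
# Sketch of what is available today: loss_curve_ holds only the training
# loss; validation_scores_ (set only when early_stopping=True) holds the
# validation accuracy, not the validation loss requested in this issue.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, random_state=0)
clf = MLPClassifier(early_stopping=True, validation_fraction=0.1,
                    max_iter=200, random_state=0)
clf.fit(X, y)

print(clf.loss_curve_[-5:])         # training loss for the last epochs
print(clf.validation_scores_[-5:])  # validation accuracy, not loss
```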
Additional context
I've compared this MLP implementation with a TensorFlow implementation and it works very well; there are no significant differences in performance. You can read the comparison details in my blog post. I'm using the MLP in my AutoML package mljar-supervised, which creates Markdown reports for each model, and I would like to have learning curves available in the report.
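As an illustration of the reporting use case, here is a hedged sketch of plotting the curves that are available today with matplotlib; since the validation loss requested in this issue is not exposed, the validation accuracy from `validation_scores_` stands in for it:

```python
# Sketch: plot training loss alongside validation accuracy so the learning
# curve can be embedded as an image in a Markdown report. validation_scores_
# (accuracy) is a stand-in for the validation loss requested here.
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, random_state=0)
clf = MLPClassifier(early_stopping=True, max_iter=200,
                    random_state=0).fit(X, y)

fig, ax = plt.subplots()
ax.plot(clf.loss_curve_, label="training loss")
ax.plot(clf.validation_scores_, label="validation accuracy")
ax.set_xlabel("epoch")
ax.legend()
fig.savefig("learning_curves.png")  # e.g. linked from the report
```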
I personally did not follow the advancement of this feature, so I don't know if there is any blocker regarding the API. It is probably too soon to expect it in 0.24, since we should release pretty soon, but it could be a candidate for the 0.25 milestone.
That's a strange decision; the sklearn MLP works pretty well. I did a comparison of sklearn's MLP vs Keras+TF: the sklearn MLP performs very well and was faster for CPU computation. Check the comparison here: https://mljar.com/blog/tensorflow-vs-scikit-learn/ Not every NN must be deep or computed on a GPU.