Implementing a Model Evaluation Function for GANs/WGANs
Hi everyone! In a recent issue, I discovered that DeepChem seems to lack native support for evaluating GANs. I wonder what steps we can take so that functions like `.evaluate()` are available for GANs.
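For concreteness, here is a minimal sketch of the kind of evaluation I have in mind, using maximum mean discrepancy (MMD) as a stand-in two-sample metric. The helper `evaluate_gan_mmd` and the kernel bandwidth are placeholders of my own, not existing DeepChem API:

```python
import numpy as np

def evaluate_gan_mmd(real, generated, bandwidth=1.0):
    """Hypothetical helper: biased estimate of squared maximum mean
    discrepancy (MMD) between real and generated samples, RBF kernel.
    Lower values mean the generated distribution is closer to the data."""
    def rbf(a, b):
        # Pairwise squared Euclidean distances, then Gaussian kernel.
        d2 = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
        return np.exp(-d2 / (2 * bandwidth**2))
    return (rbf(real, real).mean() + rbf(generated, generated).mean()
            - 2 * rbf(real, generated).mean())

# Toy demonstration with synthetic arrays; with a trained DeepChem GAN one
# would instead draw samples, e.g.
#   generated = gan.predict_gan_generator(batch_size=500)
# and take `real` from the training dataset.
rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(500, 2))
generated = rng.normal(0.1, 1.1, size=(500, 2))
print("MMD^2:", evaluate_gan_mmd(real, generated))
```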
Issue Analytics
- Created: 3 years ago
- Comments: 7 (7 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Hey @alat-rights, currently, to evaluate a normalizing flow, the negative log likelihood (NLL) is used. Roughly speaking, the model tries to maximize the likelihood that generated samples were drawn from the target distribution.
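To make the metric concrete, here is a minimal, framework-agnostic sketch of NLL evaluation. The `log_prob` function below is a plain Gaussian from SciPy standing in for a trained flow's log-density; the actual DeepChem normalizing-flow API may expose this differently:

```python
import numpy as np
from scipy.stats import norm

def negative_log_likelihood(log_prob_fn, samples):
    """Mean NLL of held-out samples under a model's log-density.
    Lower is better: the model assigns higher probability to real data."""
    return -np.mean(log_prob_fn(samples))

# Stand-in for a trained flow: the log-density of a unit Gaussian.
log_prob = lambda x: norm.logpdf(x, loc=0.0, scale=1.0)

held_out = np.random.default_rng(0).normal(size=1000)
print("NLL:", negative_log_likelihood(log_prob, held_out))
```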
Other specific metrics for evaluating generative models are discussed in detail in the GuacaMol paper. It would be really helpful to have some of these evaluation metrics (like KL divergence) available, or even to integrate GuacaMol into DeepChem so that any generative DeepChem model can be benchmarked.
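As a rough illustration of the KL divergence idea, the sketch below compares one scalar descriptor's distribution in a reference set against generated samples using histograms. This is a simplification: as I understand it, GuacaMol computes KL over several physicochemical descriptors and averages exp(-KL) into its final score, so the helper and descriptor choice here are illustrative only:

```python
import numpy as np
from scipy.stats import entropy

def kl_divergence(reference, generated, bins=50, eps=1e-10):
    """Histogram-based KL(reference || generated) for one scalar
    descriptor (e.g. molecular weight or logP of sampled molecules)."""
    lo = min(reference.min(), generated.min())
    hi = max(reference.max(), generated.max())
    p, edges = np.histogram(reference, bins=bins, range=(lo, hi), density=True)
    q, _ = np.histogram(generated, bins=edges, density=True)
    # Smooth empty bins to keep the divergence finite.
    return entropy(p + eps, q + eps)

rng = np.random.default_rng(0)
ref = rng.normal(300.0, 50.0, 2000)  # e.g. molecular weights, training set
gen = rng.normal(320.0, 60.0, 2000)  # e.g. molecular weights, model samples
kl = kl_divergence(ref, gen)
print("KL:", kl, "-> GuacaMol-style score:", np.exp(-kl))
```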
Just wanted to check: what exact value-add will this have over what @ncfrey demonstrated in the normalizing flow tutorials?