
Implementing a Model Evaluation Function for GANs/WGANs

See original GitHub issue

Hi everyone! In a recent issue, I discovered that DeepChem seems to lack native support for evaluating GANs. I wonder what steps we could take to make functions like .evaluate() available for GANs.
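For WGANs in particular, one natural evaluation signal is how far the generated distribution is from the data distribution. A minimal, framework-agnostic sketch of what such an evaluation helper could compute is below — `evaluate_gan` and the `sample_fn` callable are hypothetical names standing in for a trained generator, not part of any existing DeepChem API:

```python
import numpy as np

def evaluate_gan(sample_fn, real_data, n_samples=1000, seed=0):
    """Hypothetical evaluation helper: compares generated samples to real
    data via the empirical 1-D Wasserstein-1 distance (lower is better).
    `sample_fn(n)` stands in for a trained generator's sampling method."""
    rng = np.random.default_rng(seed)
    fake = np.asarray(sample_fn(n_samples)).ravel()
    real = rng.choice(np.asarray(real_data).ravel(), size=n_samples, replace=True)
    # With equal sample counts, W1 reduces to the mean gap between
    # sorted samples of the two empirical distributions.
    return float(np.mean(np.abs(np.sort(fake) - np.sort(real))))

# Toy check: a "generator" matching the target distribution scores lower
# than one that is offset from it.
rng = np.random.default_rng(1)
real = rng.normal(0.0, 1.0, size=5000)
good = evaluate_gan(lambda n: rng.normal(0.0, 1.0, n), real)
bad = evaluate_gan(lambda n: rng.normal(3.0, 1.0, n), real)
print(good < bad)
```

The Wasserstein-1 distance is a natural fit here because it is the very quantity a WGAN critic is trained to estimate.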

Issue Analytics

  • State: open
  • Created: 3 years ago
  • Comments: 7 (7 by maintainers)

Top GitHub Comments

1 reaction
ncfrey commented, Nov 23, 2020

Hey @alat-rights, currently you can see here that to evaluate a normalizing flow, the “negative log likelihood” is used. Roughly speaking, the model tries to maximize the likelihood that generated samples were drawn from the target distribution.
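The negative-log-likelihood idea can be illustrated with a toy example. The sketch below is not the DeepChem tutorial code — it assumes a hypothetical 1-D affine flow so the change-of-variables formula log p(x) = log N(z; 0, 1) + log|dz/dx| stays visible in a few lines of NumPy:

```python
import numpy as np

def flow_nll(x, mu, log_sigma):
    """Negative log-likelihood under a toy 1-D affine flow.

    The flow maps data to a standard-normal base distribution,
    z = (x - mu) * exp(-log_sigma); by the change-of-variables formula,
    log p(x) = log N(z; 0, 1) - log_sigma."""
    z = (x - mu) * np.exp(-log_sigma)
    log_base = -0.5 * (z ** 2 + np.log(2 * np.pi))
    return float(-np.mean(log_base - log_sigma))

rng = np.random.default_rng(0)
data = rng.normal(2.0, 0.5, size=10_000)

# A flow whose parameters match the data achieves a lower NLL than a
# mismatched one -- exactly the quantity the model is trained to minimize.
fitted = flow_nll(data, mu=2.0, log_sigma=np.log(0.5))
mismatched = flow_nll(data, mu=0.0, log_sigma=0.0)
print(fitted < mismatched)
```

Minimizing this NLL is equivalent to maximizing the likelihood that samples were drawn from the target distribution, which is the training objective the comment above describes.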

Other specific metrics for evaluating generative models are discussed in detail in the GuacaMol paper. It would be really helpful to have some of these evaluation metrics (like KL divergence) available, or even to integrate GuacaMol into DeepChem so that any generative DeepChem model can be benchmarked.
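For the KL-divergence metric, GuacaMol compares the distributions of molecular property values between a reference set and generated molecules. A generic histogram-based sketch of that computation, using plain NumPy on 1-D property samples (the function name and binning scheme are illustrative assumptions, not GuacaMol's exact implementation):

```python
import numpy as np

def histogram_kl(reference, generated, bins=50, eps=1e-10):
    """Estimate KL(reference || generated) between two sample sets from
    shared-bin histograms, in the spirit of GuacaMol's KL-divergence
    benchmark over property distributions."""
    lo = min(reference.min(), generated.min())
    hi = max(reference.max(), generated.max())
    p, edges = np.histogram(reference, bins=bins, range=(lo, hi), density=True)
    q, _ = np.histogram(generated, bins=bins, range=(lo, hi), density=True)
    width = edges[1] - edges[0]
    # Convert densities to probabilities; eps avoids log(0) in empty bins.
    p = p * width + eps
    q = q * width + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(0)
ref = rng.normal(0, 1, 10_000)
close = histogram_kl(ref, rng.normal(0, 1, 10_000))
far = histogram_kl(ref, rng.normal(2, 1, 10_000))
print(close < far)
```

A metric like this could plug into an `.evaluate()`-style method for any generative DeepChem model, since it only needs samples from the model and a reference dataset.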

0 reactions
alat-rights commented, Nov 23, 2020

Just wanted to check: what exact value-add will this have over what @ncfrey demonstrated in the normalizing flow tutorials?
