
Loss metric to use required_output_keys

See original GitHub issue

🚀 Feature

Currently, if we have custom metrics that require data other than y_pred and y, we suggest doing the following:

metrics = {
    "Accuracy": Accuracy(),
    "Loss": Loss(criterion, output_transform=lambda out_dict: (out_dict["y_pred"], out_dict["y"])),
    "CustomMetric": CustomMetric()
}

evaluator = create_supervised_evaluator(
    model, 
    metrics=metrics, 
    output_transform=lambda x, y, y_pred: {"x": x, "y": y, "y_pred": y_pred}
)

where CustomMetric is defined as

class CustomMetric(Metric):

    required_output_keys = ("y_pred", "y", "x")

The idea is to extend this to the Loss metric so that it also supports required_output_keys. The main difficulty with Loss today is its optional (prediction, target, kwargs) input format, where kwargs is a dict of extra arguments for the criterion function.
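The key-selection mechanism described above can be sketched in plain Python. This is a hypothetical illustration of how a metric base class could use required_output_keys to pull positional arguments out of a dict output; MetricSketch and update_from_output are made-up names, not ignite's actual implementation.

```python
class MetricSketch:
    """Minimal stand-in for a metric base class (illustrative only)."""

    required_output_keys = None  # subclasses override with a tuple of keys

    def update_from_output(self, output):
        # If the engine's output is a dict and the metric declares keys,
        # select those keys in order and pass them positionally to update().
        if isinstance(output, dict) and self.required_output_keys is not None:
            output = tuple(output[k] for k in self.required_output_keys)
        return self.update(*output)


class CustomMetricSketch(MetricSketch):
    required_output_keys = ("y_pred", "y", "x")

    def update(self, y_pred, y, x):
        # A real metric would accumulate state here; we just record the args.
        self.last = (y_pred, y, x)


m = CustomMetricSketch()
m.update_from_output({"x": 1, "y": 2, "y_pred": 3})
# m.last is now (3, 2, 1): values were picked out of the dict
# in the order declared by required_output_keys.
```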

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Reactions: 1
  • Comments: 12 (5 by maintainers)

Top GitHub Comments

1 reaction
01-vyom commented, Jun 2, 2021

Yes, it is clear. I will change the argument, and add demo code to the docs along with a statement that we accept a dictionary argument. I will make a PR soon. Thank you for explaining the issue.

0 reactions
vfdev-5 commented, Jun 2, 2021

@01-vyom thanks for taking the time to study this issue. I agree it was not clearly stated what we would like to do here. Sorry about that.

The only thing to do here is to update the current implementation of Loss by defining required_output_keys = ("y_pred", "y", "criterion_kwargs") instead of None, and to update the docs to say that we can now interpret the output's keys if it is a dictionary, as described here: https://pytorch.org/ignite/metrics.html#ignite.metrics.Metric.required_output_keys

The main idea is to simplify the code:

  • BEFORE
metrics = {
    "Accuracy": Accuracy(),
    "Loss": Loss(criterion, output_transform=lambda out_dict: (out_dict["y_pred"], out_dict["y"])),
    "CustomMetric": CustomMetric()
}

evaluator = create_supervised_evaluator(
    model, 
    metrics=metrics, 
    output_transform=lambda x, y, y_pred: {"x": x, "y": y, "y_pred": y_pred}
)
  • AFTER
metrics = {
    "Accuracy": Accuracy(),
    "Loss": Loss(criterion),
    "CustomMetric": CustomMetric()
}

evaluator = create_supervised_evaluator(
    model, 
    metrics=metrics, 
    output_transform=lambda x, y, y_pred: {"x": x, "y": y, "y_pred": y_pred}
)

And if we are in a use case where the user's criterion requires some kwargs, i.e. criterion(y_pred, y, **kwargs), then our code should work almost like you suggested above:

metrics = {
    "Accuracy": Accuracy(),
    "Loss": Loss(criterion),
    "CustomMetric": CustomMetric()
}

# global criterion kwargs
criterion_kwargs = {...}

evaluator = create_supervised_evaluator(
    model, 
    metrics=metrics, 
    output_transform=lambda x, y, y_pred: {"x": x, "y": y, "y_pred": y_pred, "criterion_kwargs": criterion_kwargs}
)
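The intended dispatch for this case can be sketched in plain Python. This is a hypothetical illustration assuming required_output_keys = ("y_pred", "y", "criterion_kwargs"); LossSketch and update_from_output are made-up names, not ignite's actual Loss implementation.

```python
class LossSketch:
    """Illustrative loss-averaging metric that reads a dict output."""

    required_output_keys = ("y_pred", "y", "criterion_kwargs")

    def __init__(self, criterion):
        self._criterion = criterion
        self._sum = 0.0
        self._n = 0

    def update_from_output(self, out_dict):
        # "criterion_kwargs" falls back to an empty dict when absent, so a
        # plain {"y_pred": ..., "y": ...} output keeps working unchanged.
        y_pred = out_dict["y_pred"]
        y = out_dict["y"]
        kwargs = out_dict.get("criterion_kwargs", {})
        self._sum += self._criterion(y_pred, y, **kwargs)
        self._n += 1

    def compute(self):
        # Average loss over all updates.
        return self._sum / self._n


# Toy criterion that accepts an extra keyword argument.
def l1(y_pred, y, scale=1.0):
    return abs(y_pred - y) * scale


loss = LossSketch(l1)
loss.update_from_output({"y_pred": 3.0, "y": 1.0, "criterion_kwargs": {"scale": 0.5}})
loss.update_from_output({"y_pred": 2.0, "y": 2.0})
# loss.compute() -> 0.5, i.e. (2.0 * 0.5 + 0.0) / 2
```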

Please let me know if it is still unclear.

Read more comments on GitHub >
