Loss metric to use required_output_keys
🚀 Feature
Currently, if we have custom metrics that require data other than `y_pred` and `y`, we suggest doing the following:
```python
metrics = {
    "Accuracy": Accuracy(),
    "Loss": Loss(criterion, output_transform=lambda out_dict: (out_dict["y_pred"], out_dict["y"])),
    "CustomMetric": CustomMetric(),
}

evaluator = create_supervised_evaluator(
    model,
    metrics=metrics,
    output_transform=lambda x, y, y_pred: {"x": x, "y": y, "y_pred": y_pred},
)
```
where `CustomMetric` is defined as

```python
class CustomMetric(Metric):
    required_output_keys = ("y_pred", "y", "x")
```
The idea is to extend this so that the `Loss` metric also supports `required_output_keys`. The main issue with `Loss` right now is its optional `(prediction, target, kwargs)` input, where `kwargs` is a dict of extra arguments for the criterion function.
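To illustrate that kwargs path as it works today: `Loss.update` accepts either `(y_pred, y)` or `(y_pred, y, kwargs)`, and the third element is forwarded to the criterion as keyword arguments. A sketch of the current pattern, reusing `criterion` from the snippet above (the `"weight"` entry is an assumed example of such an extra argument):

```python
from ignite.metrics import Loss

# current pattern: the extra criterion arguments must be packed into the
# 3-tuple (y_pred, y, kwargs) by an explicit output_transform
loss_metric = Loss(
    criterion,
    output_transform=lambda out: (
        out["y_pred"],
        out["y"],
        {"weight": out["weight"]},  # assumed extra criterion argument
    ),
)
```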
Yes, it is clear. I will change the argument and add demo code, as well as a statement in the docs that we accept a dictionary argument. I will make a PR soon. Thank you for explaining the issue.
@01-vyom thanks for taking the time to study this issue. I agree it was not clearly stated what we would like to do here. Sorry about that.
The only thing to do here is to update the current implementation of `Loss` by defining `required_output_keys = ("y_pred", "y", "criterion_kwargs")` instead of `None`, and to update the docs to say that we can now interpret the output's keys if the output is a dictionary, as described here: https://pytorch.org/ignite/metrics.html#ignite.metrics.Metric.required_output_keys

The main idea is to simplify the code, as sketched below.
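A sketch of what the simplified usage could look like once `Loss` defines `required_output_keys` (this is the intended outcome rather than the merged implementation, and it assumes the `"criterion_kwargs"` entry is treated as optional when absent from the dict):

```python
from ignite.engine import create_supervised_evaluator
from ignite.metrics import Accuracy, Loss

# with required_output_keys defined on Loss, no per-metric
# output_transform is needed when the evaluator emits a dict
metrics = {
    "Accuracy": Accuracy(),
    "Loss": Loss(criterion),
}

evaluator = create_supervised_evaluator(
    model,
    metrics=metrics,
    output_transform=lambda x, y, y_pred: {"x": x, "y": y, "y_pred": y_pred},
)
```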
And if we are in a use-case where the user's criterion requires some kwargs, i.e. `criterion(y_pred, y, **kwargs)`, then our code should work almost like you suggested above; see the sketch after this comment.

Please let me know if it is still unclear.
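A sketch of that kwargs use-case, assuming the extra arguments are carried in the output dict under a `"criterion_kwargs"` key matching the proposed `required_output_keys` (the `sample_weight` value is a hypothetical example):

```python
from ignite.engine import create_supervised_evaluator
from ignite.metrics import Loss

# the extra criterion arguments travel in the output dict under the
# "criterion_kwargs" key and are forwarded to the criterion as
# criterion(y_pred, y, **criterion_kwargs)
evaluator = create_supervised_evaluator(
    model,
    metrics={"Loss": Loss(criterion)},
    output_transform=lambda x, y, y_pred: {
        "y_pred": y_pred,
        "y": y,
        "criterion_kwargs": {"weight": sample_weight},  # hypothetical extra arg
    },
)
```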