
[metrics] Automatic reduction of metrics from several validation steps


🚀 Feature

As discussed on Slack, it would be nice to implement this. More detail below.

Motivation

To avoid the user having to do this by hand (note that torch.cat expects a list or tuple of tensors, not a generator):

    logits = torch.cat([x['logits'] for x in output])
    labels = torch.cat([x['labels'] for x in output])
    # ... and so on for every key

Pitch

Something like this:

    def collate_metrics(self, output):
        """Collate the outputs from several validation steps into one dict."""
        collated_output = {}
        keys = output[0].keys()
        for key in keys:
            tensor_dim = output[0][key].dim()
            if tensor_dim > 0:
                # Concatenate batched tensors along the first dimension
                collated_output[key] = torch.cat([x[key] for x in output])
            else:
                # Reduce scalars (0-dim tensors) by taking the mean
                collated_output[key] = torch.stack([x[key] for x in output]).mean()
        return collated_output
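
For illustration, a hypothetical way to use this inside a LightningModule could look as follows; loss_fn and the dict keys are just assumptions for the sketch, which assumes each validation step returns a dict of tensors:

    def validation_step(self, batch, batch_idx):
        x, y = batch
        logits = self(x)
        loss = self.loss_fn(logits, y)
        # Return per-step tensors; collate_metrics combines them at epoch end
        return {'logits': logits, 'labels': y, 'loss': loss}

    def validation_epoch_end(self, outputs):
        collated = self.collate_metrics(outputs)
        # 'logits' and 'labels' arrive concatenated across all batches,
        # while the 0-dim 'loss' has been reduced to its mean
        acc = (collated['logits'].argmax(dim=-1) == collated['labels']).float().mean()
        return {'val_loss': collated['loss'], 'val_acc': acc}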

Alternatives

The method above can simply be added to a LightningModule manually and used as-is.

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 8 (8 by maintainers)

Top GitHub Comments

1 reaction
SkafteNicki commented, Sep 16, 2020

With PR #3245 merged, this is now solved. Each metric now has an aggregated property that contains the aggregated metric value over all the data seen so far. In practice, you can use it like this in Lightning:

def validation_step(self, batch, batch_idx):
    x, y = batch
    ypred = self(x)
    loss = self.loss_fn(ypred, y)
    val = self.metric(ypred, y)  # calling the metric updates its internal state
    return loss  # no need to return the value of the metric

def validation_epoch_end(self, validation_step_outputs):
    aggregated_metric = self.metric.aggregated
    return aggregated_metric

Closing this issue.

1 reaction
Borda commented, Mar 26, 2020

I guess that if we add metrics as classes, as discussed in #973, we could define a custom reduction method for each one, right?
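
For illustration only, a hypothetical class-based metric along those lines might expose an overridable reduction hook like this; the names Metric, update, reduce, and aggregated are assumptions for the sketch, not the actual API from #973:

    import torch

    class Metric:
        """Hypothetical base class (a sketch, not the actual #973 API).

        Subclasses override reduce() to customize how per-step values
        are combined into a single aggregated value.
        """

        def __init__(self):
            self._values = []

        def update(self, value):
            # Collect one value per validation/training step
            self._values.append(value)

        def reduce(self, values):
            # Default reduction: mean of the collected scalar tensors
            return torch.stack(values).mean()

        @property
        def aggregated(self):
            return self.reduce(self._values)

    class SumMetric(Metric):
        def reduce(self, values):
            # Custom reduction: sum instead of mean
            return torch.stack(values).sum()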

