
Transforming metric values

See original GitHub issue

❓ Questions/Help/Support

The documentation of Metric says that the return value of compute can be Any, so I’m trying to do this in a single-pass evaluation over the whole validation data loader.



# Imports assumed by this snippet (not shown in the original post):
from ignite.metrics import Metric
from ignite.metrics.metric import reinit__is_reduced, sync_all_reduce
from scipy.sparse import coo_matrix
from sklearn import metrics


class SuperMetrics(Metric):

    def __init__(self, num_labels, output_transform=lambda x: x, device=None):
        self.num_labels = num_labels
        self._y = None
        self._y_pred = None
        self._num_drugs = None
        super(SuperMetrics, self).__init__(output_transform=output_transform,
                                           device=device)

    def compute_metrics(self, y, y_pred):  # pylint: disable=no-self-use
        # coo_matrix sums duplicate entries, so binarize after densifying.
        y = y.toarray()
        y_pred = y_pred.toarray()
        y[y > 0] = 1
        y_pred[y_pred > 0] = 1

        hamming_loss = metrics.hamming_loss(y, y_pred)
        macro_f1 = metrics.f1_score(y, y_pred, average='macro')
        macro_precision = metrics.precision_score(y, y_pred, average='macro')
        macro_recall = metrics.recall_score(y, y_pred, average='macro')
        micro_f1 = metrics.f1_score(y, y_pred, average='micro')
        micro_precision = metrics.precision_score(y, y_pred, average='micro')
        micro_recall = metrics.recall_score(y, y_pred, average='micro')

        return {
            'hamming_loss': hamming_loss,
            'macro_f1': macro_f1,
            'macro_precision': macro_precision,
            'macro_recall': macro_recall,
            'micro_f1': micro_f1,
            'micro_precision': micro_precision,
            'micro_recall': micro_recall
        }

    @reinit__is_reduced
    def reset(self):
        self._y = []
        self._y_pred = []
        self._num_drugs = []
        super(SuperMetrics, self).reset()

    @reinit__is_reduced
    def update(self, output):
        y, y_pred, num_drugs = output
        self._y += y
        self._y_pred += y_pred
        self._num_drugs += num_drugs

    @sync_all_reduce('_y', '_y_pred', '_num_drugs')
    def compute(self):
        num_examples = len(self._num_drugs)

        # Build sparse multi-hot label matrices from the accumulated sequences.
        rows = []
        y_columns, y_pred_columns = [], []
        for i, (y_sample, y_pred_sample, num_drug_sample) in enumerate(
                zip(self._y, self._y_pred, self._num_drugs)):
            rows += [i] * (num_drug_sample - 2)
            y_columns += y_sample[1:1 + num_drug_sample - 2]
            y_pred_columns += y_pred_sample[:num_drug_sample - 2]
        values = [1] * len(rows)
        y = coo_matrix((values, (rows, y_columns)),
                       shape=(num_examples, self.num_labels))
        y_pred = coo_matrix((values, (rows, y_pred_columns)),
                            shape=(num_examples, self.num_labels))

        return self.compute_metrics(y, y_pred)
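As an aside on why compute_metrics clips with y[y > 0] = 1: coo_matrix sums duplicate (row, column) pairs, so a label that appears twice in a sample would otherwise produce a count of 2 instead of a multi-hot indicator. A minimal sketch with toy data (not from the issue):

```python
import numpy as np
from scipy.sparse import coo_matrix

# Toy example: two samples, label vocabulary of size 4.
rows = [0, 0, 1]       # sample index, repeated once per predicted label
cols = [1, 1, 3]       # label indices; note the duplicate (0, 1) entry
values = [1] * len(rows)

y = coo_matrix((values, (rows, cols)), shape=(2, 4))

dense = y.toarray()    # duplicates are summed: entry (0, 1) is now 2
dense[dense > 0] = 1   # binarize, as compute_metrics does
```

After binarization, dense is a clean 0/1 indicator matrix suitable for sklearn's multi-label metrics.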

The problem is that if this metric is attached to the evaluator by passing metrics={'super': SuperMetrics(vocab_size)}, I get a nested metric value in engine.state.metrics. This is fine as long as I only print it to the terminal, but I cannot figure out a way to make it work with NeptuneLogger.

neptune_logger.attach(
    evaluator,
    log_handler=OutputHandler(tag='val',
                              metric_names='all'),
    event_name=Events.EPOCH_COMPLETED(every=params['eval_freq']))

Is there a safe way and place to flatten engine.state.metrics? Or should I even do this? Is there any advice on computing all these metrics at once using ignite? Thanks!
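One possible approach (a sketch, not an official ignite API) is to flatten the nested dict into scalar entries before the logger's event fires, since OutputHandler generally expects scalar metric values. flatten_metrics and the handler below are hypothetical names for illustration:

```python
def flatten_metrics(metrics, parent_key='', sep='_'):
    """Recursively flatten nested metric dicts into scalar entries,
    joining keys with `sep` (e.g. {'super': {'macro_f1': 0.5}}
    becomes {'super_macro_f1': 0.5})."""
    flat = {}
    for key, value in metrics.items():
        name = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            flat.update(flatten_metrics(value, name, sep))
        else:
            flat[name] = value
    return flat


# Hypothetical wiring: rewrite engine.state.metrics in place after each
# evaluation run, before NeptuneLogger's OutputHandler reads it:
#
# @evaluator.on(Events.COMPLETED)
# def _flatten(engine):
#     engine.state.metrics = flatten_metrics(engine.state.metrics)
```

Whether Events.COMPLETED fires before the attached logger handler depends on handler registration order, so this is something to verify in your setup.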

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 13 (10 by maintainers)

Top GitHub Comments

2 reactions
liebkne commented, Apr 22, 2020

@vfdev-5 Thanks for correcting me about sync_all_reduce. I just copied that piece of code from the documentation and haven’t completely dug into it yet… 😅 I switched from lightning to ignite yesterday and am quite happy with ignite! About the memory issue: I also read the documentation on the multi-label case before I implemented this ‘SuperMetric’, but I can’t find a better way right now…

1 reaction
liebkne commented, Apr 23, 2020

😅 It took me some time to learn how to write tests… I’ve submitted a PR for this.

