
Mode for key_metric


Is your feature request related to a problem? Please describe.
I think the Workflow or CheckpointSaver needs a mode option for key_metric and additional_metrics, so it can handle both higher-is-better metrics like accuracy and lower-is-better metrics like MSE.

Describe the solution you'd like
Here is the Keras version:

if mode == 'min':
    self.monitor_op = np.less
    self.best = np.Inf
elif mode == 'max':
    self.monitor_op = np.greater
    self.best = -np.Inf
else:  # 'auto' mode: guess the direction from the metric name
    if 'acc' in self.monitor or self.monitor.startswith('fmeasure'):
        self.monitor_op = np.greater
        self.best = -np.Inf
    else:
        self.monitor_op = np.less
        self.best = np.Inf
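The Keras logic above can be sketched as a standalone helper. This is a minimal, framework-agnostic illustration; resolve_mode is a hypothetical name, not an API from Keras, Ignite, or MONAI:

    import numpy as np

    def resolve_mode(mode, monitor):
        """Return (comparison_op, initial_best) for a monitored metric.

        Mirrors the quoted Keras behaviour: explicit 'min'/'max' modes,
        with 'auto' guessing the direction from the metric name.
        """
        if mode == 'min':
            return np.less, np.inf
        if mode == 'max':
            return np.greater, -np.inf
        # 'auto': accuracy-like and f-measure metrics are maximized,
        # everything else (loss, MSE, ...) is minimized.
        if 'acc' in monitor or monitor.startswith('fmeasure'):
            return np.greater, -np.inf
        return np.less, np.inf

A checkpoint handler would then call, e.g., resolve_mode('auto', 'val_mse') and save whenever monitor_op(current, best) is true.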

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 5 (4 by maintainers)

Top GitHub Comments

1 reaction
vfdev-5 commented, Oct 20, 2020

Hi @Nic-Ma, here is an example of using Ignite's Checkpoint with a score_function that turns MSE into an increasing score:

from ignite.engine import Engine, Events
from ignite.handlers import Checkpoint, DiskSaver, global_step_from_engine

trainer = ...
evaluator = ...

def score_function(engine):
    return -1.0 * engine.state.metrics['mse']

to_save = {'model': model}
handler = Checkpoint(
    to_save, 
    DiskSaver('/tmp/models', create_dir=True), 
    n_saved=2,
    filename_prefix='best', 
    score_function=score_function, 
    score_name="neg_val_mse",
    global_step_transform=global_step_from_engine(trainer)
)
evaluator.add_event_handler(Events.COMPLETED, handler)

> ["best_model_9_neg_val_mse=-0.15.pt", "best_model_10_neg_val_mse=-0.08.pt", ]

HTH

1 reaction
vfdev-5 commented, Oct 17, 2020

As far as I understand, the feature request is for two explicit modes plus an "auto" mode for comparing metrics when deciding which checkpoint to save. In Ignite we assume that a larger score is better, and for metrics like MSE (where lower is better, unlike accuracy, where higher is better) we ask the user to provide an inverted score so that the best model still has the largest score.

I think an auto-magic mode is better avoided, as it could lead to unexpected errors or to users misunderstanding how it behaves.
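The "invert the score" convention described above can be shown without any framework. This is a minimal sketch of the idea; neg_mse_score and the metric dicts are illustrative, not from any library:

    def neg_mse_score(metrics):
        # Negating a lower-is-better metric turns "minimize MSE"
        # into "maximize score", so a single larger-is-better
        # comparison rule covers both kinds of metric.
        return -metrics['mse']

    best_score = float('-inf')
    for epoch_metrics in [{'mse': 0.30}, {'mse': 0.15}, {'mse': 0.22}]:
        score = neg_mse_score(epoch_metrics)
        if score > best_score:
            best_score = score  # a checkpoint handler would save here

Here the best checkpoint corresponds to the epoch with mse=0.15, i.e. best_score == -0.15, matching the neg_val_mse filenames in the Checkpoint example above.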
