
Logging metrics at different rates drops metrics / has inconsistent results.

See original GitHub issue

wandb version 0.9.4, Python 3.7.8, Linux

Description

As far as I can tell it is not possible to log metrics at different rates. My use case is as follows (though I’m sure a simpler one could be contrived):

  • Generate metrics at every epoch
  • Every K epochs, request an async calculation of a time-consuming metric (e.g. Fréchet Inception Distance, as in https://github.com/jramapuram/async_fid)
  • The async callback logs its metrics after the current epoch has already passed, but the data is never recorded:
test-0[Epoch 41][1600 samples][4.59 sec]:  Loss: 63.7088 -ELBO: 63.7088 NLL: 67.5594 KLD: -3.8506
wandb: WARNING Adding to old History rows isn't currently supported.  Step 20 < 42; dropping {'train_precision': 0.2194}.
wandb: WARNING Adding to old History rows isn't currently supported.  Step 20 < 42; dropping {'train_recall': 0.0014906832}.
wandb: WARNING Adding to old History rows isn't currently supported.  Step 20 < 42; dropping {'train_density': 0.06182}.
wandb: WARNING Adding to old History rows isn't currently supported.  Step 20 < 42; dropping {'train_coverage': 0.028074535}.
wandb: WARNING Adding to old History rows isn't currently supported.  Step 20 < 42; dropping {'train_fid': 92.375404}.
train-0[Epoch 42][4864 samples][58.90 sec]:  Loss: 56.6939 -ELBO: 56.6939 NLL: 60.5773 KLD: -3.8834
test-0[Epoch 42][1600 samples][4.53 sec]:  Loss: 63.7523 -ELBO: 63.7523 NLL: 67.6258 KLD: -3.8735
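
For context, here is a minimal sketch of the logging pattern that triggers these warnings. It is an illustration, not the original code: compute_fid, the project name, and the constants are placeholders.

import random
import threading
import time

import wandb

K, NUM_EPOCHS = 20, 50  # illustrative values

def compute_fid():
    # Stand-in for an expensive metric that finishes several epochs later.
    time.sleep(5)
    return random.random()

def slow_metric_callback(epoch):
    fid = compute_fid()
    # By the time this runs, the run's internal step has advanced past
    # `epoch`, so the row is rejected ("Step 20 < 42; dropping ...").
    wandb.log({"train_fid": fid}, step=epoch)

wandb.init(project="async-metrics-demo")  # placeholder project name
for epoch in range(NUM_EPOCHS):
    wandb.log({"test_loss": random.random()}, step=epoch)  # fast per-epoch metrics
    if epoch % K == 0:
        threading.Thread(target=slow_metric_callback, args=(epoch,)).start()

The fast per-epoch calls keep advancing the run’s step, so by the time the thread finishes, step=epoch points at a history row that has already been finalized, and wandb drops the write.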

Is there a workaround to this?

Issue Analytics

  • State: closed
  • Created 3 years ago
  • Comments:13 (4 by maintainers)

Top GitHub Comments

1 reaction
lukas commented, Aug 17, 2020

Yes - our system is designed to make this work.
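
For readers on newer clients: later releases of the wandb library (0.12 and up) added define_metric, which lets a metric be plotted against its own step axis instead of the global one, so late-arriving async values are no longer dropped. A sketch under that assumption; the metric and axis names are illustrative:

import wandb

run = wandb.init(project="async-metrics-demo")  # placeholder project name

# Decouple the slow metric from the global step by giving it its own x-axis.
run.define_metric("fid_epoch")
run.define_metric("train_fid", step_metric="fid_epoch")

def on_fid_ready(epoch, fid):
    # Log the epoch as an ordinary value instead of passing step=epoch;
    # wandb assigns its own monotonic step and plots train_fid vs. fid_epoch.
    run.log({"train_fid": fid, "fid_epoch": epoch})

On older clients such as 0.9.4, a similar effect can be had by logging the epoch as a regular field with no step argument and selecting it as the x-axis in the UI.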

0 reactions
sydholl commented, Jan 6, 2022

In the past year we’ve majorly reworked the CLI and UI for Weights & Biases. We’re closing issues older than 6 months. Please comment to reopen.

Read more comments on GitHub >

Top Results From Across the Web

Logging metrics at different rates drops metrics / has ... - GitHub
As far as I can tell it is not possible to log metrics at different rates. My use case is as follows (but...
Read more >
Troubleshoot log-based metrics - Google Cloud
This page provides troubleshooting information for common scenarios when using log-based metrics in Cloud Logging.
Read more >
Amazon CloudWatch FAQs - Amazon Web Services (AWS)
Logs Insights is fully integrated with CloudWatch, enabling you to manage, explore, and analyze your logs. You can also leverage CloudWatch Metrics, Alarms...
Read more >
Metrics Data Model | OpenTelemetry
The aggregation temporality is used to understand the context in which the sum was calculated. When the aggregation temporality is “delta”, we expect...
Read more >
You are the PM for a streaming video service. You come into ...
The issue could also be that the metrics we are grabbing is incorrect. external factors include: a new competitor has joined into the...
Read more >
