Can't add evaluation logs to a finished run
See original GitHub issue.

It seems to be impossible to log to a run once it has been marked “finished”. Shouldn’t it be possible to run further evaluations on a finished training run?
Snippet from eval.py:

    import wandb

    run = None
    if tb_log and arguments.platform == 'slurm':  # only do this once for a multi-GPU job
        api = wandb.Api()
        runs = api.runs('cvpr_2021/pcdet')
        # try to find the training run on W&B (it may not exist)
        for r in runs:
            if r.name == arguments.folder.split('/')[-1]:
                run = r
                run.dir = output_dir
                break
    if run:
        step = int(accumulated_iter)  # accumulated_iter is sometimes a string, and W&B can't handle that as a step
        run.log({identifier: value}, step=step)  # fails on a finished run, see the traceback below
Resulting error message:

        run.log({identifier: value}, step=step)
      File "/home/martin/anaconda3/envs/PCDet/lib/python3.6/site-packages/wandb/apis/public.py", line 530, in __getattr__
        "'{}' object has no attribute '{}'".format(repr(self), name)
    AttributeError: '<Run cvpr_2021/pcdet/3g4f3tpq (finished)>' object has no attribute 'log'
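For context, Run objects obtained through wandb.Api() are read-only as far as metric logging is concerned; new points can only be appended by resuming the run with wandb.init(). A minimal sketch of that workaround, assuming the run id from the traceback above (the metric name and step value are placeholders):

    import wandb

    # Resume the finished training run; resume='must' raises if no run
    # with this id exists instead of silently creating a new one.
    run = wandb.init(
        entity='cvpr_2021',
        project='pcdet',
        id='3g4f3tpq',  # id of the finished run, taken from the traceback
        resume='must',
    )

    # Steps must be plain ints and must not decrease; W&B drops
    # out-of-order points with a warning.
    run.log({'eval/recall': 0.9}, step=80000)  # placeholder metric and step
    run.finish()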
Note: I also encountered another issue — accumulated_iter sometimes arrives as a string, which is why the snippet above casts it before logging:

    step = int(accumulated_iter)  # accumulated_iter is sometimes a string, and W&B can't handle that as a step
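A slightly more defensive version of that cast, as a sketch (as_step is a hypothetical helper, not part of the original code):

    def as_step(value):
        # Coerce a step that may arrive as str, float, or int into the
        # plain int that wandb expects; fail loudly on anything else.
        try:
            return int(value)
        except (TypeError, ValueError) as err:
            raise ValueError(f'cannot use {value!r} as a wandb step') from err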
Issue Analytics
- Created 3 years ago
- Comments: 9 (2 by maintainers)
Top GitHub Comments
Issue-Label Bot is automatically applying the label "bug" to this issue, with a confidence of 0.74. Please mark this comment with 👍 or 👎 to give our bot feedback!
Hi @lucasdavid,
You are right, artifacts can be used to store and retrieve data across one or more runs. Two runs can be “linked” by an artifact in the sense that a “training run” logs an artifact (such as a model) and an “evaluation run” then uses that same artifact for evaluation.
However, as @MartinHahner mentioned, you can also resume runs if you would like to evaluate after your training process has completed but still have all your metrics present in one run.
Thanks, Ramit
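To illustrate the artifact-based linking described above, a minimal sketch — the artifact name, checkpoint path, and metric are placeholders, and the project name is taken from the snippet in the issue:

    import wandb

    # Training run: log the trained model as an artifact.
    train_run = wandb.init(project='pcdet', job_type='train')
    model_art = wandb.Artifact('pcdet-model', type='model')
    model_art.add_file('checkpoint_epoch_80.pth')  # placeholder checkpoint path
    train_run.log_artifact(model_art)
    train_run.finish()

    # Evaluation run: consume the same artifact in a fresh run.
    eval_run = wandb.init(project='pcdet', job_type='eval')
    artifact = eval_run.use_artifact('pcdet-model:latest')
    model_dir = artifact.download()  # local directory containing the checkpoint
    # ... run evaluation with the downloaded weights ...
    eval_run.log({'eval/recall': 0.9})  # placeholder metric
    eval_run.finish()

Because use_artifact() records the dependency, the evaluation run shows up as downstream of the training run in the W&B artifact lineage graph.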