
BUG in tools/train.py and Confusion about the method of calculating ACC


Checklist

  1. I have searched related issues but cannot get the expected help.
  2. The bug has not been fixed in the latest version.

Describe the bug

Hi, dear authors. I found that when training with multiple GPUs (dist_train), no matter how many GPUs I pass in, cfg.gpu_ids in the config is always equal to range(0, 1) and only 1 GPU works. mmdet's dist_train does not have this problem.

Reproduction

  1. What command or script did you run?

    bash tools/dist_train.sh configs/mine/new_deeplabv3plus.py 2 --work-dir my_work_mmseg

  2. Did you make any modifications on the code or config? Did you understand what you have modified? Yes

  3. What dataset did you use? VOC format

Bug fix

It should be solved by adding the three lines shown in the screenshot below after line 96 of train.py (a sketch of them follows). This is exactly the difference between train.py in mmseg and train.py in mmdet.

[screenshot, 2021-03-16: the three lines to add to tools/train.py]
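The screenshot itself is not preserved here; the following is a minimal sketch of what those three lines most likely are, assuming they match the corresponding block in mmdet's tools/train.py (which this issue names as the difference). The surrounding if/else already exists in mmseg's train.py:

    # requires: from mmcv.runner import get_dist_info, init_dist
    if args.launcher == 'none':
        distributed = False
    else:
        distributed = True
        init_dist(args.launcher, **cfg.dist_params)
        # the three added lines: re-set gpu_ids from the distributed
        # world size so every launched process/GPU is actually used
        _, world_size = get_dist_info()
        cfg.gpu_ids = range(world_size)

Without the last two lines, cfg.gpu_ids keeps its single-GPU default of range(0, 1) regardless of how many processes dist_train.sh launches, which matches the behavior described above.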

Confusion about the method of calculating Acc

I have looked at the way acc is calculated (mmsegmentation/mmseg/core/evaluation/metrics.py), and I am curious: isn't the way mmseg calculates acc actually calculating recall?

[screenshot of metrics.py, 2021-03-16]

  • total_area_label = ground truth = TP + FN
  • total_area_intersect = TP
  • total_area_pred_label = TP + FP
  • total_area_union = TP + FN + FP

In metrics.py, acc = total_area_intersect / total_area_label = TP / (TP + FN), which is exactly Recall = TP / (TP + FN), whereas Precision = TP / (TP + FP). Should it be changed to acc = total_area_intersect / total_area_pred_label?
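To make the distinction concrete, here is a minimal numeric sketch (the pixel counts are invented for illustration; only the formulas mirror the quantities defined above) showing that intersect / label is per-class recall, while intersect / pred_label is per-class precision:

    import numpy as np

    # hypothetical per-class pixel counts for a 3-class problem
    total_area_intersect = np.array([80, 50, 20])    # TP per class
    total_area_label = np.array([100, 60, 40])       # TP + FN (ground truth)
    total_area_pred_label = np.array([90, 70, 25])   # TP + FP (predictions)

    acc = total_area_intersect / total_area_label    # what metrics.py computes
    precision = total_area_intersect / total_area_pred_label

    print(acc)        # [0.8    0.8333 0.5   ]  == TP / (TP + FN), i.e. recall
    print(precision)  # [0.8889 0.7143 0.8   ]  == TP / (TP + FP), the proposal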

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 5

Top GitHub Comments

1 reaction
lorinczszabolcs commented, Jul 11, 2022

> Hi. acc here is indeed recall.
>
> As for the GPU issue, would you mind creating a PR to fix it?

Hi! Is this the intended / correct way to calculate it? In my mind, accuracy shouldn't be equal to recall, but I might be wrong. Was this corrected in a subsequent PR, or did I miss it?

0 reactions
zhongqiu1245 commented, Apr 9, 2021

thank you
