
clarification on norm loss calculation; possible bug?

See original GitHub issue

When I look at the image at https://github.com/JianqiangWan/Super-BPD/blob/master/post_process/2009_004607.png

the norm_pred seems to decrease to blue (< 0.5) in the center of the cat's face (farther from the boundary). This also happens at all points midway from the boundary of the cat, which is extremely different from the norm_gt.

When I look at the code at

https://github.com/JianqiangWan/Super-BPD/blob/master/vis_flux.py#L45

that seems like the correct calculation for the norm.
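(For reference, a rough sketch of that calculation as I understand it; the function name and the (2, H, W) channel layout are my assumptions, not the repo's exact code:)

    import numpy as np

    def flux_norm(flux):
        # per-pixel L2 norm of a 2-channel flux field of shape (2, H, W),
        # i.e. norm = sqrt(x**2 + y**2)
        return np.sqrt(flux[0] ** 2 + flux[1] ** 2)

    # e.g. a field that is unit-norm everywhere:
    flux = np.zeros((2, 3, 3))
    flux[0] = 1.0
    print(flux_norm(flux))  # all ones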

I’ve run this on a few other examples, and a similar thing seems to happen.

This led me to investigate the implementation of the loss.

If I’m understanding the loss as defined in the paper,

[paper's norm loss, roughly: L_norm = sum over pixels p of w(p) * ||pred_flux(p) - gt_flux(p)||^2]

that means norm_loss should be built from pred_flux - gt_flux, as in https://github.com/JianqiangWan/Super-BPD/blob/master/train.py#L42:

norm_loss = weight_matrix * (pred_flux - gt_flux)**2

However, this happens after https://github.com/JianqiangWan/Super-BPD/blob/master/train.py#L39, which, I believe, is incorrect.

I believe that train.py#L39 needs to happen after train.py#L42; otherwise, the norm_loss as written is actually training the norm values to be angle values.

This makes sense: if we look at the norm_pred outputs, they look more similar to the norm_angle outputs than they should.
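To make the ordering concrete, here is a minimal sketch of my reading (the normalize helper and the toy shapes are mine, and I'm assuming train.py#L39 is where pred_flux gets normalized):

    import torch

    def normalize(flux, eps=1e-9):
        # hypothetical helper: scale each 2-channel vector to unit length
        return flux / (flux.norm(p=2, dim=1, keepdim=True) + eps)

    pred_flux = torch.randn(1, 2, 4, 4)           # toy stand-ins for the real tensors
    gt_flux = normalize(torch.randn(1, 2, 4, 4))  # gt is a unit vector field
    weight_matrix = torch.ones(1, 4, 4)

    # my reading of the current order (L39 before L42): norms collapse to 1
    # before the loss sees them, so only direction can be penalized
    norm_loss_current = (weight_matrix * (normalize(pred_flux) - gt_flux) ** 2).sum()

    # the order I would expect: the loss sees the raw magnitudes
    norm_loss_expected = (weight_matrix * (pred_flux - gt_flux) ** 2).sum()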

HOWEVER, I could be completely misunderstanding the norm_loss term, so please let me know if I am! 🤞

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 6 (3 by maintainers)

Top GitHub Comments

1 reaction
JianqiangWan commented, Jun 29, 2020

We define the gt flux at each pixel as a two-dimensional unit vector pointing from its nearest boundary to the pixel, so the gt flux vectors around medial points have nearly opposite directions. It is difficult for neural networks to learn such sharp changes, and the network is more inclined to produce a smooth transition (e.g., going from -1 to 1, the network tends to output -1, -0.5, 0, 0.5, 1).

For the norm loss, gt flux is a two-dimensional unit vector field, and pred flux does not need to be normalized. For the angle loss, normalizing pred flux inside or outside of torch.acos gives the same result.
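(A minimal runnable sketch of the scheme described above; the toy shapes, the eps term, and the clamping are assumptions for illustration, not the repo's exact code:)

    import torch

    pred_flux = torch.randn(1, 2, 4, 4)   # toy stand-ins, shape (batch, 2, H, W)
    gt_flux = torch.randn(1, 2, 4, 4)
    weight_matrix = torch.ones(1, 4, 4)

    # gt flux is a unit vector field (eps avoids division by zero)
    gt_flux = gt_flux / (gt_flux.norm(p=2, dim=1, keepdim=True) + 1e-9)

    # norm loss: raw (unnormalized) pred flux against the unit gt flux,
    # so the prediction is pulled toward unit norm everywhere
    norm_loss = (weight_matrix * (pred_flux - gt_flux) ** 2).sum()

    # angle loss: pred flux normalized first, then compared by angle
    pred_unit = pred_flux / (pred_flux.norm(p=2, dim=1, keepdim=True) + 1e-9)
    cos_sim = (pred_unit * gt_flux).sum(dim=1).clamp(-1 + 1e-6, 1 - 1e-6)
    angle_loss = (weight_matrix * torch.acos(cos_sim) ** 2).sum()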

0 reactions
JianqiangWan commented, Jun 29, 2020

We need two channels (x, y) to express a flux field. The gt flux around medial points can be roughly expressed as (x1, y1) and (-x1, -y1), since the vectors have opposite directions. From -x1 to x1 (or -y1 to y1), the network hardly ever learns the sharp transition, tending instead toward a smooth one. Since norm = sqrt(x**2 + y**2), the predicted norm between medial points (x to -x) or boundary points (-x to x) is very small, but the angle is still correct (we only use the angle information for image segmentation). Again, the gt norm at each pixel is 1; the 'norm gt' in the picture is a distance transform map before normalization.
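(A quick numeric illustration of that effect; the vector (0.6, 0.8) is just an arbitrary example:)

    import numpy as np

    x1, y1 = 0.6, 0.8                    # an arbitrary unit vector (x1, y1)
    ts = np.linspace(-1.0, 1.0, 5)       # smooth transition across a medial point
    flux = np.stack([ts * x1, ts * y1])  # goes (-x1, -y1) -> (0, 0) -> (x1, y1)

    norms = np.sqrt(flux[0] ** 2 + flux[1] ** 2)
    print(norms)   # [1.  0.5 0.  0.5 1.] -- the norm dips to 0 at the midpoint

    angles = np.degrees(np.arctan2(flux[1], flux[0]))
    print(angles)  # stays on the same axis, flipping 180 degrees at the midpoint,
                   # so the direction information survives even as the norm collapses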

Read more comments on GitHub >

