more dice score definition
Subject of the feature
For the Dice score, the current definition is https://github.com/DeepRegNet/DeepReg/blob/main/deepreg/model/loss/label.py#L149. It corresponds to the definition in https://arxiv.org/abs/1707.03237, whereas in https://arxiv.org/abs/1606.04797 the terms in the denominator are squared.
The two forms are identical in the binary case, since the square of 0 or 1 is itself. But when `y_true` and `y_pred` are probabilities, the two differ, and as far as I know there is no closed-form ground truth: both formulations are approximations.
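To make the two variants concrete, here is a minimal NumPy sketch (not DeepReg's actual implementation in label.py; the `soft_dice` name and `squared` flag are hypothetical):

```python
import numpy as np

def soft_dice(y_true, y_pred, squared=False, eps=1e-7):
    """Soft Dice between two probability maps.

    squared=False follows https://arxiv.org/abs/1707.03237,
    squared=True follows https://arxiv.org/abs/1606.04797 (V-Net),
    which squares the denominator terms. The two agree on binary inputs.
    """
    y_true = np.asarray(y_true, dtype=np.float64).ravel()
    y_pred = np.asarray(y_pred, dtype=np.float64).ravel()
    numerator = 2.0 * np.sum(y_true * y_pred)
    if squared:
        denominator = np.sum(y_true ** 2) + np.sum(y_pred ** 2)
    else:
        denominator = np.sum(y_true) + np.sum(y_pred)
    return (numerator + eps) / (denominator + eps)

# On binary masks the two options coincide; on soft probabilities they differ.
t = np.array([0.0, 1.0, 1.0, 0.0])
p = np.array([0.2, 0.9, 0.7, 0.1])
print(soft_dice(t, t > 0.5), soft_dice(t, t > 0.5, squared=True))  # identical
print(soft_dice(t, p), soft_dice(t, p, squared=True))              # differ
```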
In https://en.wikipedia.org/wiki/Sørensen–Dice_coefficient, the Dice score is defined as 2TP / (2TP + FP + FN). Consider N i.i.d. voxels with `y_true = p`, meaning each voxel is foreground with probability 100*p%; similarly let `y_pred = q`. Then in expectation TP = pqN, FP = (1-p)qN, FN = p(1-q)N, so the Dice score is 2pq / (2pq + (1-p)q + p(1-q)) = 2pq / (p+q), which is exactly what the non-squared version gives for constant probability maps.
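A quick Monte Carlo sanity check of this expectation argument (plain NumPy; the values of p, q, N are arbitrary, and this is not part of DeepReg):

```python
import numpy as np

rng = np.random.default_rng(0)
p, q, N = 0.3, 0.6, 1_000_000  # arbitrary foreground probabilities

y_true = rng.random(N) < p   # each voxel is foreground with probability p
y_pred = rng.random(N) < q   # prediction is foreground with probability q

tp = np.sum(y_true & y_pred)
fp = np.sum(~y_true & y_pred)
fn = np.sum(y_true & ~y_pred)

empirical = 2 * tp / (2 * tp + fp + fn)
closed_form = 2 * p * q / (p + q)   # non-squared soft Dice at constant p, q
print(empirical, closed_form)       # ~0.4 for these values, matching closely
```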
Therefore it would be nice to also provide the squared option for anyone who wants it; there is no need to change the default definition.
Issue Analytics
- Created 3 years ago
- Comments: 7 (7 by maintainers)
Top GitHub Comments
There is no bullet-proof answer to the question of how to implement the soft Dice for deep learning purposes. Different justifications lead to different choices (square vs. non-square, which epsilons and where, foreground only or an average of foreground and background Dice, how to handle multiple classes, batch version, etc.). Many of the issues arise when the ground truth has 0 foreground voxels. At least in our experience, the batch version of the soft Dice really helps the training.
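To illustrate the batch-vs-per-sample distinction the comment alludes to, here is a small sketch (the `dice` helper is hypothetical, not DeepReg's code) showing how an empty ground truth distorts the per-sample average, while the batch version pools all voxels first:

```python
import numpy as np

def dice(y_true, y_pred, eps=1e-7):
    num = 2.0 * np.sum(y_true * y_pred)
    return (num + eps) / (np.sum(y_true) + np.sum(y_pred) + eps)

# Batch of 4 "images", one of which has an empty ground truth.
y_true = np.array([[1, 1, 0], [0, 0, 0], [1, 0, 0], [0, 1, 1]], dtype=float)
y_pred = np.array([[0.9, 0.8, 0.1], [0.2, 0.1, 0.1],
                   [0.7, 0.2, 0.1], [0.1, 0.9, 0.8]])

# Per-sample Dice, then averaged: the empty-foreground sample contributes
# a near-zero term whose value depends almost entirely on eps.
per_sample = np.mean([dice(t, p) for t, p in zip(y_true, y_pred)])

# Batch Dice: pool all voxels first, so empty samples are absorbed gracefully.
batch = dice(y_true.ravel(), y_pred.ravel())
print(per_sample, batch)
```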
If you are interested in that rabbit hole, here are some pointers:
- https://github.com/Project-MONAI/MONAI/issues/807
- https://arxiv.org/abs/1707.00478
- Section 3.6.1 in https://mediatum.ub.tum.de/doc/1395260/1395260.pdf
EDIT: a new ticket may be better - sorry