Issues testing data with certain images
I am successfully able to test the image 'test_14.png' through the network, but for 'test_13.png', after I run infer.py I get a warning saying the instance type is of background type and the image does not load. Also, when I run compute_stats.py I just get the output shown in https://github.com/vqdang/hover_net/issues/13#issuecomment-541759157, even though the directories are correct and I followed the steps.
This implies the input is somehow corrupt. Either way, it shouldn't produce this for test_14, should it? I got the correct output image from it and there were no errors during inference.
So just to check, since the paths are definitely right: how many test images should there be in the CoNSeP dataset? I had trouble unzipping the file, so I'm not sure whether it has become corrupt - I only get test_13 and test_14, which is perhaps causing the issue. Also, the output should give both .mat and .png files, right?
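(A quick way to rule out corruption from the bad unzip is to have Pillow try to verify each extracted PNG. This is just a generic sketch, not part of hover_net; the directory path is a placeholder.)

```python
# Sketch: verify that each extracted PNG can be opened without errors.
# The directory path below is a placeholder; point it at your CoNSeP test images.
import os
from PIL import Image

img_dir = "CoNSeP/Test/Images"  # placeholder path
for name in sorted(os.listdir(img_dir)):
    if not name.endswith(".png"):
        continue
    path = os.path.join(img_dir, name)
    try:
        with Image.open(path) as img:
            img.verify()  # raises if the file is truncated or corrupt
        print(f"{name}: OK")
    except Exception as exc:
        print(f"{name}: CORRUPT ({exc})")
```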
It could also be an error with NumPy, as when I run infer.py I get this:
```
FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated;
in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
```
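(For reference: this FutureWarning is commonly emitted when an older TensorFlow build is imported alongside NumPy >= 1.17. On its own it is usually harmless and is unlikely to explain the background-type warning. If you want to silence it, here is a minimal sketch - an assumption about your setup, not something from this repo - that only hides the message and does not change behavior:)

```python
# Sketch: silence the NumPy (type, 1) FutureWarning before importing TensorFlow.
# The warning is raised at import time, so the filter must be installed first.
import warnings

warnings.filterwarnings(
    "ignore",
    message=r"Passing \(type, 1\) or '1type' as a synonym of type is deprecated",
    category=FutureWarning,
)

import tensorflow as tf  # imported after the filter so it takes effect
```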
UPDATE: now I can't even seem to get test_14.png to work…
Top GitHub Comments
For the instance segmentation stats, we have added an argument to the `run_nuclei_inst_stat()` function. To print per-image stats, set `print_img_stats=True` in the call. This function is located here: https://github.com/vqdang/hover_net/blob/master/src/compute_stats.py#L206
Note, this is only for the instance segmentation metric.
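(To make that concrete, a minimal sketch of the call is below. The two directory arguments are assumptions based on the linked file, so check the actual signature of `run_nuclei_inst_stat()` in your checkout of src/compute_stats.py.)

```python
# Sketch: print per-image instance segmentation stats.
# The directory arguments are placeholders; verify them against
# run_nuclei_inst_stat() at src/compute_stats.py#L206.
from compute_stats import run_nuclei_inst_stat

run_nuclei_inst_stat(
    "output/consep/pred/",   # placeholder: directory of predicted .mat files
    "CoNSeP/Test/Labels/",   # placeholder: directory of ground-truth .mat files
    print_img_stats=True,    # print stats for each image, not just the aggregate
)
```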
In terms of the classification metric, the score is for the entire dataset - we don't calculate the stats for each image because that wouldn't make sense here. Nuclei types are spread out across the images and often don't all occur within a single image. For one image, if nuclei type `t` doesn't exist, F1 is always zero (no TP or FN can exist), but then how would you reflect the FP of that class in the overall score of the entire dataset? If you ignore it, you skew the actual performance of the model. However, you can report the raw TP, TN, FP, and FN for each nuclei type of each image. That way you won't miss the FP, but you lose the summarizing power of F1, and displaying them (4 metrics for 4 types means 16 columns to track for just 1 model) for comparison across many models won't be easy. A sketch of this per-image breakdown is given below.

I'm sorry, but please check this to deal with the .mat problem: https://github.com/vqdang/hover_net/issues/16#issuecomment-551842003 . We will resolve this sometime later to align all input formats.