Testing: AxonDeepSeg CLI doesn't test `default_TEM` model.
The CLI tests in AxonDeepSeg cover our `default_SEM` model. However, our `default_TEM` model is not tested by the CLI (see here).
Is there a particular reason we are not testing our TEM model, or is it just that these tests are skipped?
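For context, a minimal sketch of what a parametrized CLI test covering both default models could look like. This is an illustrative assumption, not the repo's actual test code: the `axondeepseg -t <modality> -i <image>` invocation follows the documented CLI, but the fixture paths would need to match the real test data.

```python
# Hypothetical sketch: parametrize one CLI test over both default models.
# The (modality, image) pairs below are placeholder assumptions; replace
# them with the repository's real test fixtures.
from pathlib import Path
import subprocess

import pytest

MODELS = [
    ("SEM", Path("test/segmentation/test_sem_image/image.png")),
    ("TEM", Path("test/segmentation/test_tem_image/image.png")),
]

@pytest.mark.parametrize("modality,image", MODELS, ids=["default_SEM", "default_TEM"])
def test_cli_segmentation(modality, image):
    # Invoke the CLI the same way a user would; '-t' selects the modality/model.
    result = subprocess.run(
        ["axondeepseg", "-t", modality, "-i", str(image)],
        capture_output=True,
        text=True,
    )
    assert result.returncode == 0, result.stderr
```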
Issue Analytics
- State:
- Created 3 years ago
- Comments: 7 (7 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
We could have more extensive tests, run e.g. once a day (nightly), that would include those functional tests. I think it is important to test all models -- what if, e.g., the model changes one day? Or if one library screws up the prediction of one particular model? We want to catch that.
A useful reference for the nightly cron job used by SCT:
Tagging @joshuacwnewton so he is aware of this discussion.
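One common way to keep the regular CI fast while still exercising every model once a day is a pytest marker plus a scheduled job. A minimal sketch, assuming a custom `nightly` marker (the marker name and test body are illustrative assumptions; SCT's actual cron setup may differ):

```python
# Hypothetical sketch: mark slow functional tests so the regular CI can skip
# them and a scheduled (nightly) run can select them with `pytest -m nightly`.
# The marker must be registered in pytest.ini / setup.cfg, e.g.:
#   [pytest]
#   markers = nightly: slow functional tests run once a day
import pytest

@pytest.mark.nightly
def test_all_models_functional():
    # A full functional segmentation run for every bundled model would go
    # here, so a model or dependency regression is caught within a day.
    ...
```

The regular CI would then run `pytest -m "not nightly"`, while the scheduled cron job runs `pytest -m nightly`.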
Good point! Maybe we should evaluate the Dice coefficient against the test datasets as an assertion for each model? It will fail whenever we update models, but at least we’ll be aware ASAP otherwise.
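A minimal sketch of that Dice-coefficient assertion, assuming the prediction and ground truth are binary NumPy masks of the same shape (the 0.85 threshold is an illustrative assumption, not a validated value):

```python
# Hypothetical sketch of a per-model Dice assertion against a test dataset.
# `prediction` and `ground_truth` are binary (0/1) masks of the same shape.
import numpy as np

def dice_coefficient(prediction: np.ndarray, ground_truth: np.ndarray) -> float:
    """Dice = 2*|A intersect B| / (|A| + |B|) for binary masks A and B."""
    intersection = np.logical_and(prediction, ground_truth).sum()
    total = prediction.sum() + ground_truth.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

def assert_model_quality(prediction, ground_truth, threshold=0.85):
    # Fails loudly whenever a model update (or a dependency change) degrades
    # the segmentation; the threshold here is an assumed placeholder.
    dice = dice_coefficient(prediction, ground_truth)
    assert dice >= threshold, f"Dice {dice:.3f} below threshold {threshold}"
```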