Annotating 2D images with MONAI Label and 3D Slicer
See original GitHub issue

Is there any MONAI Label app for annotating 2D (PNG, JPEG) images?
I have worked on annotating 3D (NIfTI) images with MONAI Label and 3D Slicer. It is a very nice product and I liked it.
I am exploring annotation of 2D images with MONAI Label, but I could not find any existing work on this. Any help would be appreciated, such as links to previous work or guidance on how to achieve it.
My current work:
I have made a few changes to the existing code in the deepedit/main.py file, such as the in_channels and spatial_dims of the UNet model. But I am facing errors related to image headers; it looks like my loaded dataset is still being treated as NIfTI.
Issue Analytics
- State:
- Created a year ago
- Reactions: 1
- Comments: 20
Top Results From Across the Web

How to use 2d images in 3d slicer for segmentation? #851
I tried to use some png files and annotated using segment editor. But the labels are saved in nifti format. So is there...

MONAI Label - Product Page
3D Slicer, OHIF, DSA, and QuPath. Whether you're annotating Radiology or Pathology images, MONAI Label has viewer integration to get you started quickly...

Build AI-Assisted Annotation Models with MONAI Label
MONAI Label is a server-client system that facilitates interactive medical image annotation by using AI. As a part of Project MONAI, ...

Image Segmentation - 3D Slicer documentation - Read the Docs
Segmentation of images (also known as contouring or annotation) is a procedure to delineate regions in the image, typically corresponding to anatomical ...

MONAI Label: A framework for AI-assisted Interactive ... - arXiv
... at reducing the time required to annotate 3D medical image ... Currently, MONAI Label readily supports locally installed (3DSlicer) and ...
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Actually, it should be very simple… for the Deepgrow 2D model, we train on 2D images… basically take the z dimension, get all the 2D slices, and train a model…
For pathology it’s all 2D… maybe WSI inference is the extra thing… which we don’t have to worry about in the case of smaller 2D images… It’s all about how you craft your pre-transforms to create the required input to train/infer a model…
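The "take the z dimension and get all the 2D slices" step above can be sketched with plain NumPy. The (H, W, Z) layout assumed here is common for NIfTI data loaded via nibabel; adjust the axis if your data differs:

```python
# Sketch: turn one 3D volume into a list of 2D slices for a 2D model.
# Assumes the volume is laid out (H, W, Z); adapt the axis otherwise.
import numpy as np

volume = np.random.rand(128, 128, 40)  # stand-in for a loaded NIfTI volume

# Iterate over the z dimension: each element is one (128, 128) slice.
slices = [volume[:, :, z] for z in range(volume.shape[2])]

print(len(slices), slices[0].shape)  # 40 (128, 128)
```

In a MONAI Label app, the equivalent logic would live in the pre-transforms so that the 2D model only ever sees single slices.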
Once you have the model ready as part of the MONAI Label server app… then you can try using the direct APIs at http://127.0.0.1:8000/ to run basic infer/train actions… and later you can see something similar working in any client that supports rendering those images/label masks.
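The "direct APIs" mentioned above are plain HTTP endpoints. A hedged sketch of composing such a request follows; the /infer/{model} path matches the MONAI Label server's FastAPI routing, but the model name "deepedit" and image id "image_01" are placeholders, so verify the real names against your running server's interactive API page:

```python
# Sketch: composing a request URL for the MONAI Label inference endpoint.
# "deepedit" and "image_01" are hypothetical placeholders.
from urllib.parse import urlencode

SERVER = "http://127.0.0.1:8000"

def infer_url(model: str, image_id: str) -> str:
    """Build the URL for a POST /infer/{model}?image=... request."""
    return f"{SERVER}/infer/{model}?" + urlencode({"image": image_id})

url = infer_url("deepedit", "image_01")
print(url)  # http://127.0.0.1:8000/infer/deepedit?image=image_01

# To actually send it (requires the server to be running):
#   import requests
#   resp = requests.post(url)
```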
If the 2D image has origin, spacing, or orientation information, or the image is large (>4GB), or uses a bit depth >8, then I would use NRRD/NIfTI/DICOM, because these 3D formats can store all of this data in a standard way, while consumer image file formats (PNG, JPEG, etc.) struggle.
However, if the user just works with uncalibrated RGB images (for example, photos), then it would make sense to allow MONAI Label to use that format and not require the user to convert to/from NIfTI.