Unable to fine-tune due to missing mask labels
Hi, I am currently fine-tuning the pre-trained model (epoch36.pth), but I am encountering an error whenever I load my custom dataset generated with LabelImg.
Traceback (most recent call last):
  File "tools/train.py", line 151, in <module>
    main()
  File "tools/train.py", line 147, in main
    meta=meta)
  File "/usr/local/lib/python3.6/dist-packages/mmdet-1.2.0+0f33c08-py3.6-linux-x86_64.egg/mmdet/apis/train.py", line 165, in train_detector
    runner.run(data_loaders, cfg.workflow, cfg.total_epochs)
  File "/usr/local/lib/python3.6/dist-packages/mmcv/runner/runner.py", line 384, in run
    epoch_runner(data_loaders[i], **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/mmcv/runner/runner.py", line 279, in train
    for i, data_batch in enumerate(data_loader):
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 345, in __next__
    data = self._next_data()
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 856, in _next_data
    return self._process_data(data)
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 881, in _process_data
    data.reraise()
  File "/usr/local/lib/python3.6/dist-packages/torch/_utils.py", line 394, in reraise
    raise self.exc_type(msg)
KeyError: Caught KeyError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
    data = fetcher.fetch(index)
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/usr/local/lib/python3.6/dist-packages/mmdet-1.2.0+0f33c08-py3.6-linux-x86_64.egg/mmdet/datasets/custom.py", line 132, in __getitem__
    data = self.prepare_train_img(idx)
  File "/usr/local/lib/python3.6/dist-packages/mmdet-1.2.0+0f33c08-py3.6-linux-x86_64.egg/mmdet/datasets/custom.py", line 145, in prepare_train_img
    return self.pipeline(results)
  File "/usr/local/lib/python3.6/dist-packages/mmdet-1.2.0+0f33c08-py3.6-linux-x86_64.egg/mmdet/datasets/pipelines/compose.py", line 24, in __call__
    data = t(data)
  File "/usr/local/lib/python3.6/dist-packages/mmdet-1.2.0+0f33c08-py3.6-linux-x86_64.egg/mmdet/datasets/pipelines/loading.py", line 147, in __call__
    results = self._load_masks(results)
  File "/usr/local/lib/python3.6/dist-packages/mmdet-1.2.0+0f33c08-py3.6-linux-x86_64.egg/mmdet/datasets/pipelines/loading.py", line 125, in _load_masks
    gt_masks = results['ann_info']['masks']
KeyError: 'masks'
I noticed in the config file that the training pipeline has mask loading enabled:
dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
Is there something to be done when annotating with LabelImg that you did differently to indicate the existence of mask labels? I followed the provided example, but I am still getting the error about masks. I also tried setting with_mask=False, but I honestly don't know how relevant that would be to the whole training process.
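For what it's worth, a minimal sketch of what turning off mask loading could look like in an mmdet 1.x config (the surrounding transforms are typical defaults, not copied from this project's actual config — treat them as placeholders):

```python
# Hypothetical mmdet 1.x train pipeline with mask loading disabled, so the
# LoadAnnotations step no longer looks up results['ann_info']['masks'].
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True, with_mask=False),  # was True
    dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),
    dict(type='RandomFlip', flip_ratio=0.5),
    dict(type='Normalize',
         mean=[123.675, 116.28, 103.53],
         std=[58.395, 57.12, 57.375],
         to_rgb=True),
    dict(type='Pad', size_divisor=32),
    dict(type='DefaultFormatBundle'),
    # 'gt_masks' is removed from the collected keys as well
    dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),
]
```

Note that disabling mask loading only silences the KeyError; a model with a mask head (e.g. Mask R-CNN variants) would still expect gt_masks at training time, so the model config would need to match.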
Example annotation from LabelImg:
<annotation>
  <folder>jpeg_images</folder>
  <filename>acc_2018_fs_008.jpg</filename>
  <path>/Users/rt/Desktop/99_annotated/jpeg_images/acc_2018_fs_008.jpg</path>
  <source>
    <database>Unknown</database>
  </source>
  <size>
    <width>4958</width>
    <height>7017</height>
    <depth>3</depth>
  </size>
  <segmented>0</segmented>
  <object>
    <name>borderless</name>
    <pose>Unspecified</pose>
    <truncated>0</truncated>
    <difficult>0</difficult>
    <bndbox>
      <xmin>993</xmin>
      <ymin>1020</ymin>
      <xmax>4223</xmax>
      <ymax>5479</ymax>
    </bndbox>
  </object>
  <object>
    <name>cell</name>
    <pose>Unspecified</pose>
    <truncated>0</truncated>
    <difficult>0</difficult>
    <bndbox>
      <xmin>3559</xmin>
      <ymin>1047</ymin>
      <xmax>4021</xmax>
      <ymax>1107</ymax>
    </bndbox>
  </object>
</annotation>
Thank you and I appreciate this awesome work by the way.
Issue Analytics
- Created: 3 years ago
- Comments: 5 (1 by maintainers)
Top GitHub Comments
We did not include segmentation vertex coordinates in the annotations. We used the LabelImg tool to annotate the images. You can find sample annotations here: https://drive.google.com/drive/folders/1ID1sTk1VKHCBeeDEC_BzHBGRE7JkC0qq?usp=sharing
We then converted all of these annotations to JSON files using a custom script. We will be releasing that script soon.
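The maintainers' script has not been released, but the conversion they describe can be sketched along these lines. This is a hypothetical converter (function name, category mapping, and the box-as-polygon trick are all assumptions, not the maintainers' actual code): it reads a LabelImg/Pascal VOC XML file and emits COCO-style annotation dicts, synthesising a rectangular segmentation polygon from each bounding box so that a 'segmentation' entry exists for mask loading.

```python
# Hypothetical LabelImg (Pascal VOC XML) -> COCO-style annotation converter.
# Since LabelImg produces only bounding boxes, each "mask" here is just the
# box itself, encoded as a 4-vertex polygon [x1,y1, x2,y1, x2,y2, x1,y2].
import xml.etree.ElementTree as ET

def voc_to_coco_anns(xml_str, category_ids):
    """Parse one VOC XML document and return a list of COCO annotation dicts.

    category_ids: mapping from class name (e.g. 'borderless') to COCO id.
    """
    root = ET.fromstring(xml_str)
    anns = []
    for obj in root.iter('object'):
        box = obj.find('bndbox')
        x1 = float(box.find('xmin').text)
        y1 = float(box.find('ymin').text)
        x2 = float(box.find('xmax').text)
        y2 = float(box.find('ymax').text)
        w, h = x2 - x1, y2 - y1
        anns.append({
            'category_id': category_ids[obj.find('name').text],
            'bbox': [x1, y1, w, h],  # COCO uses [x, y, width, height]
            'area': w * h,
            # rectangle traced clockwise as a single polygon
            'segmentation': [[x1, y1, x2, y1, x2, y2, x1, y2]],
            'iscrowd': 0,
        })
    return anns
```

Wrapping these per-object dicts into a full COCO file (adding 'images', 'categories', and sequential 'id'/'image_id' fields) is straightforward bookkeeping; the key point is that the 'segmentation' field is what LoadAnnotations(with_mask=True) ultimately consumes.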
I am getting the same error. I don't see any conclusion on how this error was solved. Can someone post the fix here?