
Attribute 'thing_classes' does not exist in the metadata of dataset: metadata is empty.

See original GitHub issue

Hello,

I am testing your examples/domain_adaptation/object_detection/d_adapt/d_adapt.py method on my custom dataset (30 classes), which I converted to VOC format. Initially, I trained it with source_only.py successfully, but when trying to run d_adapt.py, I receive the following error.

-- Process 0 terminated with the following error:
Traceback (most recent call last):
  File "/opt/rh/rh-python38/root/usr/local/lib64/python3.8/site-packages/torch/multiprocessing/spawn.py", line 59, in _wrap
    fn(i, *args)
  File "/scratch/project_2005695/detectron2/detectron2/engine/launch.py", line 126, in _distributed_worker
    main_func(*args)
  File "/scratch/project_2005695/Transfer-Learning-Library/examples/domain_adaptation/object_detection/d_adapt/d_adapt.py", line 272, in main
    train(model, logger, cfg, args, args_cls, args_box)
  File "/scratch/project_2005695/Transfer-Learning-Library/examples/domain_adaptation/object_detection/d_adapt/d_adapt.py", line 131, in train
    classes = MetadataCatalog.get(args.targets[0]).thing_classes
  File "/scratch/project_2005695/detectron2/detectron2/data/catalog.py", line 131, in __getattr__
    raise AttributeError(
AttributeError: Attribute 'thing_classes' does not exist in the metadata of dataset '.._datasets_TLESS_real_dataset_trainval': metadata is empty.
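
For reference, whether the metadata actually got registered in a given process can be checked directly against detectron2's MetadataCatalog (a minimal sketch; the dataset name is copied from the error above):

from detectron2.data import MetadataCatalog

name = '.._datasets_TLESS_real_dataset_trainval'   # name taken from the traceback above
print(name in MetadataCatalog.list())              # False -> registration never ran in this process
meta = MetadataCatalog.get(name)                   # note: creates an empty Metadata entry if absent
print(getattr(meta, 'thing_classes', None))        # None -> no class names were attached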

I have registered the base class in tllib/vision/datasets/object_detection/__init__.py in the same way as the provided CityScapesBase class:

class TLessBase:
    # os, MetadataCatalog and register_pascal_voc are assumed to already be
    # imported/defined in tllib/vision/datasets/object_detection/__init__.py.
    # The 30 T-LESS object classes:
    class_names = ('Model 1', 'Model 2', 'Model 3', 'Model 4', 'Model 5',
                   'Model 6', 'Model 7', 'Model 8', 'Model 9', 'Model 10',
                   'Model 11', 'Model 12', 'Model 13', 'Model 14', 'Model 15',
                   'Model 16', 'Model 17', 'Model 18', 'Model 19', 'Model 20',
                   'Model 21', 'Model 22', 'Model 23', 'Model 24', 'Model 25',
                   'Model 26', 'Model 27', 'Model 28', 'Model 29', 'Model 30')

    def __init__(self, root, split="trainval", year=2007, ext='.jpg'):
        # The dataset name is derived from the root path and the split,
        # e.g. "../datasets/TLESS_real_dataset" + "trainval"
        # -> ".._datasets_TLESS_real_dataset_trainval".
        self.name = "{}_{}".format(root, split)
        self.name = self.name.replace(os.path.sep, "_")
        # Register the VOC-format dataset only once per process.
        if self.name not in MetadataCatalog.keys():
            register_pascal_voc(self.name, root, split, year, class_names=self.class_names,
                                ext=ext, bbox_zero_based=True)
            MetadataCatalog.get(self.name).evaluator_type = "pascal_voc"

The target and test dataset classes then inherit from this base, roughly as in the sketch below.
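
For illustration, the subclasses follow that pattern like this (the class names and default splits shown here are placeholders, not the exact code):

class TLessReal(TLessBase):
    # Target-domain split (real images); class name and defaults are placeholders.
    def __init__(self, root, split="trainval", year=2007, ext='.jpg'):
        super().__init__(root, split, year, ext)


class TLessTest(TLessBase):
    # Held-out test split; class name and defaults are placeholders.
    def __init__(self, root, split="test", year=2007, ext='.jpg'):
        super().__init__(root, split, year, ext)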

Could you please suggest what I am missing?

Issue Analytics

  • State: closed
  • Created: a year ago
  • Comments: 6

Top GitHub Comments

1 reaction
JunguangJiang commented, Mar 31, 2022

D-adapt had not been trained with multiple GPUs before, so we did not run into this problem. It seems that moving the following code from the if __name__ == "__main__": block into the main function will solve the problem.

    args.source = utils.build_dataset(args.source[::2], args.source[1::2])
    args.target = utils.build_dataset(args.target[::2], args.target[1::2])
    args.test = utils.build_dataset(args.test[::2], args.test[1::2])
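
The likely reason this helps: detectron2's launch() starts the workers with torch.multiprocessing.spawn, and each spawned worker is a fresh process that re-imports the script without executing the if __name__ == "__main__": block. Dataset registration done there therefore only happens in the parent, and the workers' MetadataCatalog stays empty. A simplified sketch of the suggested change (main()'s signature and parse_args() below are illustrative, not the exact d_adapt.py code):

# Simplified sketch of the suggested restructuring; main()'s signature and
# parse_args() are illustrative placeholders, not the exact d_adapt.py code.
import utils                              # the example's local helper module used above
from detectron2.engine import launch


def main(args, args_cls, args_box):
    # This now runs inside every spawned worker, so the datasets (and their
    # MetadataCatalog entries) are registered before thing_classes is read.
    args.source = utils.build_dataset(args.source[::2], args.source[1::2])
    args.target = utils.build_dataset(args.target[::2], args.target[1::2])
    args.test = utils.build_dataset(args.test[::2], args.test[1::2])
    ...


if __name__ == "__main__":
    args, args_cls, args_box = parse_args()   # placeholder for the real argument handling
    launch(main, args.num_gpus, args=(args, args_cls, args_box))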
0 reactions
darkhan-s commented, Apr 7, 2022

Sorry for the late response, I have been testing other methods.

> One possible reason is that you fail to load the model when visualizing the results. A suggestion is that you can use TensorBoard to watch the detection results. For instance,
>
>     tensorboard --logdir=logs

The problem is that I don't have a screen to monitor such results; I am training on a server.

> I guess there might be some issues with the processing of your dataset. You can print some intermediate results, such as instances in line 45 of the visualize.py file, to confirm what the original output of the model is.

This one works as expected:

Instances(num_instances=100, image_height=540, image_width=720, fields=[pred_boxes: Boxes(tensor([[5.0100e+02, 1.8443e+02, 5.3791e+02, 2.2163e+02], ...)), scores: tensor([0.2554, ...)), pred_classes: tensor([6, ....])])
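
For checking such raw outputs on a headless server, the fields of a detectron2 Instances object can also simply be printed as arrays (a minimal sketch, assuming instances is the variable mentioned above in visualize.py):

from detectron2.structures import Instances


def dump_instances(instances: Instances) -> None:
    # Print the raw per-detection outputs: boxes, scores and class indices.
    boxes = instances.pred_boxes.tensor.cpu().numpy()   # (N, 4) boxes in xyxy format
    scores = instances.scores.cpu().numpy()             # (N,) confidence scores
    classes = instances.pred_classes.cpu().numpy()      # (N,) predicted class indices
    for box, score, cls in zip(boxes, scores, classes):
        print(int(cls), round(float(score), 4), [round(float(v), 1) for v in box])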


