
Error when training with XML/Pascal-format parsing

See original GitHub issue

Hello. I’m also having trouble using the newly updated version of the project with the XML generator.

The error:

> ValueError                                Traceback (most recent call last)
> <ipython-input-9-480a1c690cb2> in <module>()
>       7                               epochs=epochs,
>       8                               callbacks=callbacks,
> ----> 9                               verbose=0)
> 
> /home/rodsnjr/miniconda3/lib/python3.6/site-packages/keras/legacy/interfaces.py in wrapper(*args, **kwargs)
>      89                 warnings.warn('Update your `' + object_name +
>      90                               '` call to the Keras 2 API: ' + signature, stacklevel=2)
> ---> 91             return func(*args, **kwargs)
>      92         wrapper._original_function = func
>      93         return wrapper
> 
> /home/rodsnjr/miniconda3/lib/python3.6/site-packages/keras/engine/training.py in fit_generator(self, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch)
>    2175                     outs = self.train_on_batch(x, y,
>    2176                                                sample_weight=sample_weight,
> -> 2177                                                class_weight=class_weight)
>    2178 
>    2179                     if not isinstance(outs, list):
> 
> /home/rodsnjr/miniconda3/lib/python3.6/site-packages/keras/engine/training.py in train_on_batch(self, x, y, sample_weight, class_weight)
>    1841             sample_weight=sample_weight,
>    1842             class_weight=class_weight,
> -> 1843             check_batch_axis=True)
>    1844         if self.uses_learning_phase and not isinstance(K.learning_phase(), int):
>    1845             ins = x + y + sample_weights + [1.]
> 
> /home/rodsnjr/miniconda3/lib/python3.6/site-packages/keras/engine/training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, check_batch_axis, batch_size)
>    1424                                     self._feed_input_shapes,
>    1425                                     check_batch_axis=False,
> -> 1426                                     exception_prefix='input')
>    1427         y = _standardize_input_data(y, self._feed_output_names,
>    1428                                     output_shapes,
> 
> /home/rodsnjr/miniconda3/lib/python3.6/site-packages/keras/engine/training.py in _standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
>     108                         ': expected ' + names[i] + ' to have ' +
>     109                         str(len(shape)) + ' dimensions, but got array '
> --> 110                         'with shape ' + str(data_shape))
>     111                 if not check_batch_axis:
>     112                     data_shape = data_shape[1:]
> 
> ValueError: Error when checking input: expected input_1 to have 4 dimensions, but got array with shape (16, 1)
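
The shape `(16, 1)` in that message is the main clue: a batch of 16 images has collapsed into a 2-D object array instead of the 4-D `(batch, height, width, channels)` tensor the model's input layer expects. This is what NumPy typically produces when the images in a batch have different sizes. A minimal sketch of the mismatch (all names here are illustrative, not taken from the notebook):

```python
import numpy as np

# Hypothetical batch of 16 images with *different* heights.
ragged = [np.zeros((300 + i, 400, 3), dtype=np.uint8) for i in range(16)]

# Because the shapes disagree, NumPy cannot build a 4-D tensor and
# stops at a (16, 1) array of objects - exactly the shape in the error.
batch = np.array([[img] for img in ragged], dtype=object)
print(batch.shape)  # (16, 1)

# Once every image has the same size, stacking yields the 4-D tensor
# Keras expects.
resized = [np.zeros((300, 400, 3), dtype=np.uint8) for _ in range(16)]
fixed = np.stack(resized)
print(fixed.shape)  # (16, 300, 400, 3)
```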

My generator function:


# 1: Instantiate two `DataGenerator` objects: one for training, one for validation.

train_dataset = DataGenerator()
val_dataset = DataGenerator()

# Dataset dirs
train_images_dir = '/media/rodsnjr/Files/Datasets/gvc_dataset_full/training/dataset/'
train_ids_dir = '/media/rodsnjr/Files/Datasets/gvc_dataset_full/training/ids'
train_labels_files_dir = '/media/rodsnjr/Files/Datasets/gvc_dataset_full/training/annotations/'

# Validation dataset dirs
validation_images_dir = '/media/rodsnjr/Files/Datasets/gvc_dataset_full/validation/dataset/'
validation_ids_dir = '/media/rodsnjr/Files/Datasets/gvc_dataset_full/validation/ids'
validation_labels_files_dir = '/media/rodsnjr/Files/Datasets/gvc_dataset_full/validation/annotations/'

train_annotations_dirs = sorted(list_path(train_labels_files_dir))

validation_annotations_dirs = sorted(list_path(validation_labels_files_dir))

train_images_set_filenames = sorted(list_dir(train_ids_dir, 'txt'))
train_images_file_paths = sorted(list_path(train_images_dir))

validation_images_set_filenames = sorted(list_dir(validation_ids_dir, 'txt'))
validation_images_file_paths = sorted(list_path(validation_images_dir))

right_classes = [
        'ascending_stair',
        'descending_stair',
        'door',
        'double_door',
        'elevator_door',
        'half_opened_door',
        'opened_door'
]

filenames, labels, image_ids = train_dataset.parse_xml(
    images_dirs=train_images_file_paths,
    image_set_filenames=train_images_set_filenames,
    annotations_dirs=train_annotations_dirs,
    classes=right_classes,
    ret=True
)

v_filenames, v_labels, v_image_ids = val_dataset.parse_xml(
    images_dirs=validation_images_file_paths,
    image_set_filenames=validation_images_set_filenames,
    annotations_dirs=validation_annotations_dirs,
    classes=right_classes,
    ret=True
)

train_dataset_size = train_dataset.get_dataset_size()
val_dataset_size   = val_dataset.get_dataset_size()

print("Number of images in the training dataset:\t{:>6}".format(train_dataset_size))
print("Number of images in the validation dataset:\t{:>6}".format(val_dataset_size))

I’m again using the ssd_7 training notebook.

The parser correctly finds all of the images:

set_filename_ascending_stairs.txt: 100% 735/735 [00:01<00:00, 561.38it/s]
set_filename_descending_stairs.txt: 100% 198/198 [00:00<00:00, 673.99it/s]
set_filename_doors.txt: 100% 1869/1869 [00:02<00:00, 792.73it/s]
set_filename_double_doors.txt: 100% 291/291 [00:00<00:00, 795.94it/s]
set_filename_elevator_doors.txt: 100% 390/390 [00:00<00:00, 750.01it/s]
set_filename_half_opened_doors.txt: 100% 227/227 [00:00<00:00, 786.35it/s]
set_filename_opened_doors.txt: 100% 465/465 [00:00<00:00, 783.15it/s]
set_filename_ascending_stairs.txt: 100% 211/211 [00:00<00:00, 512.35it/s]
set_filename_descending_stairs.txt: 100% 45/45 [00:00<00:00, 659.41it/s]
set_filename_doors.txt: 100% 445/445 [00:00<00:00, 784.51it/s]
set_filename_double_doors.txt: 100% 68/68 [00:00<00:00, 680.39it/s]
set_filename_elevator_doors.txt: 100% 108/108 [00:00<00:00, 762.51it/s]
set_filename_half_opened_doors.txt: 100% 53/53 [00:00<00:00, 728.35it/s]
set_filename_opened_doors.txt: 100% 114/114 [00:00<00:00, 697.97it/s]
Number of images in the training dataset:   4175
Number of images in the validation dataset: 1044
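
Since parsing succeeds, the failure most likely occurs when batches are assembled. A hypothetical sanity check that can be run on a single batch before calling `fit_generator` (the `check_batch` helper is illustrative, not part of the ssd_keras API):

```python
import numpy as np

def check_batch(batch_x):
    """Fail early with a clearer message if a batch is not a 4-D image tensor."""
    batch_x = np.asarray(batch_x)
    if batch_x.ndim != 4:
        raise ValueError(
            "Batch has shape {}; the SSD input layer expects "
            "(batch, height, width, channels). A ragged batch of "
            "mixed-size images is the usual cause.".format(batch_x.shape))
    return batch_x.shape

# Usage sketch; in the notebook the batch would come from something
# like: batch_x, batch_y = next(train_generator)
print(check_batch(np.zeros((16, 300, 480, 3))))  # (16, 300, 480, 3)
```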

Issue Analytics

  • State: closed
  • Created: 5 years ago
  • Comments: 11 (4 by maintainers)

Top GitHub Comments

1 reaction
ParmeetSingh commented, Aug 23, 2018

I had the same error, but it turned out I was passing images of different widths and heights. I resized all the images to the same width and height and it worked.
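
For reference, the fix described above can be sketched like this. The nearest-neighbour resize is written in plain NumPy only to keep the example self-contained; in practice something like `cv2.resize` or PIL's `Image.resize` would be used, and the target size is an arbitrary choice here:

```python
import numpy as np

def resize_nearest(img, out_h, out_w):
    # Minimal nearest-neighbour resize via index lookup.
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

# Mixed-size images, as in the failing case...
images = [np.zeros((300 + 10 * i, 400, 3), dtype=np.uint8) for i in range(16)]

# ...become stackable into a 4-D batch once resized to a common size.
batch = np.stack([resize_nearest(im, 300, 480) for im in images])
print(batch.shape)  # (16, 300, 480, 3)
```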

0 reactions
ashah03 commented, Jul 9, 2018

I’m having the same issue that @rodsnjr was having:

Epoch 1/20
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-18-dc621119a80e> in <module>()
     11                               validation_data=val_generator,
     12                               validation_steps=ceil(val_dataset_size/batch_size),
---> 13                               initial_epoch=initial_epoch)

/usr/local/lib/python3.6/site-packages/keras/legacy/interfaces.py in wrapper(*args, **kwargs)
     89                 warnings.warn('Update your `' + object_name +
     90                               '` call to the Keras 2 API: ' + signature, stacklevel=2)
---> 91             return func(*args, **kwargs)
     92         wrapper._original_function = func
     93         return wrapper

/usr/local/lib/python3.6/site-packages/keras/engine/training.py in fit_generator(self, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch)
   2175                     outs = self.train_on_batch(x, y,
   2176                                                sample_weight=sample_weight,
-> 2177                                                class_weight=class_weight)
   2178 
   2179                     if not isinstance(outs, list):

/usr/local/lib/python3.6/site-packages/keras/engine/training.py in train_on_batch(self, x, y, sample_weight, class_weight)
   1841             sample_weight=sample_weight,
   1842             class_weight=class_weight,
-> 1843             check_batch_axis=True)
   1844         if self.uses_learning_phase and not isinstance(K.learning_phase(), int):
   1845             ins = x + y + sample_weights + [1.]

/usr/local/lib/python3.6/site-packages/keras/engine/training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, check_batch_axis, batch_size)
   1424                                     self._feed_input_shapes,
   1425                                     check_batch_axis=False,
-> 1426                                     exception_prefix='input')
   1427         y = _standardize_input_data(y, self._feed_output_names,
   1428                                     output_shapes,

/usr/local/lib/python3.6/site-packages/keras/engine/training.py in _standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
    108                         ': expected ' + names[i] + ' to have ' +
    109                         str(len(shape)) + ' dimensions, but got array '
--> 110                         'with shape ' + str(data_shape))
    111                 if not check_batch_axis:
    112                     data_shape = data_shape[1:]

ValueError: Error when checking input: expected input_1 to have 4 dimensions, but got array with shape (16, 1)

I’m using the XML parsing from your data generator object.
