Error when training with XML/Pascal VOC format parsing
Hello. I'm also having trouble using the newly updated version of the project with the XML generator.
The error:
> ValueError Traceback (most recent call last)
> <ipython-input-9-480a1c690cb2> in <module>()
> 7 epochs=epochs,
> 8 callbacks=callbacks,
> ----> 9 verbose=0)
>
> /home/rodsnjr/miniconda3/lib/python3.6/site-packages/keras/legacy/interfaces.py in wrapper(*args, **kwargs)
> 89 warnings.warn('Update your `' + object_name +
> 90 '` call to the Keras 2 API: ' + signature, stacklevel=2)
> ---> 91 return func(*args, **kwargs)
> 92 wrapper._original_function = func
> 93 return wrapper
>
> /home/rodsnjr/miniconda3/lib/python3.6/site-packages/keras/engine/training.py in fit_generator(self, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch)
> 2175 outs = self.train_on_batch(x, y,
> 2176 sample_weight=sample_weight,
> -> 2177 class_weight=class_weight)
> 2178
> 2179 if not isinstance(outs, list):
>
> /home/rodsnjr/miniconda3/lib/python3.6/site-packages/keras/engine/training.py in train_on_batch(self, x, y, sample_weight, class_weight)
> 1841 sample_weight=sample_weight,
> 1842 class_weight=class_weight,
> -> 1843 check_batch_axis=True)
> 1844 if self.uses_learning_phase and not isinstance(K.learning_phase(), int):
> 1845 ins = x + y + sample_weights + [1.]
>
> /home/rodsnjr/miniconda3/lib/python3.6/site-packages/keras/engine/training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, check_batch_axis, batch_size)
> 1424 self._feed_input_shapes,
> 1425 check_batch_axis=False,
> -> 1426 exception_prefix='input')
> 1427 y = _standardize_input_data(y, self._feed_output_names,
> 1428 output_shapes,
>
> /home/rodsnjr/miniconda3/lib/python3.6/site-packages/keras/engine/training.py in _standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
> 108 ': expected ' + names[i] + ' to have ' +
> 109 str(len(shape)) + ' dimensions, but got array '
> --> 110 'with shape ' + str(data_shape))
> 111 if not check_batch_axis:
> 112 data_shape = data_shape[1:]
>
> ValueError: Error when checking input: expected input_1 to have 4 dimensions, but got array with shape (16, 1)
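The shape `(16, 1)` in the last line is the tell: Keras received a 2-D object array of 16 entries instead of the 4-D image batch the model expects. One common cause is images of differing sizes, which NumPy cannot stack into a single dense array. A quick way to confirm what the generator actually yields (a sketch; `train_generator` is assumed to be the result of `train_dataset.generate(...)` as in the notebook):
# Sketch: pull a single batch and inspect it. `train_generator` is assumed
# to come from train_dataset.generate(...) as set up in the ssd7 notebook.
batch_images, batch_labels = next(train_generator)
# SSD7 expects a dense 4-D batch: (batch_size, img_height, img_width, 3).
# A shape like (16, 1) with dtype=object means the 16 images could not be
# stacked, usually because they have differing heights/widths.
print(batch_images.shape, batch_images.dtype)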
My generator function:
# 1: Instantiate two `DataGenerator` objects: one for training, one for validation.
train_dataset = DataGenerator()
val_dataset = DataGenerator()
# Dataset dirs
train_images_dir = '/media/rodsnjr/Files/Datasets/gvc_dataset_full/training/dataset/'
train_ids_dir = '/media/rodsnjr/Files/Datasets/gvc_dataset_full/training/ids'
train_labels_files_dir = '/media/rodsnjr/Files/Datasets/gvc_dataset_full/training/annotations/'
# Validation dataset dirs
validation_images_dir = '/media/rodsnjr/Files/Datasets/gvc_dataset_full/validation/dataset/'
validation_ids_dir = '/media/rodsnjr/Files/Datasets/gvc_dataset_full/validation/ids'
validation_labels_files_dir = '/media/rodsnjr/Files/Datasets/gvc_dataset_full/validation/annotations/'
train_annotations_dirs = sorted(list_path(train_labels_files_dir))
validation_annotations_dirs = sorted(list_path(validation_labels_files_dir))
train_images_set_filenames = sorted(list_dir(train_ids_dir, 'txt'))
train_images_file_paths = sorted(list_path(train_images_dir))
validation_images_set_filenames = sorted(list_dir(validation_ids_dir, 'txt'))
validation_images_file_paths = sorted(list_path(validation_images_dir))
right_classes = [
'ascending_stair',
'descending_stair',
'door',
'double_door',
'elevator_door',
'half_opened_door',
'opened_door'
]
filenames, labels, image_ids = train_dataset.parse_xml(
images_dirs=train_images_file_paths,
image_set_filenames=train_images_set_filenames,
annotations_dirs=train_annotations_dirs,
classes=right_classes,
ret=True
)
v_filenames, v_labels, v_image_ids = val_dataset.parse_xml(
images_dirs=validation_images_file_paths,
image_set_filenames=validation_images_set_filenames,
annotations_dirs=validation_annotations_dirs,
classes=right_classes,
ret=True
)
train_dataset_size = train_dataset.get_dataset_size()
val_dataset_size = val_dataset.get_dataset_size()
print("Number of images in the training dataset:\t{:>6}".format(train_dataset_size))
print("Number of images in the validation dataset:\t{:>6}".format(val_dataset_size))
Again, I'm using the ssd_7 training notebook.
The parser correctly finds all the images:
set_filename_ascending_stairs.txt: 100%|██████████| 735/735 [00:01<00:00, 561.38it/s]
set_filename_descending_stairs.txt: 100%|██████████| 198/198 [00:00<00:00, 673.99it/s]
set_filename_doors.txt: 100%|██████████| 1869/1869 [00:02<00:00, 792.73it/s]
set_filename_double_doors.txt: 100%|██████████| 291/291 [00:00<00:00, 795.94it/s]
set_filename_elevator_doors.txt: 100%|██████████| 390/390 [00:00<00:00, 750.01it/s]
set_filename_half_opened_doors.txt: 100%|██████████| 227/227 [00:00<00:00, 786.35it/s]
set_filename_opened_doors.txt: 100%|██████████| 465/465 [00:00<00:00, 783.15it/s]
set_filename_ascending_stairs.txt: 100%|██████████| 211/211 [00:00<00:00, 512.35it/s]
set_filename_descending_stairs.txt: 100%|██████████| 45/45 [00:00<00:00, 659.41it/s]
set_filename_doors.txt: 100%|██████████| 445/445 [00:00<00:00, 784.51it/s]
set_filename_double_doors.txt: 100%|██████████| 68/68 [00:00<00:00, 680.39it/s]
set_filename_elevator_doors.txt: 100%|██████████| 108/108 [00:00<00:00, 762.51it/s]
set_filename_half_opened_doors.txt: 100%|██████████| 53/53 [00:00<00:00, 728.35it/s]
set_filename_opened_doors.txt: 100%|██████████| 114/114 [00:00<00:00, 697.97it/s]
Number of images in the training dataset:    4175
Number of images in the validation dataset:  1044
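A quick way to double-check what parse_xml handed back (a sketch using the variables above; with ret=True it returns the filenames, labels, and image IDs):
# Sketch: spot-check the parser output. Each entry of `labels` should be
# an array of ground-truth boxes, by default in the format
# (class_id, xmin, ymin, xmax, ymax).
print(len(filenames), len(labels), len(image_ids))
print(filenames[0])  # path to the first training image
print(labels[0])     # its ground-truth boxes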
Top GitHub Comments
I did have the same error, but it turned out I was passing images of different widths and heights. I resized all the images to the same width and height and it worked.
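In ssd_keras terms, that fix amounts to putting a Resize transformation into the generate() call, so every image (and its box coordinates) is scaled to the model's input size before batching. A sketch, assuming the current repo layout and the variable names from the ssd7 notebook (the module path, img_height/img_width, and ssd_input_encoder may differ in your setup):
from data_generator.object_detection_2d_geometric_ops import Resize

# Sketch: resize each sample to the SSD7 input size so every batch stacks
# into a dense (batch_size, img_height, img_width, 3) array.
resize = Resize(height=img_height, width=img_width)

# ssd_input_encoder is the SSDInputEncoder built earlier in the notebook.
train_generator = train_dataset.generate(batch_size=16,
                                         shuffle=True,
                                         transformations=[resize],
                                         label_encoder=ssd_input_encoder,
                                         returns={'processed_images', 'encoded_labels'},
                                         keep_images_without_gt=False)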
I'm having the same issue that @rodsnjr was having:
I'm using XML parsing from your data generator object.