Memory Error
Hi guys, I'm getting this error while training. It happens on the first step of the first epoch and I don't know why; I've been using the same dataset and annotations before and this has never happened.
Traceback:
File "keras_retinanet/bin/train.py", line 278, in <module>
main()
File "keras_retinanet/bin/train.py", line 274, in main
callbacks=callbacks,
File "/home/martin/.local/lib/python3.5/site-packages/keras/legacy/interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "/home/martin/.local/lib/python3.5/site-packages/keras/engine/training.py", line 2145, in fit_generator
generator_output = next(output_generator)
File "/home/martin/.local/lib/python3.5/site-packages/keras/utils/data_utils.py", line 770, in get
six.reraise(value.__class__, value, value.__traceback__)
File "/home/martin/.local/lib/python3.5/site-packages/six.py", line 693, in reraise
raise value
File "/home/martin/.local/lib/python3.5/site-packages/keras/utils/data_utils.py", line 635, in _data_generator_task
generator_output = next(self._generator)
File "keras_retinanet/bin/../../keras_retinanet/preprocessing/generator.py", line 240, in __next__
return self.next()
File "keras_retinanet/bin/../../keras_retinanet/preprocessing/generator.py", line 251, in next
return self.compute_input_output(group)
File "keras_retinanet/bin/../../keras_retinanet/preprocessing/generator.py", line 235, in compute_input_output
targets = self.compute_targets(image_group, annotations_group)
File "keras_retinanet/bin/../../keras_retinanet/preprocessing/generator.py", line 203, in compute_targets
labels_group[index], annotations, anchors = self.anchor_targets(max_shape, annotations, self.num_classes(), mask_shape=image.shape)
File "keras_retinanet/bin/../../keras_retinanet/preprocessing/generator.py", line 192, in anchor_targets
return anchor_targets_bbox(image_shape, annotations, num_classes, mask_shape, negative_overlap, positive_overlap, **kwargs)
File "keras_retinanet/bin/../../keras_retinanet/utils/anchors.py", line 36, in anchor_targets_bbox
overlaps = compute_overlap(anchors, annotations[:, :4])
File "keras_retinanet/bin/../../keras_retinanet/utils/anchors.py", line 209, in compute_overlap
iw = np.minimum(np.expand_dims(a[:, 2], axis=1), b[:, 2]) - np.maximum(np.expand_dims(a[:, 0], 1), b[:, 0]) + 1
MemoryError
class file:
Nucleo,0
Type_2,1
Type_3,2
Type_4,3
The dataset's annotations file has 411405 lines, but as I said, it never failed before.
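The traceback points at `compute_overlap` in `utils/anchors.py`, which broadcasts the anchors against the annotation boxes and materializes an (num_anchors x num_annotations) array, so memory grows with the product of the two counts. A rough back-of-the-envelope sketch follows; the counts below are assumptions for illustration, not figures from this issue:

```python
# Illustrative assumptions, NOT numbers from the issue: a 1388x1040 input
# typically yields on the order of 10^5 anchors across the pyramid levels,
# and a crowded image might carry thousands of annotated boxes.
num_anchors = 150_000
num_boxes = 5_000

# compute_overlap broadcasts an (A, 1) column against a (B,) row, so iw,
# ih, the intersection and the union are each (A x B) float64 temporaries.
bytes_per_temporary = num_anchors * num_boxes * 8
print(f"one (A x B) temporary: {bytes_per_temporary / 1e9:.1f} GB")
# Several such temporaries are alive at once, so a single crowded image
# can exhaust host RAM even when GPU memory and batch size look fine.
```

Under these assumed counts, each temporary is about 6 GB, which is why the failure can hit on the very first step: it only takes one image with enough boxes in the first batch.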
Top GitHub Comments
I've tried downloading the new version but it's the same. I'm using the GPU instead of the CPU. It does not even get past the third step, so it seems impossible to me that it has run out of memory. I ran the same training yesterday and it worked perfectly.
By the third step it uses 6587MiB of GPU memory (the card has 8GB).
In the previous training I used the same dataset, but the annotations file was smaller. Could that be the problem? The annotations file in the training that throws the error is larger.
Annotations file in the working training: 15980 lines
Annotations file in the training with the memory error: 411405 lines
The images are 1388x1040, but that does not seem to be the problem, since both trainings use the same images.
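One way to test that theory is to count boxes per image, since the overlap matrix scales with the box count of the single worst image in a batch, not with the total file length. A minimal sketch, assuming the standard keras-retinanet CSV layout (`path,x1,y1,x2,y2,class`); the filename is a placeholder:

```python
import csv
from collections import Counter

# Count annotation boxes per image; "annotations.csv" is a placeholder name.
counts = Counter()
with open("annotations.csv") as f:
    for row in csv.reader(f):
        if row:
            counts[row[0]] += 1

image, n = counts.most_common(1)[0]
print(f"{len(counts)} images; worst case {n} boxes in {image}")
# compute_overlap allocates anchors x boxes per image, so one image with
# an extreme box count is enough to fail on the very first step.
```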
I'm still stuck on this issue: my image size is 800x800 and the batch size is 1, yet memory still fills up.
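For what it's worth, one workaround that bounds the memory of the overlap computation is to walk the anchors in chunks, so no temporary ever exceeds chunk x num_boxes elements. This is a plain-NumPy sketch of that idea, not the project's own code (later versions of keras-retinanet implement `compute_overlap` as a Cython extension instead):

```python
import numpy as np

def compute_overlap_chunked(anchors, boxes, chunk=10_000):
    """IoU of (A, 4) anchors against (B, 4) boxes, processing the anchors
    in chunks so no temporary exceeds chunk x B elements."""
    out = np.empty((anchors.shape[0], boxes.shape[0]), dtype=np.float32)
    # Box areas use the same +1 pixel convention as the original code.
    box_area = (boxes[:, 2] - boxes[:, 0] + 1) * (boxes[:, 3] - boxes[:, 1] + 1)
    for start in range(0, anchors.shape[0], chunk):
        a = anchors[start:start + chunk]
        # (chunk, 1) vs (B,) broadcasts to (chunk, B) instead of (A, B).
        iw = np.minimum(a[:, 2:3], boxes[:, 2]) - np.maximum(a[:, 0:1], boxes[:, 0]) + 1
        ih = np.minimum(a[:, 3:4], boxes[:, 3]) - np.maximum(a[:, 1:2], boxes[:, 1]) + 1
        inter = np.clip(iw, 0, None) * np.clip(ih, 0, None)
        a_area = (a[:, 2] - a[:, 0] + 1) * (a[:, 3] - a[:, 1] + 1)
        union = a_area[:, None] + box_area[None, :] - inter
        out[start:start + chunk] = inter / union
    return out
```

With chunk=10_000 and 5000 boxes, the largest temporary is roughly 0.4 GB of float64 instead of the ~6 GB estimated above for the single-shot broadcast, at the cost of a short Python loop.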