ResourceExhaustedError after segmentation models update!
Hi! I was working with FPN and the 'resnext101' backbone on Google Colab. I had trained the model, done lots of experiments, and the results were very good. Today, after updating segmentation_models (I have to reinstall it every time I use Google Colab), I got the error shown below. By the way, I tried Unet with the 'vgg16' backbone and everything went well. I wonder why FPN with the resnext101 backbone no longer fits in GPU memory when it did two days ago.
Thank you very much, @qubvel.
Edit 1: FPN results by backbone:
- vgg16: OK
- vgg19: OK
- resnet34: OK
- resnet50: NOT OK (same error as shown below)
- resnet101: NOT OK (same error as shown below)
- resnext50: NOT OK (same error as shown below)
Edit 2: The related StackOverflow question.
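For reference, here is a minimal sketch of the setup that triggers the error. The `FPN(backbone_name=...)` call follows segmentation_models' documented API, but the input shape, class count, loss, and optimizer are placeholders (not taken from the notebook); `NUM_BATCH = 32` is only inferred from the leading dimension of the OOM tensor in the traceback below.

```python
# Minimal reproduction sketch -- names and values are illustrative,
# not the exact notebook code from the issue.
import segmentation_models as sm

NUM_BATCH = 32               # inferred from the [32, ...] OOM tensor shape below
INPUT_SHAPE = (448, 448, 3)  # placeholder; the real input size is not shown

# Backbones reported OK:     vgg16, vgg19, resnet34
# Backbones reported NOT OK: resnet50, resnet101, resnext50, resnext101
model = sm.FPN(
    backbone_name='resnext101',
    input_shape=INPUT_SHAPE,
    classes=1,                  # placeholder
    activation='sigmoid',       # placeholder
    encoder_weights='imagenet',
)
model.compile(optimizer='rmsprop', loss='binary_crossentropy')
model.summary()  # compare parameter counts across backbones to gauge model size
```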
Epoch 1/100
---------------------------------------------------------------------------
ResourceExhaustedError Traceback (most recent call last)
<ipython-input-22-1b2892f8cab2> in <module>()
----> 1 get_ipython().run_cell_magic('time', '', 'history = model.fit_generator(\n generator = zipped_train_generator,\n validation_data=(X_validation, y_validation),\n steps_per_epoch=len(X_train) // NUM_BATCH,\n callbacks= callbacks_list,\n verbose = 1,\n epochs = NUM_EPOCH)')
9 frames
</usr/local/lib/python3.6/dist-packages/decorator.py:decorator-gen-60> in time(self, line, cell, local_ns)
<timed exec> in <module>()
/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py in __call__(self, *args, **kwargs)
1456 ret = tf_session.TF_SessionRunCallable(self._session._session,
1457 self._handle, args,
-> 1458 run_metadata_ptr)
1459 if run_metadata:
1460 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)
ResourceExhaustedError: 2 root error(s) found.
(0) Resource exhausted: OOM when allocating tensor with shape[32,128,112,112] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node training/RMSprop/gradients/zeros_21}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
[[loss/mul/_11081]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
(1) Resource exhausted: OOM when allocating tensor with shape[32,128,112,112] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node training/RMSprop/gradients/zeros_21}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
0 successful operations.
0 derived errors ignored.
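Two things can help narrow this down: following the hint in the traceback (dump the live allocations when the OOM happens), and noting that the failing tensor alone (32 × 128 × 112 × 112, float32) is about 196 MiB, so lowering NUM_BATCH is the quickest workaround. Below is a minimal sketch of the first option, assuming a Keras 2.2+ setup on the TensorFlow 1.x backend, where extra compile() kwargs are forwarded to Session.run:

```python
import tensorflow as tf  # TF 1.x

# Ask TF to report the list of live allocations when an OOM occurs,
# as the hint in the error message suggests.
run_opts = tf.RunOptions(report_tensor_allocations_upon_oom=True)
run_meta = tf.RunMetadata()

# Assumption: Keras 2.2+ with the TF 1.x backend, where 'options' and
# 'run_metadata' passed to compile() reach the backend's Session.run call.
model.compile(
    optimizer='rmsprop',
    loss='binary_crossentropy',  # placeholder loss
    options=run_opts,
    run_metadata=run_meta,
)
```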
Yeah, that's strange…
pip install -U segmentation-models==0.2.1
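If pinning the version is the way to go, the Colab flow would roughly be the cell below; the runtime-restart step is a Colab detail assumed here, not something stated in the comment.

```python
# Colab cell: pin segmentation_models to the pre-update release.
!pip install -U segmentation-models==0.2.1

# Then restart the runtime (Runtime -> Restart runtime) so the pinned
# version is picked up, and verify in a fresh cell:
#   import segmentation_models
#   print(segmentation_models.__version__)   # expected: 0.2.1
```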