TensorFlow termination; what(): std::bad_alloc
See original GitHub issue

Hi, I don't have an Nvidia GPU, so I've been trying the TensorFlow backend with --mrf-w=0 to speed things up (Theano works, but it's really slow). Instead I get the error below, tested with many different images that all worked with the Theano backend. Any ideas how to fix it?
xxx:~/Code/python/neural-image-analogies$ make_image_analogy.py images/1.jpg images/1.jpg images/2.jpg out/arch --mrf-w=0
Using TensorFlow backend.
Using PatchMatch model
Scale factor 0.25 "A" shape (1, 3, 603, 653) "B" shape (1, 3, 300, 225)
Building loss...
Precomputing static features...
Building and combining losses...
Start of iteration 0 x 0
Current loss value: 62929842176.0
Image saved as out/arch_at_iteration_0_0.png
Iteration completed in 1359.27 seconds
Start of iteration 0 x 1
Current loss value: 59368124416.0
Image saved as out/arch_at_iteration_0_1.png
Iteration completed in 1354.37 seconds
Start of iteration 0 x 2
Current loss value: 58041049088.0
Image saved as out/arch_at_iteration_0_2.png
Iteration completed in 1315.46 seconds
Start of iteration 0 x 3
Current loss value: 57320632320.0
Image saved as out/arch_at_iteration_0_3.png
Iteration completed in 1324.93 seconds
Start of iteration 0 x 4
Current loss value: 56854339584.0
Image saved as out/arch_at_iteration_0_4.png
Iteration completed in 990.21 seconds
/home/xxx/Code/python/neural-image-analogies/venv/local/lib/python2.7/site-packages/scipy/ndimage/interpolation.py:573: UserWarning: From scipy 0.13.0, the output shape of zoom() is calculated with round() instead of int() - for these inputs the size of the returned array has changed.
"the returned array has changed.", UserWarning)
Scale factor 0.625 "A" shape (1, 3, 1508, 1633) "B" shape (1, 3, 751, 563)
Building loss...
Precomputing static features...
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Aborted (core dumped)
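The crash happens right after the jump from scale 0.25 to scale 0.625. A rough back-of-the-envelope estimate (a hypothetical sketch, not from the original thread; it assumes float32 activations and only counts the raw input tensors, while the real footprint is dominated by the VGG feature maps built on top of them) shows why the second scale is so much heavier:

```python
def tensor_bytes(shape, itemsize=4):
    """Bytes needed for one dense float32 tensor of the given shape."""
    n = 1
    for d in shape:
        n *= d
    return n * itemsize

# Shapes taken from the log above.
small = tensor_bytes((1, 3, 603, 653))    # "A" at scale 0.25
large = tensor_bytes((1, 3, 1508, 1633))  # "A" at scale 0.625

# The per-tensor footprint grows ~6.25x ((0.625 / 0.25) ** 2) between
# scales, and every intermediate feature map grows by the same factor.
print(small, large, large / small)
```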
Issue Analytics
- State:
- Created: 7 years ago
- Reactions: 1
- Comments: 6
Top GitHub Comments
I have the same problem, and my RAM is 5.55 GB:
timchan@ubuntu:~/workspaces/dl/tf$ python3 full_code.py
Extracting MNIST_data/train-images-idx3-ubyte.gz
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Aborted (core dumped)
That looks like it's running out of memory when trying to initialize the larger image. Try scaling down your source images by 50% and see if it still fails. How much memory does your system have?