Stuck on an issue?

Lightrun Answers was designed to reduce the constant googling that comes with debugging third-party libraries. It collects links to all the places you might be looking at while hunting down a tough bug.

And, if you’re still stuck at the end, we’re happy to hop on a call to see how we can help out.

An error ocurred while starting the kernel

See original GitHub issue

Description of your problem

My code was running fine in the beginning, but after maybe 20 seconds it shows the following errors:

2017󈚰󈚲 12:55:35.692732: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017󈚰󈚲 12:55:35.692779: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017󈚰󈚲 12:55:35.692785: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017󈚰󈚲 12:55:35.692789: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017󈚰󈚲 12:55:35.692793: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
2017󈚰󈚲 12:55:35.777783: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:893] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2017󈚰󈚲 12:55:35.778333: I tensorflow/core/common_runtime/gpu/gpu_device.cc:955] Found device 0 with properties:
name: GeForce GTX 950M major: 5 minor: 0 memoryClockRate (GHz) 1.124
pciBusID 0000:0a:00.0
Total memory: 3.95GiB
Free memory: 3.67GiB
2017󈚰󈚲 12:55:35.778357: I tensorflow/core/common_runtime/gpu/gpu_device.cc:976] DMA: 0
2017󈚰󈚲 12:55:35.778371: I tensorflow/core/common_runtime/gpu/gpu_device.cc:986] 0: Y
2017󈚰󈚲 12:55:35.778382: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 950M, pci bus id: 0000:0a:00.0)
2017󈚰󈚲 12:55:40.437879: I tensorflow/stream_executor/dso_loader.cc:129] Couldn't open CUDA library libcupti.so.8.0. LD_LIBRARY_PATH:
2017󈚰󈚲 12:55:40.437937: F ./tensorflow/stream_executor/lib/statusor.h:212] Non-OK-status: status_ status: Failed precondition: could not dlopen DSO: libcupti.so.8.0; dlerror: libcupti.so.8.0: cannot open shared object file: No such file or directory
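
The last two lines are the actual failure: TensorFlow tries to dlopen libcupti.so.8.0 (the CUDA profiling library), cannot find it because LD_LIBRARY_PATH is empty, and then hits a fatal check, which is presumably what takes the Spyder kernel down. A quick way to confirm the missing library from Python (a minimal sketch using only the standard library; the library name and environment variable are taken straight from the log above) is:

import ctypes
import os

# Show the extra search path the dynamic loader uses
# (the log above shows it is empty).
print("LD_LIBRARY_PATH =", os.environ.get("LD_LIBRARY_PATH", ""))

# Try to load the CUPTI library that TensorFlow complains about.
try:
    ctypes.CDLL("libcupti.so.8.0")
    print("libcupti.so.8.0 was found and loaded")
except OSError as exc:
    print("libcupti.so.8.0 could not be loaded:", exc)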

What steps will reproduce the problem?

  1. Open Spyder from Anaconda Navigator
  2. Run my code; it runs fine in the beginning, but after maybe 20 seconds it shows the errors above

What is the expected output? What do you see instead?

Please provide any additional information below

Versions and main components

  • Spyder Version: 3.2.4
  • Python Version: 3.5
  • Qt Version: 5.6.2
  • PyQt Version: 5.6.0
  • Operating system: Ubuntu 16.04

Dependencies

Please go to the menu entry Help > Optional Dependencies (or Help > Dependencies), press the button Copy to clipboard and paste the contents below:

IPython >=4.0 : 6.2.1 (OK)
cython >=0.21 : None (NOK)
jedi >=0.9.0 : 0.11.0 (OK)
nbconvert >=4.0 : 5.3.1 (OK)
numpy >=1.7 : 1.13.3 (OK)
pandas >=0.13.1 : 0.21.0 (OK)
psutil >=0.3 : 5.4.1 (OK)
pycodestyle >=2.3 : 2.3.1 (OK)
pyflakes >=0.6.0 : 1.6.0 (OK)
pygments >=2.0 : 2.2.0 (OK)
pylint >=0.25 : 1.7.4 (OK)
qtconsole >=4.2.0 : 4.3.1 (OK)
rope >=0.9.4 : 0.10.7 (OK)
sphinx >=0.6.6 : 1.6.3 (OK)
sympy >=0.7.3 : None (NOK)

Issue Analytics

  • State: closed
  • Created: 6 years ago
  • Comments: 7 (3 by maintainers)

Top GitHub Comments

1 reaction
estathop commented on Oct 16, 2018

The case is that I loaded a Keras model which occupied my whole GPU memory, then used

from numba import cuda  # assumption: these calls are numba's CUDA bindings

cuda.select_device(0)
cuda.close()

in order to try to release the GPU memory and give it to a PyTorch model. I overcame this problem in a naive way.
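
To make the sequencing concrete, here is a hypothetical sketch of the hand-off being described (it assumes cuda above is numba.cuda and that PyTorch is installed; as the next comment notes, whether the memory actually comes back is up to Keras/numba/PyTorch, not Spyder, and closing the CUDA context this way can leave the rest of the process unable to reuse the GPU):

from numba import cuda
import torch

# 1. Finish all Keras/TensorFlow work and drop every reference to the model first.

# 2. Tear down the CUDA context numba attaches to, in the hope that its memory is returned.
cuda.select_device(0)
cuda.close()

# 3. PyTorch initializes its own CUDA context lazily on first use,
#    so this allocation only happens after the close() above.
model = torch.nn.Linear(128, 64).cuda()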

0 reactions
CAM-Gerlach commented on Oct 16, 2018

Okay, thanks for clarifying further and glad you found a solution!

Just to be clear, Spyder has no control over your GPU memory or (to our knowledge) how Keras and Pytorch use it, as you seem to have figured out.

Read more comments on GitHub >

Top Results From Across the Web

"An error ocurred while starting the kernel" about wrong ...
Problem Description To create a new virtual environment, use the following command: conda create -n python3.8.12 python=3.8.12 spyder=5.2.1 ...
Read more >
Why am I getting "An error ocurred while starting the kernel" in ...
However, while running my Python code from the Spyder console, I am getting the following error: An error occurred while starting the kernel....
Read more >
Common Illnesses — Spyder 5 documentation
If you receive the message An error occurred while starting the kernel in the IPython Console, Spyder was unable to launch a new...
Read more >
I need help Spyder - An error ocurred while starting the kernel
It seems like your keyboard language is replacing dash signs with something the terminal does not understand... Try to copy and paste instead...
Read more >
IPython console: an error ocurred while starting the kernel Either
IPython console: an error ocurred while starting the kernel Either: Your IPython frontend and kernel versions are incompatible · teng li · Carlos...
Read more >
