Remote Jupyter connection with long running command: Extension host terminated due to memory
Environment data
- VS Code version: 1.35.1
- Extension version (available under the Extensions sidebar): 2019.5.18875
- OS and version: macOS Mojave 10.14.5
- Python version (& distribution if applicable, e.g. Anaconda): 3.6.8 (Anaconda)
- Type of virtual environment used (N/A | venv | virtualenv | conda | …): conda
- Relevant/affected Python packages and their versions: JupyterLab 0.35.6
- Jedi or Language Server? (i.e. what is "python.jediEnabled" set to; more info microsoft/vscode-python#3977): True
Expected behaviour
When connecting to a remote Jupyter server, I expect the VS Code extension host not to terminate unexpectedly, even when I run long-running commands remotely (> 50 minutes), for example when training machine learning models with TensorFlow.
Actual behaviour
When connecting to a remote Jupyter server and executing long-running (> 50 minutes) commands like model.fit(), after approx. 50 minutes I receive a popup 'Extension host terminated unexpectedly' with the option to restart the extension host. The connection to the Jupyter session is lost and I receive no further output from the long-running command, even after restarting the extension host. I cannot reconnect to the kernel, which appears to still be running (as I can see on the server). I also cannot interrupt or restart this kernel, as the corresponding buttons do not respond. To continue working I have to kill the running kernel on the Jupyter server. After doing this, when I try to start another Jupyter session from VS Code, I receive the message 'Cannot execute code, session has been disposed.'
Steps to reproduce:
- Set up a Python file with IPython cells.
- Configure an external Jupyter server (specify the Jupyter server URI).
- Execute a command via IPython that runs longer than 50 minutes.
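For reference, a minimal repro file following the steps above (this is a sketch, not the reporter's actual script: the cell markers use the `# %%` convention the extension recognizes, and the real 50-minute workload, e.g. `model.fit()`, is simulated here with a hypothetical `long_running_task` helper using a short sleep):

```python
# %%
import time

def long_running_task(duration_s: float) -> str:
    """Stand-in for a long-running computation such as model training.

    In the actual report this ran for > 50 minutes against a remote
    Jupyter kernel; a short sleep suffices to show the cell structure.
    """
    time.sleep(duration_s)
    return "done"

# %%
# Run this cell against the configured remote Jupyter server.
# Increase the duration past ~50 minutes (3000+ seconds) to reproduce
# the extension host termination described above.
print(long_running_task(1.0))
```

With the remote server URI configured in the extension, executing the second cell and waiting past the ~50-minute mark should trigger the 'Extension host terminated unexpectedly' popup.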
Logs
No output in the output panel
Issue Analytics
- State:
- Created: 4 years ago
- Comments: 9 (4 by maintainers)
Top GitHub Comments
Thanks, that helps a lot.
We have this exact same issue; it seems not to be solved as of today…