
Intellisense speedup proposals

See original GitHub issue

Environment data

  • VS Code version: 1.28.2

  • Extension version (available under the Extensions sidebar): 2018.9.1 / issues observed since May/June 2018

  • OS and version: Windows 10, latest update

  • Python version (& distribution if applicable, e.g. Anaconda): plain Python 3.7.0 64-bit

  • Type of virtual environment used (N/A | venv | virtualenv | conda | …): venv

  • Relevant/affected Python packages and their versions:

    • atomicwrites==1.2.1
    • attrs==18.2.0
    • colorama==0.4.0
    • cycler==0.10.0
    • flake8==3.5.0
    • imbalanced-learn==0.4.1
    • kiwisolver==1.0.1
    • matplotlib==3.0.0
    • mccabe==0.6.1
    • more-itertools==4.3.0
    • numpy==1.15.2
    • pandas==0.23.4
    • pluggy==0.7.1
    • progressbar2==3.38.0
    • py==1.6.0
    • pycodestyle==2.3.1
    • pyflakes==1.6.0
    • pyparsing==2.2.2
    • pytest==3.8.2
    • python-dateutil==2.7.3
    • python-utils==2.3.0
    • pytz==2018.5
    • scikit-learn==0.20.0
    • scipy==1.1.0
    • seaborn==0.9.0
    • six==1.11.0

Actual behavior

IntelliSense performance has degraded. It takes around 30 seconds to look up a particular docstring; the IDE just displays a ‘Loading…’ frame.

Expected behavior

IntelliSense completion to be super-duper fast.

Steps to reproduce:

Use many heavy dependency import statements inside a module, e.g.:

from sklearn.impute import SimpleImputer
from imblearn import under_sampling, over_sampling, combine
from sklearn import preprocessing
from sklearn.decomposition import PCA
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report

Logs

Outside interpreter runtime.

Message

### Dear developers, contributors, users

I fell in love with VS Code, but the recent IntelliSense performance drop is putting my feelings to the test. I don’t know much about the IntelliSense implementation, but my guess is that it lazily evaluates docstrings in imported modules when displaying available prompts. Certainly this improves memory consumption. Personally, I use VS Code for ML/CV-related projects in Python. Python is by far the most reasonable tool for this field, because there are a lot of great libraries. The downside is that my whole library stack (NumPy, SciPy, Pandas, Scikit-Learn, Scikit-Image, OpenCV, Imbalanced-Learn, Matplotlib, Seaborn, Keras, TensorFlow, PyTorch, etc., plus many of their dependencies) is quite heavy. I don’t know much about other languages/implementations, but in CPython it takes a few seconds to start the interpreter and load dependencies before execution.

As developers, we mostly work on decent machines (16 GB at least; I think 32/64 GB is standard for ML/CV applications). For me, it wouldn’t be much of an expense (just a few GB of RAM) to load docstrings from all project dependencies eagerly and pre-cache them. Certainly, startup would take a few minutes, but I can use that time to make a coffee (which I always end up doing when I start my work anyway). I’d rather spend a few minutes on startup (a one-time expense) and enjoy great IntelliSense performance afterwards, than pull my hair out during a whole day of work. A minimal sketch of the idea follows below.
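For illustration, here is a minimal sketch of that eager approach, using only the standard library. The module names and the helper name build_docstring_cache are mine, purely illustrative; a real implementation would enumerate the project’s environment instead.

import importlib
import inspect

def build_docstring_cache(module_names):
    """Map 'module.member' -> docstring for every public member."""
    cache = {}
    for name in module_names:
        module = importlib.import_module(name)
        cache[name] = inspect.getdoc(module) or ""
        for attr, obj in inspect.getmembers(module):
            doc = inspect.getdoc(obj)
            if doc and not attr.startswith("_"):
                cache[f"{name}.{attr}"] = doc
    return cache

# One-time expense at startup; afterwards every lookup is a dict access.
cache = build_docstring_cache(["json", "collections"])
print(cache["json.dumps"][:60])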

### PS To be more exact: we’re observing a 15–30 second lag, which makes IntelliSense practically unusable.

### PS2 To keep startup time in check, we could pre-cache docstrings only from opened files. Import statements could be traversed one by one and docstrings collected. Imported dependencies should be tracked so that the docstring cache can be re-evaluated when they change. This would keep RAM usage in check too.

The final result would also be CPU/RAM efficient, as docstrings would be kept in a dictionary-like collection. Dictionaries are sweet and fast, and we would only keep docstrings as values (strings aren’t that heavy). When prompted, IntelliSense would fetch the docstring from the cache and render it. Accessing values in a dictionary is extremely fast, and rendering wouldn’t be a problem either.

The cache should be shared among opened files/modules. A rough sketch of the import-traversal step follows below.
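As a sketch of that traversal step, the standard-library ast module can collect a file’s imports without executing it; the helper name imported_modules is mine, purely illustrative:

import ast

def imported_modules(source):
    """Return the top-level module names imported by the given source text."""
    names = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            names.add(node.module.split(".")[0])
    return names

print(imported_modules("from sklearn.svm import SVC\nimport numpy as np"))
# e.g. {'sklearn', 'numpy'}

The resulting names could then feed a docstring cache like the one sketched above, re-running whenever a file’s imports change.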

### PS3 To make it even more efficient: for interpreted languages, e.g. Python, JS, etc., we don’t even need to import those dependencies. They’re not compiled/pre-compiled, so we could just as easily load them as text files and fetch docstrings using a simple regex.

Tracking dependencies wouldn’t be much of a problem either; it could be done with a regex and trivial parsing. A toy version of the regex idea is sketched below.
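For illustration, a toy version of that regex idea; the pattern only handles the simplest single-line signature with a triple-quoted docstring, so it demonstrates the concept rather than a robust parser. (In practice, parsing the text with ast and ast.get_docstring would be sturdier, while still avoiding imports.)

import re

DOCSTRING_RE = re.compile(
    r'^(?:class|def)\s+(\w+)[^\n]*:\s*\n\s+"""(.*?)"""',
    re.DOTALL | re.MULTILINE,
)

def docstrings_from_text(source):
    """Best-effort map of class/function names to their docstrings."""
    return {m.group(1): m.group(2).strip() for m in DOCSTRING_RE.finditer(source)}

sample = 'def fit(X, y):\n    """Fit the model to data."""\n'
print(docstrings_from_text(sample))  # {'fit': 'Fit the model to data.'}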

### PS4 I apologize if you find my attitude demanding or leech-like in nature. I’d be happy to contribute, but I don’t have much experience in either front-end or IDE development.

Kind Regards, Piotr Rarus

Issue Analytics

  • State: closed
  • Created 5 years ago
  • Reactions: 6
  • Comments: 7 (2 by maintainers)

Top GitHub Comments

1 reaction
piotr-rarus commented, Oct 24, 2018

Eventually I added the following to my settings.json:

"python.jediEnabled": true,
"python.jediMemoryLimit": 4096,

It helped.

I suspect that the extension developers got some guidelines from Microsoft to keep memory usage low; after all, VS Code aims to be a fast and lightweight IDE. From what I’ve gathered while observing memory consumption, the default memory limit (1024 MB) isn’t set for IntelliSense alone but for the IDE itself. This leaves ~200 MB for IntelliSense, which is a little too low. When memory consumption exceeds the cap, something strange happens and IntelliSense performance drops significantly. Setting the cap higher helps a lot, and in my case memory consumption only grows by ~200 MB at most.

I’ve traveled a long, long way to get to this point. I saw a lot of people complaining about IntelliSense performance, and nowhere did I see the trick of raising the memory cap. I’d like to make a request: could this be shared with devs in some more visible place, at least temporarily? After all, we don’t want to clutter this repository with yet more IntelliSense performance complaints. An FAQ would do.

1 reaction
montant commented, Oct 23, 2018

I’m facing the very same issue, and because of it I have to disable the Python extension for editing… Python files. I have no opinion on the best way to handle it, but this issue is drastically impacting usability for sure.
