
Language server caching over-disposes LS instances, breaks on interpreter change

See original GitHub issue

#8815 introduced LS caching and ref counting, so that the interactive window can reference the same language server as regular code and keep it around until it’s no longer needed. The keys are resource-scoped, which is good for ensuring servers are shared and the minimum number of LSs are spawned.

Unfortunately, the activeServer and resource properties are essentially “globals” for the extension as a whole. The current code kills LSs as the user moves between files in order to save resources; however, this breaks multi-root workspaces, which rely on being able to navigate around and still get code completions (e.g., I open two files, one in each workspace folder, and switch between them).
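To make the failure mode concrete, here is a deliberately simplified TypeScript sketch of that pattern (activeServer, activeResource, and startServerFor are illustrative stand-ins, not the extension’s actual code): a single global server gets disposed whenever the active editor crosses into a different workspace folder, so alternating between two folders keeps killing and respawning each folder’s LS.

```typescript
import * as vscode from 'vscode';

// Hypothetical stand-ins for the extension's real server type and factory.
interface LanguageServer { dispose(): void; }

function startServerFor(resource: vscode.Uri): LanguageServer {
    // Stub: the real extension would spawn and configure a language server here.
    console.log(`starting LS for ${resource.toString()}`);
    return { dispose: () => console.log(`disposing LS for ${resource.toString()}`) };
}

// A single "global" active server for the whole extension.
let activeServer: LanguageServer | undefined;
let activeResource: vscode.Uri | undefined;

vscode.window.onDidChangeActiveTextEditor((editor) => {
    if (!editor) {
        return;
    }
    const folder = vscode.workspace.getWorkspaceFolder(editor.document.uri);
    const resource = folder?.uri;
    if (resource?.toString() !== activeResource?.toString()) {
        activeServer?.dispose(); // over-eager disposal: the other folder still needs its LS
        activeServer = resource ? startServerFor(resource) : undefined;
        activeResource = resource;
    }
});
```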

Additionally, I believe the current code over-disposes: after the first active-LS change, the next spawned server cannot be killed and isn’t configured correctly. This leads both to https://github.com/microsoft/vscode-python/issues/5132#issuecomment-603571156 and to a related bug where the next-spawned server’s interpreter doesn’t change.

The solution here is to eliminate activeServer and resource and make LSs fully resource-scoped (only accessible by key). Once an LS is no longer referenced (no handles from the interactive window, and its workspace no longer exists), the LS can be thrown away.
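As a rough sketch of what “fully resource-scoped” could look like (all names here, such as LanguageServerCache, acquire, and maybeDispose, are hypothetical and not the extension’s actual API; the cache key is assumed to be the workspace-folder URI): consumers take ref-counted handles from a map, and a server is disposed only once its last handle is released and its folder has been removed from the workspace.

```typescript
import * as vscode from 'vscode';

// Hypothetical stand-ins for the extension's real server type and factory.
interface LanguageServer { dispose(): void; }

interface CacheEntry { server: LanguageServer; refCount: number; }

class LanguageServerCache {
    private readonly entries = new Map<string, CacheEntry>();

    constructor(private readonly create: (resource: vscode.Uri) => LanguageServer) {}

    // Every consumer (editor, interactive window, ...) acquires a handle and
    // releases it exactly once when done. `resource` is the workspace-folder URI.
    acquire(resource: vscode.Uri): { server: LanguageServer; release: () => void } {
        const key = resource.toString();
        let entry = this.entries.get(key);
        if (!entry) {
            entry = { server: this.create(resource), refCount: 0 };
            this.entries.set(key, entry);
        }
        const held = entry;
        held.refCount += 1;
        let released = false;
        return {
            server: held.server,
            release: () => {
                if (released) { return; } // guard against double-release (over-dispose)
                released = true;
                held.refCount -= 1;
                this.maybeDispose(key);
            },
        };
    }

    // Re-check a key when a handle is released or its workspace folder goes away.
    maybeDispose(key: string): void {
        const entry = this.entries.get(key);
        if (!entry) { return; }
        const folderStillOpen = (vscode.workspace.workspaceFolders ?? [])
            .some((f) => f.uri.toString() === key);
        if (entry.refCount <= 0 && !folderStillOpen) {
            entry.server.dispose();
            this.entries.delete(key);
        }
    }
}

// Dispose servers whose folders were removed from the multi-root workspace.
export function watchWorkspaceFolders(cache: LanguageServerCache): vscode.Disposable {
    return vscode.workspace.onDidChangeWorkspaceFolders((e) => {
        for (const removed of e.removed) {
            cache.maybeDispose(removed.uri.toString());
        }
    });
}
```

With no extension-wide activeServer, switching editors only changes which key is looked up, and the idempotent release() plus the folder check is one way to avoid both the double-dispose and the stale-interpreter symptoms described above.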

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Reactions: 9
  • Comments: 8 (1 by maintainers)

Top GitHub Comments

3 reactions
nfrasser commented, Jul 17, 2020

Has anyone tried the new Pylance extension? So far it seems to be working with multi-root workspaces for me.

2 reactions
ErwanDL commented, May 20, 2020

Are there any advancements on this bug? This is actually a major pain point for me, as it prevents me from using MLPS on my main Python project at work (which uses multi-root workspaces).

Unless you deem this a really advanced bugfix, I would love to help on it. Any guidance on where to start working? 😃

Read more comments on GitHub >

Top Results From Across the Web

Multi-root workspace support prevents extension from ... - GitHub
Use a single language server instance when type set to Node #11315. Merged. Language server caching over-disposes LS instances, breaks on ...

Changing the caching behavior of your Amazon Lightsail ...
Specify the following to cache all files in the document root of an Apache web server running on a Lightsail instance. var/www/html/. Specify ...

VS Code Julia language-server caching?
I currently have to use VS Code with a rather heavy/large Julia environment. The first time the language-server started, it needed ages ...

Reading and writing data to the cache - Apollo GraphQL Docs
Any changes you make to cached data with writeFragment are not pushed to your GraphQL server. If you reload your environment, these changes...

Troubleshooting Guide | LSCache for WordPress
Troubleshooting the LiteSpeed Cache for WordPress Plugin. How to fix common problems.
