Extremely slow performance and high CPU usage for fastai library
I am using Neovim 0.5.0-dev+nightly with jedi-language-server, which works well and quickly for most packages, numpy for example. However, with the fastai library the results sometimes take 10-30 minutes to show up when prompted.
To reproduce my issue, you can install fastai via miniconda3:

```sh
conda install -c fastai -c pytorch fastai
```

and then install jedi-language-server:

```sh
conda install jedi-language-server
```
Then open a file in Neovim with the conda env activated and jedi-language-server enabled. Here is a short example file:

```python
from fastai.vision.all import *

path = untar_data(URLs.PETS)/'images'

def is_cat(x): return x[0].isupper()

dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224))

learn = cnn_learner(dls, resnet34, metrics=error_rate)
```
Then try to bring up hover help on one of the fastai functions, like `ImageDataLoaders`. For me this takes several minutes and keeps one of my CPUs at 100% for that entire time.
I'm wondering if this has something to do with the way that fastai is designed to be imported, with `from fastai import *`, which is not typical practice for Python libraries.
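For context, a wildcard import forces a completion engine to resolve every public name the module exports before it can answer a query. A minimal sketch of that enumeration, using the stdlib `os` module as a stand-in (since fastai may not be installed), could look like:

```python
import importlib

def public_names(module_name):
    """Names that `from module import *` would bring into scope:
    the module's __all__ if it defines one, otherwise every
    attribute that doesn't start with an underscore."""
    mod = importlib.import_module(module_name)
    explicit = getattr(mod, "__all__", None)
    if explicit is not None:
        return sorted(explicit)
    return sorted(n for n in dir(mod) if not n.startswith("_"))

# With a large library like fastai, this list (and the transitive
# imports behind it) is what the language server has to analyze.
names = public_names("os")
print(len(names), "names exported, e.g.", names[:3])
```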
If there is any more information I can give you, please let me know. Thanks!
Issue Analytics

- State:
- Created: 3 years ago
- Comments: 9 (4 by maintainers)
Just in case someone else might find this useful: the Lua code that got what @HansPinckaers was describing working for me with the Neovim 0.5 built-in LSP was this:
The `on_attach` settings are other general settings.

Hmm, I can reproduce. This looks like the Jedi issue being tracked here: https://github.com/davidhalter/jedi/issues/1721
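The Lua snippet referenced above was not preserved in this thread. A minimal sketch of the kind of setup being described, assuming the nvim-lspconfig plugin is installed and `on_attach` is a user-defined function, might look roughly like:

```lua
-- Hypothetical sketch: register jedi-language-server with Neovim 0.5's
-- built-in LSP client via the nvim-lspconfig plugin (assumed installed).
local lspconfig = require('lspconfig')

lspconfig.jedi_language_server.setup({
  -- `on_attach` is a user-defined function holding general settings
  -- (keymaps, buffer options, etc.); its contents are not shown here.
  on_attach = on_attach,
})
```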