
Extremely slow performance and high CPU usage for fastai library

See original GitHub issue

I am using Neovim 0.5.0-dev+nightly with jedi-language-server, which works well and quickly for most packages (numpy, for example). However, with the fastai library, results sometimes take 10-30 minutes to show up when prompted.

To reproduce the issue, install fastai via miniconda3 with conda install -c fastai -c pytorch fastai, then install jedi-language-server with conda install jedi-language-server
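The setup steps above can be collected into a small script. The environment name below is hypothetical; the channels and package names are as given in the issue:

```shell
# Create and activate a fresh environment (name is an assumption)
conda create -n fastai-lsp-repro -y python
conda activate fastai-lsp-repro

# Install fastai and jedi-language-server as described in the issue
conda install -y -c fastai -c pytorch fastai
conda install -y jedi-language-server
```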

Then open a file in Neovim with the conda env activated and jedi-language-server attached. Here is a short example file:

from fastai.vision.all import *

path = untar_data(URLs.PETS)/'images'

def is_cat(x): return x[0].isupper()
dls = ImageDataLoaders.from_name_func(
        path, get_image_files(path), valid_pct=0.2, seed=42,
        label_func=is_cat, item_tfms=Resize(224))

learn = cnn_learner(dls, resnet34, metrics = error_rate)

Then try to bring up hover help on one of the fastai functions, such as ImageDataLoaders. For me this takes several minutes and pins one of my CPUs at 100% for that entire time.

I’m wondering if this has something to do with the way that fastai is designed to be imported with from fastai import *, which is not typical practice for Python libraries.
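To get a feel for why wildcard imports matter to a static analyzer, the sketch below counts the names a from-module-import-* would bind, using a stdlib module as a stand-in for fastai (whose .all modules chain several wildcard re-exports together):

```python
import importlib

def wildcard_names(module_name):
    """Names that `from <module> import *` would bind in the caller."""
    mod = importlib.import_module(module_name)
    if hasattr(mod, "__all__"):
        return list(mod.__all__)
    # Without __all__, a wildcard import binds every public attribute
    return [name for name in dir(mod) if not name.startswith("_")]

# Even a single stdlib module exports hundreds of names this way,
# each of which the language server must resolve for completion/hover.
print(len(wildcard_names("os")))
```

Each extra name multiplies the work jedi does when resolving a symbol, which is one plausible reason a library built around star-imports is slow to analyze.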

If there is any more information I can give you, please let me know. Thanks!

Issue Analytics

  • State: open
  • Created: 3 years ago
  • Comments: 9 (4 by maintainers)

Top GitHub Comments

1 reaction
atticusmatticus commented on Feb 28, 2021

Just in case someone else finds this useful, the Lua code that got what @HansPinckaers was describing working for me with the Neovim 0.5 built-in LSP was this:

    require'lspconfig'.jedi_language_server.setup{
        init_options = {
            jediSettings = {
                -- have jedi import these modules at runtime instead of
                -- analyzing them statically
                autoImportModules = { "fastai", "fastcore" }
            }
        },
        on_attach = on_attach
    }

The on_attach callback holds my other general LSP settings.
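For context, autoImportModules appears to map to Jedi's own auto_import_modules setting, which tells Jedi to actually import the listed modules rather than analyze them statically, trading import-time code execution for speed on heavily dynamic libraries. A rough equivalent when using Jedi directly (the mapping is an assumption based on the option name) would be a settings fragment like:

```python
import jedi

# Import these modules instead of analyzing them statically;
# faster for dynamic libraries, but runs their import-time code.
jedi.settings.auto_import_modules += ["fastai", "fastcore"]
```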

1 reaction
pappasam commented on Feb 2, 2021

Hmm, I can reproduce. This looks like the Jedi issue being tracked here: https://github.com/davidhalter/jedi/issues/1721


Top Results From Across the Web

  • Performance Tips and Tricks - fastai: This document will show you how to speed things up and get more out of your GPU/CPU. Mixed Precision Training. Combined FP16/FP32 training...
  • Troubleshooting - fastai: A lot of problems disappear when a fresh virtual environment dedicated to fastai is created. The following example is for using a conda...
  • Performance Improvement Through Faster Software ...: This thread is dedicated to tips and tricks on improving performance of your ML/DL code, without making any changes to your code or...
  • Troubleshooting my Hardware - Part 1 (2018) - Fast.ai forums: It looks like you're only using about 1.2 GB of memory on the GPU. I don't know what your learner is but if...
  • Fastai on Apple M1 - Deep Learning - fast.ai Course Forums: Is it very slow as compared to google colab? Thank you! ... So, while it is nice and fast for CPU, it is...
