
Can't cache fairseq model


Summary

I’m trying to cache a RoBERTa model which I’m using for a masked language modeling task, as I don’t want it to be reloaded every time something changes on the page. I’m trying to use @st.cache to this end, but it fails with the following error trace:

AttributeError: 'function' object has no attribute 'memo'

Streamlit encountered an error while caching the body of load_model(). This is likely due to a bug in /groups/wall2-ilabt-iminds-be/nlp-students/users/sdeblanc/mt2021_streamlit/src/fairseq/fairseq/utils.py near line 455:


if module_path not in import_user_module.memo:
    import_user_module.memo.add(module_path)
Please modify the code above to address this.

If you think this is actually a Streamlit bug, you may file a bug report here.

Traceback:
File "/groups/wall2-ilabt-iminds-be/nlp-students/users/sdeblanc/mt2021_streamlit/WebApp.py", line 25, in <module>
    def load_model(path, modelname):
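For what it’s worth, the fairseq code in the trace stores a memo set as an attribute on the `import_user_module` function object and assumes the attribute already exists when that line runs. A minimal plain-Python sketch of that pattern (hypothetical module path, no fairseq or Streamlit needed) shows how a `hasattr` guard avoids exactly this kind of `AttributeError`:

```python
def import_user_module(module_path):
    # Sketch of the pattern in fairseq/utils.py: a set of already-imported
    # paths kept as an attribute on the function object itself. Creating
    # the attribute lazily guards against the AttributeError in the trace.
    if not hasattr(import_user_module, "memo"):
        import_user_module.memo = set()
    if module_path not in import_user_module.memo:
        import_user_module.memo.add(module_path)
        return True   # first sighting: the real code would import the module here
    return False      # already seen: skip the import

first = import_user_module("my_plugin")   # → True
second = import_user_module("my_plugin")  # → False
```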

Steps to reproduce

The environment is reproducible with the following requirements.txt file:

pandas
nltk
scikit-learn
pillow
streamlit
torch 
torchvision
-e git+https://github.com/pytorch/fairseq.git#egg=fairseq

Then I tried to load the model as follows:

# Load in the model
@st.cache()
def load_model(path, modelname):
    model = RobertaModel.from_pretrained(path, checkpoint_file=modelname)
    model.eval()
    return model

model = load_model('../models/robbert/','RobBERT-base.pt')

Expected behavior:

The model should not be reloaded every time something on the page changes.

Actual behavior:

An error is displayed upon loading in the page.

Is this a regression?

No

Debug info

  • Streamlit version: 0.70.0
  • Python version: 3.8.6
  • Using Conda
  • OS version: Ubuntu 20
  • Browser version: Google Chrome

Issue Analytics

  • State: open
  • Created: 3 years ago
  • Reactions: 1
  • Comments: 6

Top GitHub Comments

2 reactions
sdblanc commented, Nov 6, 2020

If it makes it any easier for you, I published my basic skeleton in this repository: https://github.com/sdblanc/mt2021_streamlit . When you run it, you’ll see that the model shows the prediction for the sentence ‘Het meisje <mask> daar loopt.’ without any problems. If you then enter the same text in the text field and click ‘Predict’, it takes quite a long time to make the same prediction, and I think this is because the model is loaded again (this waiting time is not present in a similar notebook). I want to use caching to mitigate this waiting time.

0 reactions
jadore801120 commented, Mar 7, 2022

I think it is solved with the experimental_singleton decorator now.

@st.experimental_singleton
def load_model(path, modelname):
    model = RobertaModel.from_pretrained(path, checkpoint_file=modelname)
    model.eval()
    return model
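For context, `st.experimental_singleton` was later stabilized as `st.cache_resource` (Streamlit 1.18+). Conceptually it runs the loader once per process and hands every rerun the same object. A minimal plain-Python stand-in (hypothetical `singleton` decorator, no Streamlit or fairseq required) behaves like this:

```python
import functools

def singleton(func):
    """Hypothetical stand-in for st.experimental_singleton: run the body
    once per unique argument tuple, then return the same object forever."""
    cache = {}

    @functools.wraps(func)
    def wrapper(*args):
        if args not in cache:
            cache[args] = func(*args)
        return cache[args]

    return wrapper

calls = []

@singleton
def load_model(path, modelname):
    calls.append((path, modelname))
    return object()  # stand-in for the loaded RoBERTa model

m1 = load_model("../models/robbert/", "RobBERT-base.pt")
m2 = load_model("../models/robbert/", "RobBERT-base.pt")
# m1 is m2, and the loader body ran exactly once.
```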