Question Answering demonstrator for contributed model stopped working
Environment info
All of this is run on the Hugging Face platform for contributed models.
Processing of the model on other hosts works correctly, using the versions described below. Was there an upgrade to the deployed transformers demonstration code that breaks the loading of contributed Q&A models?
- transformers version: 2.2.1
- Platform: Ubuntu 18.04
- Python version: 3.7.7
- PyTorch version (GPU?): 1.3.1
- Tensorflow version (GPU?):
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: No
Who can help
Information
Model I am using (Bert, XLNet …): the contributed model mfeb/albert-xxlarge-v2-squad2,
based on ALBERT xxlarge v2, fine-tuned on SQuAD 2.0.
The problem arises when using:
- the official example scripts: (give details below)
- my own modified scripts: (give details below)
- web demonstrator for question answering, using the contributed model
The task I am working on is:
- an official GLUE/SQuAD task: SQuAD 2.0
- my own task or dataset: (give details below)
To reproduce
Steps to reproduce the behavior:
- Visit https://huggingface.co/mfeb/albert-xxlarge-v2-squad2
- Press the "compute" button
- See the following message:
Model name 'mfeb/albert-xxlarge-v2-squad2' was not found in tokenizers model name list (albert-base-v1, albert-large-v1, albert-xlarge-v1, albert-xxlarge-v1, albert-base-v2, albert-large-v2, albert-xlarge-v2, albert-xxlarge-v2). We assumed 'mfeb/albert-xxlarge-v2-squad2' was a path or url to a directory containing vocabulary files named ['spiece.model'], but couldn't find such vocabulary files at this path or url.
This seems to imply that the code running the SQuAD demo is not recognizing that the model is a contributed model (i.e. a user-namespaced model id, not one of the built-in shortcut names).
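The failure mode above can be sketched in plain Python. This is an illustrative reconstruction of the lookup logic implied by the error message, not the actual transformers internals: the loader checks a fixed set of shortcut names, and when the name is absent it falls back to treating the string as a local directory containing `spiece.model`. A namespaced id like `mfeb/albert-xxlarge-v2-squad2` matches neither branch, producing exactly this error.

```python
import os

# Illustrative shortcut-name set, copied from the error message above.
# (In transformers this lives in the ALBERT tokenizer's pretrained-vocab map.)
SHORTCUT_NAMES = {
    "albert-base-v1", "albert-large-v1", "albert-xlarge-v1", "albert-xxlarge-v1",
    "albert-base-v2", "albert-large-v2", "albert-xlarge-v2", "albert-xxlarge-v2",
}

def resolve_vocab(name_or_path: str) -> str:
    """Mimic the failing lookup: shortcut name -> hosted vocab;
    otherwise treat the string as a local directory holding spiece.model."""
    if name_or_path in SHORTCUT_NAMES:
        return f"hub vocab for {name_or_path}"
    vocab_file = os.path.join(name_or_path, "spiece.model")
    if os.path.isfile(vocab_file):
        return vocab_file
    raise OSError(
        f"Model name '{name_or_path}' was not found in tokenizers model name "
        f"list, and no vocabulary file was found at '{vocab_file}'."
    )
```

A correct resolver for contributed models needs a third branch that fetches the vocabulary from the model hub under the `user/model` namespace, which is what the deployed demonstrator appears to be missing here.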
Expected behavior
An answer to the question: London
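For reference, the behavior the web demonstrator is expected to reproduce can be sketched with the `question-answering` pipeline. The question and context below are invented for illustration (the original question is not shown in the issue), and running this requires downloading the full xxlarge checkpoint:

```python
from transformers import pipeline

# Sketch of what the demonstrator does under the hood; the model id is the
# contributed checkpoint from this issue. Question/context are hypothetical.
qa = pipeline("question-answering", model="mfeb/albert-xxlarge-v2-squad2")
result = qa(
    question="Where does Sarah live?",
    context="Sarah moved to London in 2015 and has lived there since.",
)
print(result["answer"])
```

When the contributed model loads correctly, the answer span extracted from the context should be "London".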
Yes. Working now. Thanks!
I’m closing, don’t hesitate to reopen if anything goes wrong.