Ability to use cdqa on CPU
The `self.device` parameter controls whether to use the GPU or the CPU. To enable CPU-only execution, it seems we need to use the model's `no_cuda` parameter.
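For reference, here is a minimal sketch of that device selection, assuming the usual PyTorch pattern (this is not the package's exact code; `no_cuda` below stands in for the model parameter mentioned above):

```python
import torch

# Sketch only: pick the device the way described above.
# `no_cuda` is a local stand-in for the model's parameter, not cdqa's exact code.
no_cuda = True  # request CPU-only execution
device = torch.device("cpu" if no_cuda or not torch.cuda.is_available() else "cuda")
print(device)  # -> cpu
```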
I just released the CPU version of the model, in both a PyTorch binary version and a joblib version with the sklearn wrapper.
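In case it helps, a hypothetical loading snippet for the two CPU artifacts; the file names below are my guesses, not the actual release names:

```python
import torch
from joblib import load

# Hypothetical file names for the CPU release; replace with the actual paths.
reader = load('models/bert_qa_cpu_sklearn.joblib')  # sklearn-wrapped reader
state_dict = torch.load('models/bert_qa_cpu.bin', map_location='cpu')  # raw PyTorch weights on CPU
```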
The error occurs when we try to load a model that was trained using `apex`, using the following: `model = load('../models/bert_qa_squad_v1.1_sklearn.joblib')`.

What I understand is that when we save a model that was trained using `apex`, it will also save the `apex` configuration (e.g. the use of apex's `BertLayerNorm` in the model architecture instead of PyTorch's standard `LayerNorm`, as explained in the issue referenced above). So when we load it, it will look for the `apex` configuration, which cannot be found on a machine with no GPU.

After searching for a solution and reading the related issues, the only solution I can think of for now is to retrain a model without `apex` and save it.
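For completeness, a rough reproduction of the failure mode described above on a CPU-only machine (the exact exception type is an assumption; the missing apex dependency may surface as a missing module or attribute during unpickling):

```python
from joblib import load

# Attempt to load an apex-trained checkpoint on a machine without apex/GPU.
try:
    model = load('../models/bert_qa_squad_v1.1_sklearn.joblib')
except (ModuleNotFoundError, AttributeError) as e:
    # The pickled architecture references apex's fused BertLayerNorm,
    # which cannot be imported on a CPU-only machine without apex installed.
    print(f"Could not load apex-trained model: {e}")
```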