Test QAPipeline with GPU
See original GitHub issue

@fmikaelian
I have implemented the changes as discussed. I tested the `fit()` method for the retriever and the `predict()` method in a notebook included in `examples`. Everything is working fine. Could you please test the `fit()` method for the reader on GPU to check that everything is OK?

Please note that the current implementation of `QAPipeline` still uses the `TfidfRetriever` as you implemented it (passing the dataframe column as input, i.e. `TfidfRetriever.fit(df['content'])`). It should be changed once you have implemented the improvement I proposed in #95.
_Originally posted by @andrelmfarias in https://github.com/fmikaelian/cdQA/pull/101#issuecomment-488628180_
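For context, the retriever-fitting pattern described above can be sketched as follows (a minimal sketch; the import path, constructor defaults, and dataframe layout are assumptions, not the exact code from the PR):

```python
import pandas as pd
from cdqa.retriever.tfidf_sklearn import TfidfRetriever  # import path is an assumption

# Hypothetical corpus: one passage of text per row
df = pd.DataFrame({'content': [
    'BERT is a transformer-based language model pre-trained on large corpora.',
    'TF-IDF weighs terms by their frequency in a document and rarity overall.',
]})

retriever = TfidfRetriever()
# Current behaviour discussed above: the dataframe column itself is passed in
retriever.fit(df['content'])
```

The comment above flags exactly this call as the part that should change after #95.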
Issue Analytics
- State:
- Created: 4 years ago
- Reactions: 3
- Comments: 28
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Using a pre-trained reader model on GPU:
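The code block for this comment was not preserved; a minimal sketch of the idea, assuming the reader is a fitted `BertQA` object serialized with joblib (the file path and the `model`/`device` attributes are assumptions):

```python
import torch
from sklearn.externals import joblib  # joblib was still vendored in scikit-learn at the time

# Load a reader that was trained and serialized earlier (path is hypothetical)
reader = joblib.load('models/bert_qa_squad.joblib')

# Move the underlying PyTorch model onto the GPU when one is available
reader.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
reader.model.to(reader.device)
```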
I tested `BertQA.fit()` with the following code on Colab, and got the following error:
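The exact snippet and traceback are not shown above; a fit call along these lines (the dataset path and every hyperparameter are assumptions) exercises the same code path:

```python
from cdqa.reader.bertqa_sklearn import BertProcessor, BertQA

# Convert SQuAD-style training data into examples and features
train_processor = BertProcessor(do_lower_case=True, is_training=True)
train_examples, train_features = train_processor.fit_transform(X='data/train-v1.1.json')

# Fit the reader on a GPU-enabled Colab runtime; hyperparameters are illustrative only
reader = BertQA(train_batch_size=12,
                learning_rate=3e-5,
                num_train_epochs=2,
                output_dir='models')
reader.fit(X=(train_examples, train_features))
```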
The problem occurs in https://github.com/fmikaelian/cdQA/blob/0dce89f48ab53a69e8fdb8b76f39029f465f5bbc/cdqa/reader/bertqa_sklearn.py#L1165
It probably comes from the direct adaptation of `run_squad.py` from Hugging Face to our `BertQA` class: https://github.com/huggingface/pytorch-pretrained-BERT/blob/3fc63f126ddf883ba9659f13ec046c3639db7b7e/examples/run_squad.py#L1036
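For reference, the end-of-training save block in `run_squad.py` of that era looked roughly like the helper below (a paraphrase; see the linked commit for the exact code):

```python
import os
import torch

def save_trained_model(model, tokenizer, output_dir):
    """Paraphrase of the save logic at the end of run_squad.py (pytorch-pretrained-BERT)."""
    # Unwrap DataParallel so the state-dict keys don't carry a 'module.' prefix
    model_to_save = model.module if hasattr(model, 'module') else model
    torch.save(model_to_save.state_dict(),
               os.path.join(output_dir, 'pytorch_model.bin'))
    model_to_save.config.to_json_file(os.path.join(output_dir, 'config.json'))
    # Vocabulary save: in cdQA the tokenizer lives in BertProcessor, not BertQA,
    # which is plausibly why a direct copy of this line fails in BertQA.fit()
    tokenizer.save_vocabulary(output_dir)
```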
For now, I will delete the line in the `fit()` method of `BertQA`. If needed, we might include this saving in the `BertProcessor` class (when the attribute `is_training` is `True`), where we process the text and create a vocabulary.
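A hypothetical sketch of that alternative (only `is_training` and the idea of saving the vocabulary come from the comment; everything else here is assumed):

```python
import os

class BertProcessor:
    """Sketch only: the real class also converts raw text into BERT features."""

    def __init__(self, tokenizer, is_training=False, output_dir=None):
        self.tokenizer = tokenizer
        self.is_training = is_training
        self.output_dir = output_dir

    def fit(self, X, y=None):
        # ... existing processing: build examples and features from X ...
        if self.is_training and self.output_dir is not None:
            os.makedirs(self.output_dir, exist_ok=True)
            # Persist the vocabulary alongside the other model artifacts
            self.tokenizer.save_vocabulary(self.output_dir)
        return self
```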