Allow for model evaluation directly from cdqa
See original GitHub issue.
The idea is to implement the evaluate.py script inside the package under /utils.
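For context, the metrics such an evaluate.py would typically report are the SQuAD-style exact match (EM) and token-level F1. The sketch below is purely illustrative and is not taken from the cdQA codebase; the function names and behaviour follow the standard SQuAD evaluation script.

```python
import re
import string
from collections import Counter

def normalize_answer(s):
    """Lowercase, drop punctuation and articles, collapse whitespace (SQuAD-style)."""
    s = "".join(ch for ch in s.lower() if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(prediction, ground_truth):
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize_answer(prediction) == normalize_answer(ground_truth))

def f1_score(prediction, ground_truth):
    """Token-level F1 between the normalized prediction and gold answer."""
    pred_tokens = normalize_answer(prediction).split()
    gold_tokens = normalize_answer(ground_truth).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```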
Issue Analytics
- State:
- Created 4 years ago
- Comments: 17
Top Results From Across the Web
Question answering using CDQA(BERT) Atos Big data - Kaggle

Question and Answering With Bert | Towards Data Science
Let's first find a model to use. We head over to huggingface.co/models and click on Question-Answering to the left.

Building a Native French Question-Answering Dataset
or Language Modeling – which can be tackled in a self- ... cdQA-annotator allows for direct selection of the answer.

Deep learning based question answering system in Bengali
We depicted our workflow in Figure 1. We also evaluate our model on a ... learning where pretrained models were evaluated directly on ...

How to create your own Question-Answering system ... - Morioh
The cdQA-suite was built to enable anyone who wants to build a ... datasets for model evaluation and fine-tuning; cdQA-ui: a user-interface that ...
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
I think it should just take as input the output of QAPipeline.predict(), and we will need to run the code:

Didn't we agree on #135 to create a method prepare_evaluation() in https://github.com/fmikaelian/cdQA/blob/develop/cdqa/utils/metrics.py to handle this instead of doing it directly in QAPipeline.predict()?
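Purely as an illustration of the prepare_evaluation() idea discussed above: a helper in cdqa/utils/metrics.py could take the predictions produced by QAPipeline.predict() together with gold answers and aggregate the EM/F1 helpers sketched earlier. The signature and the assumed shape of the inputs below are hypothetical, not the project's actual API.

```python
# Hypothetical helper along the lines of the prepare_evaluation() proposal above.
# Assumptions (not confirmed in the thread): `predictions` maps a question id to the
# answer string obtained from QAPipeline.predict(), and `references` maps the same
# ids to lists of acceptable gold answers. Reuses exact_match() and f1_score()
# from the earlier sketch.
def prepare_evaluation(predictions, references):
    """Return aggregate SQuAD-style exact match and F1, in percent."""
    em_total, f1_total = 0.0, 0.0
    for qid, gold_answers in references.items():
        pred = predictions.get(qid, "")
        # As in the official SQuAD script, score against the best-matching gold answer.
        em_total += max(exact_match(pred, gold) for gold in gold_answers)
        f1_total += max(f1_score(pred, gold) for gold in gold_answers)
    n = len(references)
    return {"exact_match": 100.0 * em_total / n, "f1": 100.0 * f1_total / n}

# Toy example:
# prepare_evaluation({"q1": "Paris"}, {"q1": ["Paris", "the city of Paris"]})
# -> {"exact_match": 100.0, "f1": 100.0}
```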