
Pipeline Loading Models and Tokenizers

See original GitHub issue

❓ Questions & Help

Details

Hi, I'm trying to use 'fmikaelian/flaubert-base-uncased-squad' for question answering. I understand that I should load the model and the tokenizer, but I'm not sure how to do this.

So far, my code is basically:

```python
from transformers import pipeline, BertTokenizer

nlp = pipeline('question-answering',
               model='fmikaelian/flaubert-base-uncased-squad',
               tokenizer='fmikaelian/flaubert-base-uncased-squad')
```

Most probably this can be solved with a two-liner.

Many thanks

A link to original question on Stack Overflow: https://stackoverflow.com/questions/60287465/pipeline-loading-models-and-tokenizers
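
For context, here is a minimal sketch of the kind of two-liner being asked about, assuming the 'fmikaelian/flaubert-base-uncased-squad' checkpoint from the question is available on the model hub and that the Auto classes can map it to the right architecture (neither is confirmed in this thread). Passing loaded objects rather than strings also sidesteps the pipeline's own string-based lookup:

```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

# Assumption: the checkpoint name from the question resolves on the model hub
# and the Auto classes infer the correct architecture from its config.
model_name = 'fmikaelian/flaubert-base-uncased-squad'

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)

# Hand the loaded objects to the pipeline instead of repeating the string identifier.
nlp = pipeline('question-answering', model=model, tokenizer=tokenizer)
```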

Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Comments: 20 (9 by maintainers)

Top GitHub Comments

3 reactions
LysandreJik commented on Mar 2, 2020

@fmikaelian That’s really cool, thanks for taking the time to fine-tune those models! I’ll look into the error with the pipeline ASAP, I’m pretty sure I know where it comes from.

Really cool to have the first community model for question answering in French!

2 reactions
fmikaelian commented on Feb 26, 2020

@rcontesti @LysandreJik

I will fine-tune FlaubertForQuestionAnsweringSimple and CamembertForQuestionAnswering on French QA in the next few days and let you know if we can use the pipeline with those.
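
As a sketch of how such a fine-tuned checkpoint could be wired into the question-answering pipeline, here is one way to load the concrete FlauBERT classes directly rather than relying on the pipeline's string lookup. This assumes the checkpoint name from the question; the question/context strings and the printed result are purely illustrative:

```python
from transformers import FlaubertForQuestionAnsweringSimple, FlaubertTokenizer, pipeline

# Assumption: reusing the checkpoint name from the question; any published
# FlauBERT QA checkpoint would be loaded the same way.
model_name = 'fmikaelian/flaubert-base-uncased-squad'

tokenizer = FlaubertTokenizer.from_pretrained(model_name)
model = FlaubertForQuestionAnsweringSimple.from_pretrained(model_name)

nlp = pipeline('question-answering', model=model, tokenizer=tokenizer)

# Illustrative inputs only.
result = nlp(question="Qui a écrit Les Misérables ?",
             context="Les Misérables est un roman de Victor Hugo paru en 1862.")
print(result)  # e.g. {'score': ..., 'start': ..., 'end': ..., 'answer': 'Victor Hugo'}
```

Loading the concrete model class explicitly is a common fallback when the string-based pipeline lookup misbehaves, which may be related to the error mentioned above, though the thread does not confirm the cause.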


