
Test a fine-tuned BERT-QA model


I have fine-tuned a BERT-QA model on SQuAD, and it produced a pytorch_model.bin file. Now I want to load this fine-tuned model and evaluate it on SQuAD. How can I do that? I am using the run_squad.py script.
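For context, a prediction-only run of the script typically looks something like the sketch below. This is a sketch, not a verified command line: the flag names follow the pytorch-pretrained-bert run_squad.py example of that era, the paths are placeholders, and, as the comments below explain, where the weights come from in this mode is exactly the crux of the issue.

python run_squad.py \
  --bert_model bert-base-uncased \
  --do_predict \
  --do_lower_case \
  --predict_file dev-v1.1.json \
  --output_dir /path/to/squad_output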

Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Comments: 5 (1 by maintainers)

Top GitHub Comments

1 reaction
wasiahmad commented, Apr 18, 2019

I noticed the following snippet in the code (which I edited to solve my problem):

# Excerpt from run_squad.py; it relies on the script's existing imports
# (os, torch, WEIGHTS_NAME, CONFIG_NAME, BertForQuestionAnswering, BertTokenizer).
if args.do_train and (args.local_rank == -1 or torch.distributed.get_rank() == 0):
    # Save a trained model, configuration and tokenizer
    model_to_save = model.module if hasattr(model, 'module') else model  # Only save the model itself

    # If we save using the predefined names, we can load using `from_pretrained`
    output_model_file = os.path.join(args.output_dir, WEIGHTS_NAME)
    output_config_file = os.path.join(args.output_dir, CONFIG_NAME)

    torch.save(model_to_save.state_dict(), output_model_file)
    model_to_save.config.to_json_file(output_config_file)
    tokenizer.save_vocabulary(args.output_dir)

    # Load a trained model and vocabulary that you have fine-tuned
    model = BertForQuestionAnswering.from_pretrained(args.output_dir)
    tokenizer = BertTokenizer.from_pretrained(args.output_dir, do_lower_case=args.do_lower_case)
else:
    model = BertForQuestionAnswering.from_pretrained(args.bert_model)

So, if we want to load the fine-tuned model for prediction only, we need to load it from args.output_dir. But the current code loads from args.bert_model when run_squad.py is used for prediction only.
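Put differently, a prediction-only branch along the following lines would pick up the fine-tuned weights. This is only a sketch of the edit described above: the names come from the snippet itself, and the surrounding structure of run_squad.py may differ between versions.

if args.do_predict and not args.do_train:
    # Load the fine-tuned model and vocabulary saved to args.output_dir during
    # training, rather than the pre-trained checkpoint named by args.bert_model
    model = BertForQuestionAnswering.from_pretrained(args.output_dir)
    tokenizer = BertTokenizer.from_pretrained(args.output_dir, do_lower_case=args.do_lower_case)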

0 reactions
smartcatdog commented, Sep 28, 2019

@Swathygsb https://github.com/kamalkraj/BERT-SQuAD (inference on a BERT-SQuAD model)

Thanks for sharing. Is there also an inference example for a BERT-SQuAD model in TensorFlow? Thanks!
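For reference, here is a minimal standalone inference sketch in the spirit of the repository linked above. It assumes the pytorch-pretrained-bert API of that era and a model directory produced by the training snippet earlier in the thread; the directory path and the question/context pair are placeholders.

import torch
from pytorch_pretrained_bert import BertForQuestionAnswering, BertTokenizer

model_dir = "/path/to/output_dir"  # holds pytorch_model.bin plus the saved config and vocab
tokenizer = BertTokenizer.from_pretrained(model_dir, do_lower_case=True)
model = BertForQuestionAnswering.from_pretrained(model_dir)
model.eval()

question = "What was the model fine-tuned on?"
context = "The model was fine-tuned on the SQuAD dataset."

# Build the [CLS] question [SEP] context [SEP] input that BERT-QA expects
q_tokens = tokenizer.tokenize(question)
c_tokens = tokenizer.tokenize(context)
tokens = ["[CLS]"] + q_tokens + ["[SEP]"] + c_tokens + ["[SEP]"]
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])
segment_ids = torch.tensor([[0] * (len(q_tokens) + 2) + [1] * (len(c_tokens) + 1)])

with torch.no_grad():
    start_logits, end_logits = model(input_ids, token_type_ids=segment_ids)

# Take the highest-scoring start and end positions and print that token span
start, end = int(start_logits.argmax()), int(end_logits.argmax())
print(" ".join(tokens[start:end + 1]))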


Top Results From Across the Web

Question Answering with a Fine-Tuned BERT - Chris McCormick
In the example code below, we'll be downloading a model that's already been fine-tuned for question answering, and try it out on our...

Question Answering with a fine-tuned BERT | Chetna | Medium
Not bad at all. In fact, our BERT model gave a more detailed response. Here is a small function to test out how...

Question answering - Hugging Face Course
We will fine-tune a BERT model on the SQuAD dataset, which consists of questions posed by ... You can find it and double-check...

Bert Fine Tune for Question Answering | by mustafac - Medium
Now I will try to show how we can fine tune Bert for QA. I found an open health set ... Just check...

Step #1: Train the Bert QA Model — chatbot 1.0 documentation
Fine-Tune the BERT QA Model ... structure inside the triton directory, which you can check inside the same Jupyter notebook terminal.
