
how to use local pretrained models with downstream tasks

See original GitHub issue

Hi Researchers,

Thanks for building this convenient framework.

After pretraining my upstream from scratch, I am trying to run a downstream task. downstream/README.md suggests: python run_downstream.py -m train -u baseline -d example -n NameOfExp

But I can’t find how to train or fine-tune the downstream linear layers on top of my own checkpoint, and eventually verify with test data. I’ve tried: python run_downstream.py -m train -u mockingjay -d phone_linear -n d1 -e result/pretrain/p1

It failed, most likely because I didn’t load the correct config file: it complains that I didn’t pass the “downstream_expert” parameter.

Can you kindly tell me what I missed?

Thank You,
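The error is consistent with how -e is defined: in s3prl, -e resumes a past downstream experiment from its checkpoint (reloading that experiment’s config), so pointing it at a pretraining result loads a pretraining config, which has no downstream_expert section. A locally pretrained upstream checkpoint is instead passed with -k. This is a sketch assuming the flag semantics documented in the current s3prl README; the checkpoint filename is a placeholder:

    # What the failing command does: -e tries to resume a past downstream
    # experiment, so it reloads the config saved under result/pretrain/p1,
    # which is a pretraining config with no downstream_expert section.
    python run_downstream.py -m train -u mockingjay -d phone_linear -n d1 -e result/pretrain/p1

    # What was likely intended: initialize the upstream from a local
    # checkpoint with -k (the filename stands in for whatever the
    # pretraining run actually saved).
    python run_downstream.py -m train -u mockingjay -d phone_linear -n d1 \
        -k result/pretrain/p1/states-200000.ckpt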

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 5 (3 by maintainers)

Top GitHub Comments

1 reaction
andi611 commented on Mar 8, 2021

> It works, thank you!
>
> Will this information be added to the README in the future? It would help if example commands were given, so users won’t need to worry about whether they are training in an unexpected setup.
>
> Or just listing them in a Makefile/shell script would do, too.
>
> Again, thank you guys for this work.

Hi,

Yes, I’ve added this information to the README. The fine-tuning instructions are in this commit: https://github.com/s3prl/s3prl/commit/e0f93626200db3c5f71385b99d7ea150b0b0afca The loading method is in this commit: https://github.com/s3prl/s3prl/commit/6d2327bc6c6ccdd3e21c9c2e7b932b2952667a8e

Sorry for the lack of documentation; we will complete our documentation in the near future.
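Putting the two commits together, the end-to-end workflow looks roughly like the following. This is a sketch based on the current s3prl README rather than on the commits themselves: -k loads a local upstream checkpoint, -f makes the upstream trainable (fine-tuning), and -m evaluate with -t test runs testing; the checkpoint filenames are placeholders:

    # Train the downstream linear probe on top of the locally pretrained
    # upstream (without -k, -u mockingjay would use the released weights).
    python run_downstream.py -m train -u mockingjay \
        -k result/pretrain/p1/states-200000.ckpt \
        -d phone_linear -n d1

    # Same, but also fine-tune the upstream instead of keeping it frozen.
    python run_downstream.py -m train -u mockingjay \
        -k result/pretrain/p1/states-200000.ckpt \
        -d phone_linear -n d1_finetune -f

    # Verify on the test split using the best checkpoint picked on dev.
    python run_downstream.py -m evaluate -t test \
        -e result/downstream/d1/dev-best.ckpt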

0 reactions
favrei commented on Mar 8, 2021

It works, thank you!

Will this information be added to the README in the future? It would help if example commands were given, so users won’t need to worry about whether they are training in an unexpected setup.

Or just listing them in a Makefile/shell script would do, too.

Again, thank you guys for this work.


Top Results From Across the Web

Models - Hugging Face
A string, the model id of a pretrained model hosted inside a model repo on ... It is up to you to train...

NLP Deep Learning Training on Downstream tasks using ...
Download and Import the Libraries · Download the Data · Define the Pre-Trained Model · Define the Pre-Process function or Dataset Class ·...

Hugging Face Pre-trained Models: Find the Best One for Your ...
There are two ways to start working with the Hugging Face NLP library: either using pipeline or any available pre-trained model by repurposing...

How to load the pre-trained BERT model from local/colab ...
You are using the Transformers library from HuggingFace. ... You can import the pre-trained bert model by using the below lines of code:...

Downstream Task Performance of BERT Models Pre-Trained ...
model. The impact of the de-identification techniques is assessed by training and evaluating the models using six clinical downstream tasks.
