LM fine tuning on top of a custom model
Currently the finetune_on_pregenerated.py script only allows fine-tuning on top of one of the five pretrained BERT models, and I don't understand why this restriction exists. I am trying to fine-tune an LM on top of a custom BERT model (MT-DNN). Of course I can just remove the choices, but I am wondering whether there is some rationale behind it, as none of the other example scripts contains this restriction.
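For illustration, here is a minimal sketch of the kind of change being asked about, assuming the restriction comes from an argparse `choices` list on the model flag. The exact option name and choices list in finetune_on_pregenerated.py may differ slightly, and `./mt-dnn-checkpoint` is a hypothetical path:

```python
import argparse

parser = argparse.ArgumentParser()

# The script currently restricts the flag with something like
#   choices=["bert-base-uncased", "bert-large-uncased", "bert-base-cased",
#            "bert-base-multilingual", "bert-base-chinese"]
# Dropping `choices` lets the same flag also accept a local checkpoint
# directory, since BertForPreTraining.from_pretrained() resolves both
# shortcut names and filesystem paths.
parser.add_argument("--bert_model", type=str, required=True,
                    help="Shortcut name or path of a pre-trained BERT model")

# Hypothetical local checkpoint directory, e.g. an MT-DNN export.
args = parser.parse_args(["--bert_model", "./mt-dnn-checkpoint"])
print(args.bert_model)
```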
Issue Analytics
- Created: 4 years ago
- Comments: 7 (6 by maintainers)

Is this change applicable and working with run_lm_finetuning.py in 2.x?
Indeed, I’ll take care of it. I’m on the repo now anyway.
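For context, in transformers 2.x the equivalent script takes a `--model_name_or_path` argument rather than `--bert_model`, and `from_pretrained()` accepts a local directory as well as a hub shortcut name, which is what makes a custom checkpoint work there. A minimal sketch, again using the hypothetical `./mt-dnn-checkpoint` directory (expected to contain the config, vocab, and weight files):

```python
# transformers 2.x: loading a local checkpoint the same way a shortcut
# name would be loaded. run_lm_finetuning.py's --model_name_or_path
# argument relies on exactly this behavior.
from transformers import AutoModelWithLMHead, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("./mt-dnn-checkpoint")
model = AutoModelWithLMHead.from_pretrained("./mt-dnn-checkpoint")
```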