[RFC] Deprecate the `unfreeze_milestones` finetuning strategy?
Motivation
The `unfreeze_milestones` finetuning strategy is confusing:
- How is the layer number interpreted? Does it count e.g. batch norm and non-linearity layers? This is not documented (see the sketch after this list for why the count is ambiguous).
- What is the use case? I am not aware of a situation where this strategy would be recommended (also not documented).
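The ambiguity is easy to reproduce: depending on whether batch norm and activation modules are counted, the same backbone has very different "layer" totals. A minimal sketch, assuming a torchvision `resnet18` as a stand-in backbone (not something mentioned in this issue):

```python
import torch.nn as nn
from torchvision.models import resnet18

# resnet18 is only an illustrative stand-in for a Flash backbone.
backbone = resnet18()

# One reading: every leaf module is a "layer", including BatchNorm2d and ReLU.
leaves = [m for m in backbone.modules() if len(list(m.children())) == 0]

# Another reading: only weighted layers (conv/linear) count.
weighted = [m for m in leaves if isinstance(m, (nn.Conv2d, nn.Linear))]

print(len(leaves), len(weighted))  # the two counts differ substantially
```

Whichever convention `unfreeze_milestones` actually uses, the docs should state it explicitly.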
Alternatives
At the very least, document the answers to the above questions, if there are any. For reference, a hedged sketch of how the strategy is selected today follows.
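This is roughly how the strategy is invoked at the moment, based on my reading of the Flash finetuning docs; the image-classification task, data paths, and milestone values below are placeholders chosen for illustration, not something taken from this issue:

```python
import flash
from flash.image import ImageClassificationData, ImageClassifier

# Placeholder data and model; any Flash task with a backbone would do.
datamodule = ImageClassificationData.from_folders(
    train_folder="data/train",  # hypothetical path
    val_folder="data/val",      # hypothetical path
    batch_size=4,
)
model = ImageClassifier(backbone="resnet18", num_classes=datamodule.num_classes)

trainer = flash.Trainer(max_epochs=10)
# ((first_milestone, second_milestone), num_layers). As I understand it, the
# last `num_layers` backbone layers are unfrozen at epoch 5 and the rest at
# epoch 10; what counts as one "layer" is exactly what this RFC asks to have
# documented.
trainer.finetune(
    model,
    datamodule=datamodule,
    strategy=("unfreeze_milestones", ((5, 10), 5)),
)
```

If the answer is simply "it unfreezes the last N children of the backbone `nn.Module`", writing that one sentence into the docs would already resolve the first question.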

Great. Sounds good. I will try to get some work done on this then.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.