How to finetune the 384 models with window size 12?
Great work! I have some questions about fine-tuning on ImageNet-1K. In the paper, you state that the 384^2 input models are obtained by fine-tuning, as also pointed out in #24:
For other resolutions such as 384^2, we fine-tune the models trained at 224^2 resolution, instead of training from scratch, to reduce GPU consumption.
I see that you use window_size 12 for the 384 models, which makes fine-tuning confusing because of the window-size-dependent parameters relative_position_bias_table and attn_mask. Do you use interpolation to handle this? If so, which interpolation method do you use? Bicubic?
Thanks for your reply in advance!
Issue Analytics
- State:
- Created: 2 years ago
- Reactions: 1
- Comments: 5 (3 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
bicubic also
Instructions and configs for fine-tuning on higher resolution can be found here: https://github.com/microsoft/Swin-Transformer/blob/main/get_started.md#fine-tuning-on-higher-resolution
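For readers wondering what "bicubic" interpolation of the bias table looks like in practice, here is a minimal sketch (not the repository's exact code; variable names and the window sizes 7 and 12 are illustrative). The idea is to view the pretrained relative_position_bias_table, of shape ((2W-1)^2, num_heads), as a (2W-1) x (2W-1) image per head and resize it with torch.nn.functional.interpolate in bicubic mode:

```python
import torch
import torch.nn.functional as F

num_heads = 4
old_win, new_win = 7, 12          # 224-res window -> 384-res window
old_len = (2 * old_win - 1) ** 2  # 13 * 13 = 169 relative offsets
new_len = (2 * new_win - 1) ** 2  # 23 * 23 = 529 relative offsets

# Pretrained table: one learned bias per relative offset, per head.
table = torch.randn(old_len, num_heads)

# Reshape to an image-like tensor: (1, num_heads, 2W-1, 2W-1).
t = table.permute(1, 0).reshape(1, num_heads, 2 * old_win - 1, 2 * old_win - 1)

# Bicubically resize to the new window's relative-offset grid.
t = F.interpolate(t, size=(2 * new_win - 1, 2 * new_win - 1),
                  mode='bicubic', align_corners=False)

# Back to table layout: ((2W'-1)^2, num_heads).
new_table = t.reshape(num_heads, new_len).permute(1, 0)
print(new_table.shape)  # torch.Size([529, 4])
```

The attn_mask, by contrast, is not interpolated: it is a buffer derived purely from the window partitioning, so it can simply be recomputed for the new window size when the checkpoint is loaded.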