
Fine-tuning to other datasets using the same self-supervised paradigm

See original GitHub issue

Thanks for providing such an awesome repository.

I am trying to transfer DINO's self-supervised learning to other datasets so that a strong k-NN classifier can be learned without heavy human annotation. Currently, I find that the full ViT-S/16 checkpoint does not include the DINO head weights, which are important for fine-tuning in my experiments.

Is it possible to open-source the DINO head? Could you offer some suggestions for fine-tuning?
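
A quick way to check what a downloaded .pth file actually contains is to load it and list its keys. The snippet below is only a minimal sketch: it assumes the file is a standard torch.save() checkpoint, and the entry names it probes for ("student", "teacher", and a "head" substring in parameter names) are guesses about the layout rather than a documented format.

# Minimal sketch: list what a downloaded DINO checkpoint contains and whether any
# head parameters are present. Assumes a standard torch.save() file; the entry names
# "student"/"teacher" and the "head" substring are assumptions, not a documented format.
import torch

ckpt = torch.load("dino_deitsmall16_pretrain_full_checkpoint.pth", map_location="cpu")

if isinstance(ckpt, dict):
    print("top-level entries:", list(ckpt.keys()))
    for name in ("student", "teacher"):
        state = ckpt.get(name)
        if isinstance(state, dict):
            head_params = [k for k in state if "head" in k]
            print(f"{name}: {len(head_params)} head-related parameters")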

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 6 (1 by maintainers)

Top GitHub Comments

18 reactions
mathildecaron31 commented, Jul 15, 2021

Hi @kaleidoscopical
The head is present in the full ckpt links: https://github.com/facebookresearch/dino#pretrained-models

You can start from these weights by downloading the full checkpoint and renaming it to checkpoint.pth inside your experiment directory. For example, for ViT-S/16:

Step 1: create the experiment directory:
mkdir ssl_finetuning

Step 2: download the full pretrained checkpoint into your experiment directory as checkpoint.pth:
wget https://dl.fbaipublicfiles.com/dino/dino_deitsmall16_pretrain/dino_deitsmall16_pretrain_full_checkpoint.pth -O ssl_finetuning/checkpoint.pth

Step 3: launch DINO training; it will start from the checkpoint.pth located in output_dir:
python -m torch.distributed.launch --nproc_per_node=8 main_dino.py --arch vit_small --data_path /path/to/imagenet/train --output_dir ssl_finetuning

Hope that helps.
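
To make the resume step above more concrete, here is a rough sketch of what starting from checkpoint.pth amounts to: restoring the student and teacher state dicts (backbone plus DINO head), and optionally the optimizer state, before self-supervised training continues. This is not the repository's actual resume code; the function, its arguments, and the checkpoint entry names are assumptions made for illustration.

import torch

def resume_from_full_checkpoint(student, teacher, optimizer=None,
                                path="ssl_finetuning/checkpoint.pth"):
    # Hypothetical helper, not the repository's actual resume logic: restore the
    # student/teacher weights (backbone + DINO head) from a full checkpoint before
    # continuing self-supervised training. The entry names "student", "teacher",
    # "optimizer" and "epoch" are assumptions about the checkpoint layout.
    ckpt = torch.load(path, map_location="cpu")
    # strict=False reports missing/unexpected keys instead of raising, which helps
    # spot layout mismatches (e.g. a leftover DistributedDataParallel "module." prefix).
    print("student:", student.load_state_dict(ckpt["student"], strict=False))
    print("teacher:", teacher.load_state_dict(ckpt["teacher"], strict=False))
    if optimizer is not None and "optimizer" in ckpt:
        optimizer.load_state_dict(ckpt["optimizer"])
    # Return the epoch to resume from, if the checkpoint recorded one.
    return ckpt.get("epoch", 0)

In the three steps above this bookkeeping is handled by main_dino.py itself, which picks up checkpoint.pth from --output_dir; the helper only illustrates what gets restored.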

2 reactions
kaleidoscopical commented, Jun 30, 2021

I wish to fine-tune on my data in a self-supervised manner, so the head is necessary.
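
For readers who only need the head weights themselves (for example, to initialise a standalone projection head), one option is to slice them out of the full checkpoint. This too is only a hedged sketch: the "student" entry and the "head." prefix inside its state dict are assumptions about how the checkpoint is organised, so compare against the key names of your own file first.

# Hypothetical sketch: extract only the projection-head tensors from the full checkpoint
# so they can be loaded into a separate head module with load_state_dict(). The "student"
# entry and the "head." prefix are assumptions about the checkpoint layout.
import torch

ckpt = torch.load("dino_deitsmall16_pretrain_full_checkpoint.pth", map_location="cpu")
student_state = ckpt["student"]

head_state = {}
for k, v in student_state.items():
    # Matches e.g. "head.mlp.0.weight" as well as "module.head.mlp.0.weight".
    if "head." in k:
        head_state[k.split("head.", 1)[1]] = v

torch.save(head_state, "dino_head_only.pth")
print(f"saved {len(head_state)} head tensors to dino_head_only.pth")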

