
Questions about self-supervised learning on cifar10

See original GitHub issue

Thanks for sharing the code! This work is really interesting to me. My questions are as follows:

I’m trying to reproduce the results in Table 2. Specifically, I trained the models with and without self-supervised pre-training (SSP). However, the baselines (without SSP) consistently outperform those with SSP under different training rules (None, Resample, and Reweight). The best precisions are presented below. I ran each experimental setting twice to check that the results are stable, so there are two numbers per cell.

(image: table of best precisions per setting)

For your reference, I used the following commands:

  • Train Rotation
python pretrain_rot.py --dataset cifar10  --imb_factor 0.01 --arch resnet32
  • Train baseline
python train.py --dataset cifar10 --imb_factor 0.01 --arch resnet32 --train_rule None 
  • Train baseline + SSP
python train.py --dataset cifar10 --imb_factor 0.01 --arch resnet32 --train_rule None --pretrained_model xxx 

Issue Analytics

  • State: closed
  • Created 3 years ago
  • Comments: 7 (2 by maintainers)

Top GitHub Comments

1 reaction
YyzHarry commented, Oct 21, 2020

the batch size is set to 256 while the default batch size is 128 in DRW source code

That’s a good catch. I just checked the setting from when I ran the experiments, and the batch size I used is 128 on CIFAR. There might have been an inconsistency in the default value introduced when I cleaned up the code (I’ve already updated it).

Regarding your questions, I quickly ran three experiments. For the baseline on CIFAR-10-LT with None, I got 71.03%. With None + SSP, I got 73.99%. For DRW + SSP, I got 77.45%, which is even slightly higher than the number reported in our paper. I’m using the Rotation checkpoint provided in this repo, which has 83.07% test accuracy, similar to yours. I also checked your log, which looks fine to me, so currently I’m not sure what causes the difference. I would suggest trying the Rotation SSP checkpoint I provided, to see if there’s any difference.

Otherwise, you may want to check whether the pre-trained weights are loaded correctly, as well as the exact training setup: the PyTorch version (1.4 for this repo), the number of GPUs used (only 1 for the CIFAR experiments), etc.
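One cheap way to check the first point, that the pre-trained weights were actually loaded, is to fingerprint each parameter before and after loading and confirm that at least some values changed. A minimal sketch of the idea, using plain nested lists to stand in for tensors (the helper names are hypothetical; with PyTorch you would fingerprint `model.state_dict()` via `p.sum().item()` per parameter):

```python
def _flat(x):
    """Yield scalar values from an arbitrarily nested list."""
    if isinstance(x, list):
        for item in x:
            yield from _flat(item)
    else:
        yield x

def fingerprint(state_dict):
    """Cheap checksum: parameter name -> sum of its values."""
    return {name: round(sum(_flat(t)), 6) for name, t in state_dict.items()}

def changed_params(before, after):
    """Names of parameters whose values differ between the two dicts."""
    fb, fa = fingerprint(before), fingerprint(after)
    return sorted(n for n in fb if fb[n] != fa.get(n))

# Random init vs. weights after loading a checkpoint (toy values):
random_init = {"conv1.weight": [[0.1, -0.2], [0.3, 0.0]], "fc.bias": [0.0, 0.0]}
after_load  = {"conv1.weight": [[0.5, -0.1], [0.2, 0.4]], "fc.bias": [0.0, 0.0]}
print(changed_params(random_init, after_load))  # -> ['conv1.weight']
# An empty list here would mean the checkpoint was silently ignored.
```

If every fingerprint matches between the fresh model and the supposedly loaded one, the checkpoint was dropped somewhere (e.g. mismatched key names causing `load_state_dict` to skip parameters).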

0 reactions
87nohigher commented, Oct 30, 2021

I ran into the same problem. Did you manage to solve it, and if so, how? Thanks.

Thanks for the help! Let me try these ideas.
