GDumb and data augmentation

See original GitHub issue

Hi! I’m testing some strategies to reproduce the accuracy values reported in the original papers. In particular, GDumb with mem_size=500 reaches 90% accuracy on the Split MNIST benchmark. For the parameters I follow the original GDumb implementation: https://github.com/drimpossible/GDumb

πŸ› Describe the bug There is a drop (~30%) in accuracy using data augmentation. This problem doesn’t appear using other strategies (with the same settings: neural network, parameters, regularization etc.) .

🐜 To Reproduce

from torchvision.transforms import Compose, RandomCrop, ToTensor, Normalize
from avalanche.benchmarks.classic import SplitMNIST

# Training transform: uncommenting RandomCrop enables the data augmentation
# that triggers the accuracy drop with GDumb.
train_transform = Compose([
        # RandomCrop(28, padding=4),
        ToTensor(),
        Normalize((0.1307,), (0.3081,))
    ])
eval_transform = Compose([
        ToTensor(),
        Normalize((0.1307,), (0.3081,))
    ])

scenario = SplitMNIST(
        n_experiences=5, fixed_class_order=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
        seed=1234, train_transform=train_transform, eval_transform=eval_transform)

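The issue only posts the benchmark setup. A minimal sketch of the missing strategy part might look like the following; the model, optimizer, and hyperparameters are assumptions rather than the poster's actual settings (only mem_size=500 is mentioned above), and depending on the Avalanche version GDumb is imported from avalanche.training.supervised or, in older releases, avalanche.training.strategies:

import torch
from torch.nn import CrossEntropyLoss
from torch.optim import SGD
from avalanche.models import SimpleMLP
from avalanche.training.supervised import GDumb  # older releases: avalanche.training.strategies

# Assumed model and optimizer; the original post does not include them.
model = SimpleMLP(num_classes=10)
optimizer = SGD(model.parameters(), lr=0.05, momentum=0.9)

strategy = GDumb(
    model, optimizer, CrossEntropyLoss(),
    mem_size=500,                      # buffer size mentioned in the issue
    train_mb_size=16, train_epochs=1, eval_mb_size=128,
    device="cuda" if torch.cuda.is_available() else "cpu",
)

# Standard Avalanche loop: train on each experience, then evaluate on the test stream.
for experience in scenario.train_stream:
    strategy.train(experience)
    strategy.eval(scenario.test_stream)
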
🐝 Expected behavior

Data augmentation should help achieve better results regardless of the strategy.

Issue Analytics

  • State: closed
  • Created 2 years ago
  • Comments: 12 (5 by maintainers)

Top GitHub Comments

1 reaction
AntonioCarta commented, Sep 27, 2021

@gab709 I did some experiments. I don’t think there is any bug here. The problem is that you are using an MLP. If you use a CNN, everything works as expected.

Here are the results using a LeNet-like CNN:

┏━━━━━━━━━━┳━━━━━━━━━━━━━━┓
┃ Exp/Task ┃ Top1_Acc_Exp ┃
┡━━━━━━━━━━╇━━━━━━━━━━━━━━┩
│ E0T0     │ 0.8178       │
│ E1T0     │ 0.9759       │
│ E2T0     │ 0.7593       │
│ E3T0     │ 0.9310       │
│ E4T0     │ 0.6485       │
└──────────┴──────────────┘
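
The exact architecture is not included in the thread; as an assumed stand-in (class name LeNetLike is hypothetical), a minimal LeNet-style CNN for the 28×28 single-channel MNIST inputs could look like this:

import torch.nn as nn

class LeNetLike(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5, padding=2),   # 28x28 -> 28x28
            nn.ReLU(),
            nn.MaxPool2d(2),                             # -> 14x14
            nn.Conv2d(6, 16, kernel_size=5),             # -> 10x10
            nn.ReLU(),
            nn.MaxPool2d(2),                             # -> 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120),
            nn.ReLU(),
            nn.Linear(120, 84),
            nn.ReLU(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

Unlike an MLP on flattened pixels, the convolution/pooling stack is largely insensitive to the small shifts introduced by RandomCrop with padding, which is one plausible reason the augmentation only hurts the MLP.
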
1 reaction
AntonioCarta commented, Jul 1, 2021

@gab709 If you want to look into this, I can support you. Just ping me on Slack if you need help.

Otherwise, I will need the complete script to reproduce the error.
