
Issue with class-incremental SI and LwF

We are currently trying to use Avalanche in our research, as it looks like an amazing library providing a lot of ready-to-use tools. However, we encountered some issues that stopped us from moving further.

Our goal is to work with class-incremental scenarios. We experimented on MNIST, built in two different ways: using the SplitMNIST benchmark, and building a benchmark with the nc_benchmark method.

    from torchvision.datasets import MNIST
    from avalanche.benchmarks.classic import SplitMNIST
    from avalanche.benchmarks.generators import nc_benchmark

    scenario_toggle = 'nc_MNIST'        # 'nc_MNIST' (nc_benchmark) or 'splitMNIST' (SplitMNIST)
    task_labels = False
    if scenario_toggle == 'splitMNIST':
        scenario = SplitMNIST(n_experiences=5, return_task_id=task_labels,
                              fixed_class_order=list(range(10)))
    elif scenario_toggle == 'nc_MNIST':
        train = MNIST(root='data', download=True, train=True, transform=train_transform)
        test = MNIST(root='data', download=True, train=False, transform=test_transform)
        scenario = nc_benchmark(
            train, test, n_experiences=5, shuffle=False, seed=1234,
            task_labels=task_labels, fixed_class_order=list(range(10))
        )

We tried two strategies, LwF and SI, with both values of scenario_toggle: splitMNIST and nc_MNIST. However, in both cases the evaluation results suggest that only the last experience is remembered and recognized. All other experiences have an accuracy of 0.00, which is unexpected and suggests that something is wrong.
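For background, the objective LwF optimizes is a cross-entropy term on the current data plus a distillation term that keeps the model's outputs close to those of a frozen copy saved before the current experience. A minimal plain-Python sketch of that loss (a simplified illustration with hypothetical helper names, not Avalanche's actual implementation):

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of raw logits."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def lwf_loss(student_logits, target, teacher_logits, alpha=1.0, temperature=2.0):
    """LwF-style loss for one sample: hard cross-entropy on the current
    label plus a distillation term pulling the student's softened outputs
    toward those of the frozen pre-experience teacher."""
    # Standard cross-entropy with the hard label of the current task.
    probs = softmax(student_logits)
    ce = -math.log(probs[target])
    # Knowledge-distillation term (soft cross-entropy at temperature T).
    t_soft = softmax(teacher_logits, temperature)
    s_soft = softmax(student_logits, temperature)
    kd = -sum(t * math.log(s) for t, s in zip(t_soft, s_soft))
    return ce + alpha * (temperature ** 2) * kd
```

Note that nothing in this loss constrains a single-head classifier from drifting toward the newest classes, which is consistent with the forgetting observed below.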

Sample results: [screenshot omitted; every experience except the last shows an accuracy of 0.00]

Similar behavior can be observed for all combinations (splitMNIST and nc_MNIST combined with LwF and EWC) when task_labels = False. When we change task_labels to True, the results start to make sense, with accuracies between 0.6 and 1 for all previously learned experiences.

We are not sure whether the problem is in our approach, our code, or maybe if there is some bug impacting our results. Therefore, we have a few questions:

  1. Is our approach valid? Is setting task_labels to False equivalent to creating a class-incremental benchmark, and does task_labels = True produce a task-incremental scenario?
  2. Is there any reason why the results look like this? Is it an issue with how we use the benchmarks?

We would appreciate any suggestions, as we have already spent some time with Avalanche and would love to leverage all the tools it provides.

I am providing the minimal test project we prepared.

Issue Analytics

  • State: closed
  • Created a year ago
  • Comments: 16

Top GitHub Comments

AndreaCossu commented, Jul 4, 2022
  1. In class-incremental scenarios you do not have task labels available, so you either work with a single-headed model, or you work with a multi-head model and actively infer the task labels yourself, since the environment will not provide them.
  2. The target range depends on the head. If you use a single head, you need targets in the range 0 to n_classes-1 (0-9 for Split MNIST). If you use a multi-head model, you have one linear classifier per head, so you need targets in the range 0 to n_units_per_head-1 (0-1 for Split MNIST with 5 heads and 2 units per head).
  3. To work class-incrementally, you can just set task_labels=False in both SplitMNIST and nc_benchmark. Task labels will always be 0 for each experience and targets will be in the range 0-9.
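The target ranges described above can be illustrated without Avalanche. A minimal sketch (the helper remap_target is hypothetical, purely for illustration) of how a global Split MNIST label maps to a (task id, target) pair under each setup:

```python
def remap_target(global_label, classes_per_head, multihead):
    """Map a global class label (0..n_classes-1) to the (task_id, target)
    pair expected by the classifier(s).

    Single-head (class-incremental, task_labels=False): one classifier over
    all classes, so the target stays global and the task id is always 0.
    Multi-head (task-incremental): one classifier per task, so the target is
    local to the head (0..classes_per_head-1) and the task id selects the head.
    """
    task_id = global_label // classes_per_head
    if multihead:
        return task_id, global_label % classes_per_head
    return 0, global_label

# Split MNIST with 5 experiences, 2 classes each:
assert remap_target(7, 2, multihead=False) == (0, 7)  # single head: targets 0-9
assert remap_target(7, 2, multihead=True) == (3, 1)   # head 3, local target 0-1
```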

Hope this helps 😄

HamedHemati commented, Jun 27, 2022

Following up on @AntonioCarta’s comment: here is a paper showing that SI and LwF almost entirely fail in class-incremental scenarios on Split MNIST:

[Screenshot of the paper's results table omitted]

By changing your architecture to the one used in the CL baselines repository, you may get a small increase in the average accuracy (better than complete forgetting) for LwF.

