
Lack of clarity over training time taken to run Replay strategy vs Naive strategy.

See original GitHub issue

Hello,

First of all, thank you for your fantastic work. Avalanche is a pleasure to work with.

I am currently working on a basic Replay strategy implementation for a project at university. During my exploration, however, I came across a curious phenomenon when training over experiences in a class-incremental scenario on MNIST (i.e. previously unseen classes appear in each new experience), and I was hoping you could explain what is happening. For clarity, I’ll describe what I observed and attach screenshots for support.

To start off, here’s a bit of context about my project setting:

Experiment Configuration

  • Chosen Dataset: MNIST
  • Chosen Model: a Simple MLP
  • Chosen Scenario: Class Incremental Scenario
  • Number of Experiences: 5 (2 classes learned per experience, i.e. ~10k samples per experience)
  • Number of Epochs per Experience: 1 (for debugging purposes)
  • Chosen Strategies: Naive EWC (as baseline) & Replay (buffer_size = 100); a minimal setup sketch follows this list
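For reference, here is a minimal sketch of how this configuration might be assembled in Avalanche. The module paths and argument names (SplitMNIST, SimpleMLP, EWC, Replay, ewc_lambda, mem_size, train_epochs) are assumptions based on the Avalanche API around this time and may differ between releases:

# Sketch only: module paths and argument names are assumptions and vary
# across Avalanche releases (see the import discussion in the comments below).
import torch
from avalanche.benchmarks.classic import SplitMNIST
from avalanche.models import SimpleMLP
from avalanche.training.supervised import EWC, Replay

# Class-incremental MNIST: 5 experiences, 2 classes each (~10k samples per experience)
CI_MNIST_Scenario = SplitMNIST(n_experiences=5)

model = SimpleMLP(num_classes=10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)
criterion = torch.nn.CrossEntropyLoss()

# Baseline strategy: EWC regularisation, 1 epoch per experience
cl_strategy = EWC(model, optimizer, criterion, ewc_lambda=0.4, train_epochs=1)

# Replay strategy with a buffer of 100 stored samples
# cl_strategy = Replay(model, optimizer, criterion, mem_size=100, train_epochs=1)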

My main training loop looks as follows:

results = []
for experience in CI_MNIST_Scenario.train_stream:
    print("Start of experience: ", experience.current_experience)
    print("Current Classes: ", experience.classes_in_this_experience)

    # train returns a dictionary which contains all the metric values
    res = cl_strategy.train(experience)
    print("Training completed")

print("Computing accuracy on the whole test set")
# eval also returns a dictionary which contains all the metric values
results.append(cl_strategy.eval(CI_MNIST_Scenario.test_stream))
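One simple way to obtain per-experience timings like the ones reported below is to wrap each train() call in a wall-clock timer. A small sketch, reusing the names defined above (with 1 epoch per experience, the elapsed time is also the per-epoch time):

import time

# Sketch: time each call to train(); with train_epochs = 1 this is
# directly the per-epoch training time discussed below.
for experience in CI_MNIST_Scenario.train_stream:
    start = time.perf_counter()
    cl_strategy.train(experience)
    elapsed = time.perf_counter() - start
    print(f"Experience {experience.current_experience}: {elapsed:.1f}s")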

Observations when implementing a Naive EWC Strategy

When running the basic training loop above, I find that the training time for each experience is consistent. With a roughly equal number of samples per experience, an epoch takes about 3s to train. No problem here; this is the behaviour I expect.

[screenshot: per-experience training times with the Naive EWC strategy]

Observations when running Replay Strategy (buffer size of 100)

It is when running the Replay strategy using the same training loop that I observe unexpected behaviour. Consider the following screenshot that displays the results of a few iterations of my training loop.

[screenshot: per-experience training times with the Replay strategy]

Here’s the thing I don’t understand: why does the training time keep growing so sharply with each new experience? As far as I am aware, each experience still holds roughly the same number of samples to be learned (~10k). The only change to the training data in this setting is the introduction of a buffer of size 100, so in my view the increase in training time compared to Naive EWC should be marginal.
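That assumption can be double-checked directly from the train stream. A small sketch, using the experience attributes already used in the training loop above plus experience.dataset, which I assume exposes the experience’s training set:

# Sketch: confirm that every experience carries roughly the same number of samples.
for experience in CI_MNIST_Scenario.train_stream:
    print(f"Experience {experience.current_experience}: "
          f"classes {experience.classes_in_this_experience}, "
          f"{len(experience.dataset)} training samples")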

Here’s a breakdown of training times using this strategy:

  • Exp_0: 3s per epoch
  • Exp_1: 8s per epoch
  • Exp_2: 18s per epoch
  • Exp_3: 23s per epoch
  • Exp_4: 45s per epoch

Any pointers or explanations would be much appreciated! I am relatively new to continual learning, so apologies if I’m missing something obvious.

Again, thank you very much for your work.

Best,

Kevin

Issue Analytics

  • State: closed
  • Created: a year ago
  • Comments: 6

Top GitHub Comments

1 reaction
HamedHemati commented, May 4, 2022

Hi @PabloMese. In the new code structure, supervised strategies can be accessed via avalanche.training.supervised.

Please note that since the code is constantly changing, some existing examples and code may not work with the master branch. We are going to have a new release with new examples and API documentation soon! 😃

0 reactions
PabloMese commented, May 4, 2022

Hi @KevinG1002, @HamedHemati, I’m also installing the latest version with the line you shared. It installs successfully, but when running the code the following error is raised. I can’t find what the strategies module is now called.

ModuleNotFoundError: No module named 'avalanche.training.strategies'
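In other words, the module moved: older releases exposed strategies under avalanche.training.strategies, while newer ones use avalanche.training.supervised, as noted above. A small compatibility sketch, assuming the strategy class names themselves are unchanged:

# Sketch: try the new module path first, fall back to the old one on older releases.
try:
    from avalanche.training.supervised import Replay  # newer Avalanche releases
except ModuleNotFoundError:
    from avalanche.training.strategies import Replay  # older releases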
