
GDumb memory update

See original GitHub issue

GDumb does not remove samples when the number of classes increases.
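
For context, the behaviour the issue expects is the class-balanced greedy sampler described in the GDumb paper: the memory has a fixed budget, and whenever a new class appears the per-class quota shrinks, so samples of the classes already stored have to be evicted. The sketch below only illustrates that update rule; the names and structure are illustrative assumptions, not Avalanche's actual GDumbPlugin code. The point is that adding a class shrinks the quota and triggers evictions, which is the step the issue reports as missing.

import random
from collections import defaultdict

class GreedyBalancedBuffer:
    """Illustrative GDumb-style memory: fixed budget, balanced across classes."""

    def __init__(self, budget):
        self.budget = budget
        self.buffer = defaultdict(list)  # class label -> stored samples

    def update(self, sample, label):
        # Per-class quota, counting the incoming class if it is new.
        n_classes = len(self.buffer) + (label not in self.buffer)
        quota = self.budget // max(1, n_classes)
        # Greedily accept the sample if its class is new or under quota.
        if label not in self.buffer or len(self.buffer[label]) < quota:
            self.buffer[label].append(sample)
        # Evict random samples from the largest classes until within budget.
        while sum(len(v) for v in self.buffer.values()) > self.budget:
            biggest = max(self.buffer, key=lambda c: len(self.buffer[c]))
            self.buffer[biggest].pop(random.randrange(len(self.buffer[biggest])))

buf = GreedyBalancedBuffer(budget=6)
for i in range(6):      # two classes fill the whole budget
    buf.update(f"old_{i}", i % 2)
for i in range(2):      # a third class appears: old samples must be evicted
    buf.update(f"new_{i}", 2)
print({c: len(v) for c, v in buf.buffer.items()})  # -> {0: 2, 1: 2, 2: 2}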

Issue Analytics

  • State: closed
  • Created 2 years ago
  • Comments: 6

Top GitHub Comments

1 reaction
AndreaCossu commented, Apr 2, 2021

Ok, the last error has nothing to do with GDumb; it appears to be a bug in SplitFMnist. I will create a new issue to track it and will close this one as soon as GDumb is ready.

0 reactions
AndreaCossu commented, Apr 2, 2021

This is the error raised with GDumb (after the callback name modification) when using scenario = SplitFMnist(5). I noticed that this error is not raised with SplitMNIST, though.

Traceback (most recent call last):
  File "/home/cossu/avalanche/examples/ewc_mnist.py", line 92, in <module>
    main(args)
  File "/home/cossu/avalanche/examples/ewc_mnist.py", line 63, in main
    strategy.train(experience)
  File "/home/cossu/avalanche/avalanche/training/strategies/base_strategy.py", line 249, in train
    self.train_exp(exp, eval_streams, **kwargs)
  File "/home/cossu/avalanche/avalanche/training/strategies/base_strategy.py", line 272, in train_exp
    self.after_train_dataset_adaptation(**kwargs)
  File "/home/cossu/avalanche/avalanche/training/strategies/base_strategy.py", line 400, in after_train_dataset_adaptation
    p.after_train_dataset_adaptation(self, **kwargs)
  File "/home/cossu/avalanche/avalanche/training/plugins/gdumb.py", line 41, in after_train_dataset_adaptation
    for i, (pattern, target_value, _) in enumerate(dataset):
  File "/home/cossu/avalanche/avalanche/benchmarks/utils/avalanche_dataset.py", line 306, in __getitem__
    return TupleTLabel(manage_advanced_indexing(
  File "/home/cossu/avalanche/avalanche/benchmarks/utils/dataset_utils.py", line 320, in manage_advanced_indexing
    single_element = single_element_getter(int(single_idx))
  File "/home/cossu/avalanche/avalanche/benchmarks/utils/avalanche_dataset.py", line 1035, in _get_single_item
    return self._process_pattern(self._dataset[idx], idx)
  File "/home/cossu/miniconda3/envs/avalanche-env/lib/python3.9/site-packages/torch/utils/data/dataset.py", line 272, in __getitem__
    return self.dataset[self.indices[idx]]
  File "/home/cossu/avalanche/avalanche/benchmarks/utils/avalanche_dataset.py", line 306, in __getitem__
    return TupleTLabel(manage_advanced_indexing(
  File "/home/cossu/avalanche/avalanche/benchmarks/utils/dataset_utils.py", line 320, in manage_advanced_indexing
    single_element = single_element_getter(int(single_idx))
  File "/home/cossu/avalanche/avalanche/benchmarks/utils/avalanche_dataset.py", line 659, in _get_single_item
    return self._process_pattern(self._dataset[idx], idx)
  File "/home/cossu/avalanche/avalanche/benchmarks/utils/avalanche_dataset.py", line 306, in __getitem__
    return TupleTLabel(manage_advanced_indexing(
  File "/home/cossu/avalanche/avalanche/benchmarks/utils/dataset_utils.py", line 320, in manage_advanced_indexing
    single_element = single_element_getter(int(single_idx))
  File "/home/cossu/avalanche/avalanche/benchmarks/utils/avalanche_dataset.py", line 1035, in _get_single_item
    return self._process_pattern(self._dataset[idx], idx)
  File "/home/cossu/avalanche/avalanche/benchmarks/utils/dataset_utils.py", line 207, in __getitem__
    result = super().__getitem__(idx)
  File "/home/cossu/avalanche/avalanche/benchmarks/utils/dataset_utils.py", line 184, in __getitem__
    return self.dataset[self.indices[idx]]
  File "/home/cossu/avalanche/avalanche/benchmarks/utils/avalanche_dataset.py", line 306, in __getitem__
    return TupleTLabel(manage_advanced_indexing(
  File "/home/cossu/avalanche/avalanche/benchmarks/utils/dataset_utils.py", line 320, in manage_advanced_indexing
    single_element = single_element_getter(int(single_idx))
  File "/home/cossu/avalanche/avalanche/benchmarks/utils/avalanche_dataset.py", line 659, in _get_single_item
    return self._process_pattern(self._dataset[idx], idx)
  File "/home/cossu/avalanche/avalanche/benchmarks/utils/avalanche_dataset.py", line 306, in __getitem__
    return TupleTLabel(manage_advanced_indexing(
  File "/home/cossu/avalanche/avalanche/benchmarks/utils/dataset_utils.py", line 320, in manage_advanced_indexing
    single_element = single_element_getter(int(single_idx))
  File "/home/cossu/avalanche/avalanche/benchmarks/utils/avalanche_dataset.py", line 659, in _get_single_item
    return self._process_pattern(self._dataset[idx], idx)
  File "/home/cossu/avalanche/avalanche/benchmarks/utils/avalanche_dataset.py", line 669, in _process_pattern
    pattern, label = self._apply_transforms(pattern, label)
  File "/home/cossu/avalanche/avalanche/benchmarks/utils/avalanche_dataset.py", line 680, in _apply_transforms
    pattern = self.transform(pattern)
  File "/home/cossu/miniconda3/envs/avalanche-env/lib/python3.9/site-packages/torchvision/transforms/transforms.py", line 67, in __call__
    img = t(img)
  File "/home/cossu/miniconda3/envs/avalanche-env/lib/python3.9/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/cossu/miniconda3/envs/avalanche-env/lib/python3.9/site-packages/torchvision/transforms/transforms.py", line 226, in forward
    return F.normalize(tensor, self.mean, self.std, self.inplace)
  File "/home/cossu/miniconda3/envs/avalanche-env/lib/python3.9/site-packages/torchvision/transforms/functional.py", line 284, in normalize
    tensor.sub_(mean).div_(std)
RuntimeError: output with shape [1, 32, 32] doesn't match the broadcast shape [3, 32, 32]

Process finished with exit code 1
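
The final RuntimeError is raised by torchvision's Normalize: the traceback ends in tensor.sub_(mean).div_(std) failing because a three-channel mean/std is applied to a single-channel (1x32x32) FashionMNIST tensor, which cannot be broadcast in place. The snippet below is a minimal, self-contained reproduction outside Avalanche; the normalization statistics in it are assumptions for illustration, not values taken from the issue.

import torch
from torchvision import transforms

# Stand-in for one grayscale FashionMNIST image resized to 32x32 (shape [1, 32, 32]).
img = torch.rand(1, 32, 32)

# A three-channel normalization applied to a one-channel tensor reproduces the error:
bad_norm = transforms.Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5))
# bad_norm(img)  # RuntimeError: output with shape [1, 32, 32] doesn't match
#                # the broadcast shape [3, 32, 32]

# Normalizing with single-channel statistics avoids the mismatch
# (0.2860/0.3530 are commonly quoted FashionMNIST values, used here only as an example).
good_norm = transforms.Normalize(mean=(0.2860,), std=(0.3530,))
print(good_norm(img).shape)  # torch.Size([1, 32, 32])

# Alternatively, expand the image to three channels before normalizing,
# e.g. transforms.Grayscale(num_output_channels=3) applied before ToTensor().

One plausible reading, consistent with the comment above, is that SplitFMnist's default transform pipeline was built with three-channel statistics while SplitMNIST's was not, which would make this a benchmark bug rather than a GDumb one.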
Read more comments on GitHub >

Top Results From Across the Web

GDumb: A Simple Approach that Questions Our Progress in ...
Given a memory budget, the sampler greedily stores samples from a data-stream while making sure that the classes are balanced, and, at inference,...
Read more >
Continual Learning With a Memory of Diverse Samples
where a model is strongly confident. Algorithm 1 summarizes our proposed diversity-aware memory update algorithm. Following GDumb [38], we also.
Read more >
Continual Recognition with Adaptive Memory Update
Our continual recognition model with adaptive memory update is capable of overcoming the problem of catastrophic forgetting with various new ...
Read more >
GDumb: A Simple Approach that Questions ... - ResearchGate
To validate this, we propose GDumb that (1) greedily stores samples in memory as they come and; (2) at test time, trains a...
Read more >
