Example `eval_plugin` not working.
I got an error during the test phase of the eval_plugin example. I am executing python eval_plugin.py inside the avalanche-env virtualenv.
This is the output:
Starting experiment...
Start of step: 0
Current Classes: [4, 5]
-- >> Start of training phase << --
-- Starting training on step 0 (Task 0) --
.................................................. [ 50 iterations]
.................................................. [ 100 iterations]
.............
Epoch 0 ended. Loss: 0.743785, accuracy 71.3309%
.................................................. [ 50 iterations]
.................................................. [ 100 iterations]
.............
Epoch 1 ended. Loss: 0.368931, accuracy 85.3947%
.................................................. [ 50 iterations]
.................................................. [ 100 iterations]
.............
Epoch 2 ended. Loss: 0.302345, accuracy 88.6087%
.................................................. [ 50 iterations]
.................................................. [ 100 iterations]
.............
Epoch 3 ended. Loss: 0.247078, accuracy 91.5032%
-- >> End of training phase << --
Training completed
Computing accuracy on the whole test set
-- >> Start of test phase << --
-- Starting test on step 0 (Task 0) --
+++++++++++++++++++
> Test on step 0 (Task 0) ended. Loss: 0.141121, accuracy 94.7705%
Traceback (most recent call last):
File "eval_plugin.py", line 105, in <module>
main()
File "eval_plugin.py", line 101, in main
results.append(cl_strategy.test(scenario.test_stream, num_workers=4))
File "/home/cossu/avalanche/avalanche/training/strategies/base_strategy.py", line 178, in test
self.after_test_step(**kwargs)
File "/home/cossu/avalanche/avalanche/training/strategies/base_strategy.py", line 350, in after_test_step
p.after_test_step(self, **kwargs)
File "/home/cossu/avalanche/avalanche/training/plugins.py", line 425, in after_test_step
self._update_metrics(evaluation_data)
File "/home/cossu/avalanche/avalanche/training/plugins.py", line 327, in _update_metrics
metric_result = metric(evaluation_data)
File "/home/cossu/avalanche/avalanche/evaluation/abstract_metric.py", line 164, in __call__
metric_result = listener(eval_data)
File "/home/cossu/avalanche/avalanche/evaluation/metrics/confusion_matrix.py", line 112, in result_emitter
cm_image = self._image_creator(metric_value)
File "/home/cossu/avalanche/avalanche/evaluation/metric_utils.py", line 66, in default_cm_image_creator
xticks_rotation=xticks_rotation, values_format=values_format)
File "/home/cossu/miniconda3/envs/avalanche-env/lib/python3.7/site-packages/sklearn/metrics/_plot/confusion_matrix.py", line 109, in plot
xlabel="Predicted label")
File "/home/cossu/miniconda3/envs/avalanche-env/lib/python3.7/site-packages/matplotlib/artist.py", line 1071, in set
return self.update(props)
File "/home/cossu/miniconda3/envs/avalanche-env/lib/python3.7/site-packages/matplotlib/artist.py", line 974, in update
ret = [_update_property(self, k, v) for k, v in props.items()]
File "/home/cossu/miniconda3/envs/avalanche-env/lib/python3.7/site-packages/matplotlib/artist.py", line 974, in <listcomp>
ret = [_update_property(self, k, v) for k, v in props.items()]
File "/home/cossu/miniconda3/envs/avalanche-env/lib/python3.7/site-packages/matplotlib/artist.py", line 971, in _update_property
return func(v)
File "/home/cossu/miniconda3/envs/avalanche-env/lib/python3.7/site-packages/matplotlib/axes/_base.py", line 3815, in set_yticklabels
minor=minor, **kwargs)
File "/home/cossu/miniconda3/envs/avalanche-env/lib/python3.7/site-packages/matplotlib/axis.py", line 1707, in set_ticklabels
for t in ticklabels:
TypeError: 'NoneType' object is not iterable
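The last frame shows matplotlib's Axis.set_ticklabels iterating over its ticklabels argument, so the error means a None label list reached the confusion-matrix plot. The sketch below is a hypothetical, stripped-down mock of that code path (the function name mirrors the traceback, but the body is not matplotlib's actual implementation); it shows how passing None instead of a list produces exactly this TypeError.

```python
def set_ticklabels(ticklabels):
    """Simplified mock of matplotlib's Axis.set_ticklabels loop."""
    labels = []
    for t in ticklabels:  # raises TypeError when ticklabels is None
        labels.append(str(t))
    return labels

# Works with a proper label list:
set_ticklabels([0, 1, 2])

# Reproduces the failure mode from the traceback:
try:
    set_ticklabels(None)
except TypeError as e:
    print(e)  # 'NoneType' object is not iterable
```

This suggests the confusion-matrix image creator handed sklearn's plotting code an empty/None set of display labels in this particular run, which fits the intermittent behavior the maintainers describe below.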
Issue Analytics
- State:
- Created 3 years ago
- Comments: 12 (6 by maintainers)
Top GitHub Comments
I’m also unable to reproduce the issue described by @AndreaCossu. Even stranger, I’m unable to reproduce the issue described by @lrzpellegrini! I tried the script 5 times, and each time it completed without any error. As pointed out by @vlomonaco, I think we have some kind of intermittent memory bug that behaves differently across OSes/environments/PCs.
With pytorch 1.7.0 (py3.6_cuda11.0.221_cudnn8.0.3_0) I’m not able to replicate the issue. It may indeed be related to the PyTorch version. We can close this issue for now if you agree.
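Since the comments suggest the failure depends on the installed package versions, it can help to attach the exact environment to a reproduction report. The helper below is a hypothetical convenience function (not part of Avalanche); it collects version strings for the packages implicated in the traceback and degrades gracefully when a package is missing.

```python
import importlib

def report_versions(packages=("torch", "sklearn", "matplotlib")):
    """Collect version strings for the packages implicated in this issue."""
    versions = {}
    for name in packages:
        try:
            mod = importlib.import_module(name)
            versions[name] = getattr(mod, "__version__", "unknown")
        except ImportError:
            versions[name] = "not installed"
    return versions

print(report_versions())
```

Pasting this output alongside the traceback would make it easier to confirm whether the bug tracks a specific PyTorch, scikit-learn, or matplotlib release.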