EpochOutputStorage
🚀 Feature
As discussed with @vfdev-5 in #309, it could sometimes be useful to provide a handler that stores the full output prediction history of an epoch, e.g., for visualization purposes. The following is my first attempt at implementing it.
import torch
from ignite.engine import Events


class EpochOutputStore(object):
    """EpochOutputStore handler to save the output prediction and target history
    after every epoch; this can be useful, e.g., for visualization purposes.

    Note:
        This can potentially lead to a memory error if the output data is
        larger than the available RAM.

    Args:
        output_transform (callable, optional): a callable that is used to
            transform the :class:`~ignite.engine.engine.Engine`'s
            ``process_function``'s output into the form `(y_pred, y)`, e.g.,
            ``lambda x, y, y_pred: (y_pred, y)``.

    Examples:
        .. code-block:: python

            from ignite.engine import Events, create_supervised_trainer, create_supervised_evaluator
            from ignite.metrics import Accuracy

            eos = EpochOutputStore()
            trainer = create_supervised_trainer(model, optimizer, loss)
            train_evaluator = create_supervised_evaluator(model, metrics={"acc": Accuracy()})
            eos.attach(train_evaluator)

            @trainer.on(Events.EPOCH_COMPLETED)
            def log_training_results(engine):
                train_evaluator.run(train_loader)
                y_pred, y = eos.get_output()
                # plotting / visualization code goes here
    """

    def __init__(self, output_transform=lambda x: x):
        self.predictions = None
        self.targets = None
        self.output_transform = output_transform

    def reset(self):
        # Called at the start of every epoch to drop the previous epoch's history.
        self.predictions = []
        self.targets = []

    def update(self, engine):
        # Called after every iteration; stores the (transformed) engine output.
        y_pred, y = self.output_transform(engine.state.output)
        self.predictions.append(y_pred)
        self.targets.append(y)

    def attach(self, engine):
        engine.add_event_handler(Events.EPOCH_STARTED, self.reset)
        engine.add_event_handler(Events.ITERATION_COMPLETED, self.update)

    def get_output(self, to_numpy=False):
        # Concatenate the per-iteration tensors into a single tensor per epoch.
        prediction_tensor = torch.cat(self.predictions, dim=0)
        target_tensor = torch.cat(self.targets, dim=0)
        if to_numpy:
            prediction_tensor = prediction_tensor.cpu().detach().numpy()
            target_tensor = target_tensor.cpu().detach().numpy()
        return prediction_tensor, target_tensor
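To make the intended usage concrete, here is a self-contained sketch of how the handler above could be driven from a trainer and used with get_output(to_numpy=True); the tiny synthetic model and data are placeholders of my own, not part of the original proposal.

import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from ignite.engine import Events, create_supervised_trainer, create_supervised_evaluator
from ignite.metrics import Accuracy

# Tiny synthetic classification setup (placeholder data, chosen only for this sketch).
X = torch.randn(64, 10)
y = torch.randint(0, 2, (64,))
loader = DataLoader(TensorDataset(X, y), batch_size=16)

model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

trainer = create_supervised_trainer(model, optimizer, loss_fn)
evaluator = create_supervised_evaluator(model, metrics={"acc": Accuracy()})

eos = EpochOutputStore()
eos.attach(evaluator)

@trainer.on(Events.EPOCH_COMPLETED)
def run_validation(engine):
    evaluator.run(loader)
    # NumPy arrays are convenient for matplotlib / scikit-learn style analysis.
    y_pred, y_true = eos.get_output(to_numpy=True)
    print(f"epoch {engine.state.epoch}: predictions {y_pred.shape}, targets {y_true.shape}")

trainer.run(loader, max_epochs=2)

The to_numpy=True path is what makes the stored history directly usable for plotting without any further conversion.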
Top GitHub Comments
Hi @vfdev-5, yes, I would agree that saving everything in a `data` list is more general. Maybe I am a bit too focused on my understanding of `output` being the output of the network. I will try to figure out the PR process and send an update.
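For illustration, here is a minimal sketch (my own, not taken from the issue) of the more general variant mentioned above, which appends whatever the engine produces to a single `data` list instead of splitting it into predictions and targets:

from ignite.engine import Events

class EpochOutputStore:
    """Stores the engine's output for every iteration of an epoch in one list."""

    def __init__(self, output_transform=lambda x: x):
        self.data = None
        self.output_transform = output_transform

    def reset(self):
        self.data = []

    def update(self, engine):
        # Keep whatever the process_function returned, after the optional transform.
        self.data.append(self.output_transform(engine.state.output))

    def attach(self, engine):
        engine.add_event_handler(Events.EPOCH_STARTED, self.reset)
        engine.add_event_handler(Events.ITERATION_COMPLETED, self.update)

Keeping a single list avoids assuming that the output is a (y_pred, y) pair, at the cost of leaving any concatenation to the user.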
@ZhiliangWu thank you for this FR! It looks good!
EDIT: it is a FR and I reacted as if it were a PR, sorry 😊 The following applies only if you would like to contribute a PR.
Please follow the contribution guidelines: https://github.com/pytorch/ignite/blob/master/CONTRIBUTING.md
In particular, you have to open a Pull Request (PR) on GitHub. If you are not familiar with the process, see https://github.com/pytorch/ignite/blob/master/CONTRIBUTING.md#send-a-pr
In addition, could you provide some tests? It would be very nice to add this handler to the documentation. I suppose it will be located in `ignite.contrib.handlers`, so I think the relevant doc is https://github.com/pytorch/ignite/blob/master/docs/source/contrib/handlers.rst
Thank you again 😊
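Regarding the request for tests, a hedged pytest-style sketch follows; it assumes the EpochOutputStore class proposed above is available in the test module, and the dummy engine and tensor shapes are my own choices rather than the project's existing test conventions.

import torch
from ignite.engine import Engine

def test_epoch_output_store():
    # Dummy process_function that returns (y_pred, y), like a supervised evaluator.
    def process_fn(engine, batch):
        x, y = batch
        return x + 1.0, y

    engine = Engine(process_fn)
    eos = EpochOutputStore()
    eos.attach(engine)

    batches = [(torch.full((4, 2), float(i)), torch.full((4,), float(i))) for i in range(3)]
    engine.run(batches, max_epochs=1)

    y_pred, y = eos.get_output()
    assert y_pred.shape == (12, 2)
    assert y.shape == (12,)
    assert torch.equal(y, torch.cat([b[1] for b in batches]))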