Alternative version / replacement of Replay Plugin
See original GitHub issue.
Currently, the Replay plugin keeps an external memory of patterns which is concatenated to the current step's dataset. Minibatches are then sampled from this combined dataset.
We could provide an additional Replay version, or replace the current one, which concatenates a random subset of the external memory to each minibatch. This way, there is no need to concatenate the memory to the entire dataset, and replay is more effective (even if it costs more in terms of computation).
This can be done in before_forward, which is only applied during the training phase.
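The mechanism can be sketched as follows. This is a hypothetical illustration, not the actual Avalanche plugin API: the class name, constructor arguments, and the use of plain Python lists instead of tensors are all simplifying assumptions.

```python
import random


class BatchReplaySketch:
    """Hypothetical sketch of the proposed behavior: instead of
    concatenating the external memory to the whole dataset, draw a
    random subset of the memory and append it to each minibatch."""

    def __init__(self, memory, mem_batch_size):
        # memory: list of (x, y) pairs collected from past experiences
        self.memory = memory
        self.mem_batch_size = mem_batch_size

    def before_forward(self, mb_x, mb_y):
        # Never request more examples than the memory holds.
        k = min(self.mem_batch_size, len(self.memory))
        replay = random.sample(self.memory, k)
        rx, ry = zip(*replay) if replay else ((), ())
        # Concatenate the sampled replay examples to the current minibatch.
        return list(mb_x) + list(rx), list(mb_y) + list(ry)
```

In a real implementation the concatenation would happen on tensors (e.g. torch.cat) inside the strategy's before_forward callback, so only the current minibatch ever grows, never the dataset itself.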
Issue Analytics
- State:
- Created 3 years ago
- Reactions: 3
- Comments: 15 (3 by maintainers)
Top GitHub Comments
Just a quick note: you can already split the experiences into datasets of 1 example each, so technically you could have a stream of data without "boundaries". Of course, that would be very inefficient in PyTorch. I agree with the idea of fixing this on the strategies side!
I see, but I don't think we would have to remove experiences. The data stream can still be divided into experiences; it's just that the methods should not take advantage of, or even be aware of, the boundaries between experiences. (E.g., the real-life data stream can happen to look just like the typical Split-MNIST scenario.) So I think a simple solution with a custom dataloader, like @AntonioCarta proposed, could already solve this issue (:
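The custom-dataloader idea above can be sketched roughly as below. This is a simplified, hypothetical illustration (the function name and the use of plain lists are assumptions, not the proposed Avalanche implementation): each yielded batch mixes current-experience data with memory samples, so the training loop never needs to see experience boundaries.

```python
import random


def replay_loader(current_data, memory, batch_size, mem_batch_size):
    """Hypothetical dataloader sketch: yield batches that combine up to
    batch_size current examples with mem_batch_size examples drawn at
    random from the external memory."""
    data = list(current_data)
    random.shuffle(data)
    for i in range(0, len(data), batch_size):
        cur = data[i:i + batch_size]
        mem = random.sample(memory, min(mem_batch_size, len(memory)))
        yield cur + mem
```

A real version would wrap a torch.utils.data.DataLoader and collate tensors, but the batching logic would be the same.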