Bi-Directional Context Referencing (PubSub)
Let’s say that we’re using NEAT to evolve neural networks, and that within that context we’re trying to keep track of innovation IDs - i.e. which connections are new to the population. In other words, when using NEAT, connections get created and destroyed all the time (almost constantly); what we want to keep track of is discoveries (i.e. innovations) in the topology of any network in a population (i.e. group) thereof. By giving each neuron in a network an ID and using Cantor Pairing, we can track any time that a connection is “structurally” innovative. Cantor Pairing lets us map the unique integer IDs of two nodes to a unique ID for the connection between them; put another way, even if a connection between two nodes gets destroyed, we will know that it was already introduced to the population a while ago.
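A minimal sketch of how Cantor pairing could back such an innovation registry - the function and map names here are illustrative, not the library’s actual API:

```js
// Cantor pairing function: maps the ordered pair (a, b) to a unique natural number.
const cantorPair = (a, b) => ((a + b) * (a + b + 1)) / 2 + b;

// Population-level registry of structural innovations discovered so far.
const innovations = new Map(); // cantor key -> innovation ID

function getInnovationID(fromNodeID, toNodeID) {
  const key = cantorPair(fromNodeID, toNodeID);
  // If this exact connection was ever introduced before, reuse its innovation ID;
  // otherwise register it as a brand-new structural innovation.
  if (!innovations.has(key)) innovations.set(key, innovations.size);
  return innovations.get(key);
}

// Even if the connection 3 -> 7 is later destroyed and re-created,
// it resolves to the same innovation ID.
console.log(getInnovationID(3, 7)); // 0 (new innovation)
console.log(getInnovationID(4, 2)); // 1 (new innovation)
console.log(getInnovationID(3, 7)); // 0 (already known)
```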
To be able to keep track of all of this, we need to be able to have connections “communicate” with the population - and vice versa. The “classes” in between should work as universal translators and communicators between the two classes.
This allows networks to create new, potentially innovative connections while concurrently updating the population’s list of structural innovations.
This can be done with EventEmitters and EventListeners, where multiple contextually relevant objects communicate with each other via event-based triggers.
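As a rough illustration, here is how that mediation could look with Node’s EventEmitter: a network announces a new connection as an event, and the population answers back with the innovation ID. The class names and event names are hypothetical, not Liquid Carrot’s actual implementation:

```js
const EventEmitter = require('events');

class Population extends EventEmitter {
  constructor() {
    super();
    this.innovations = new Map(); // cantor key -> innovation ID

    // Listen for structural changes announced by any network in the population.
    this.on('connection-created', ({ from, to }, reply) => {
      const key = ((from + to) * (from + to + 1)) / 2 + to; // Cantor pairing
      if (!this.innovations.has(key)) this.innovations.set(key, this.innovations.size);
      reply(this.innovations.get(key)); // tell the network which innovation ID to use
    });
  }
}

class Network {
  constructor(population) {
    this.population = population;
  }

  connect(from, to) {
    // Announce the new connection and receive its innovation ID back.
    this.population.emit('connection-created', { from, to }, (id) => {
      console.log(`connection ${from} -> ${to} has innovation ID ${id}`);
    });
  }
}

const population = new Population();
const network = new Network(population);
network.connect(3, 7); // connection 3 -> 7 has innovation ID 0
network.connect(3, 7); // connection 3 -> 7 has innovation ID 0 (already known)
```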
Additional Information
@christianechevarria and @luiscarbonell contemplated this idea while implementing “full-NEAT” into Liquid Carrot.
Top GitHub Comments
Expanding on this a bit…
Basically, there are a bunch of different class instances emitting events - willy-nilly - and there are a bunch of things that depend on those processes “finishing” (or emitting an “end” event) before they can continue working… and if the work that an individual object needs depends on multiple events being triggered, then you can get a little bit of a crazy scenario.
So as a way to handle this, they created an architecture built around a simple idea:
Events -> EventHandlers -> Streams -> StreamHandlers -> Jobs -> JobsHandlers -> Events
Basically, at any given point a bunch of Events can be triggered and need to be sorted into buckets (i.e. Streams). Streams are filled with information on an “on arrival” basis, and the information in them is not necessarily sorted or synced across streams; so frequently, before the information can be processed, you need a “StreamHandler” that reads the first items out of one stream and groups them with the first items out of other streams into jobs - or pushes them further back in the streams so the next items can get grouped/processed.
Information that successfully gets grouped from the streams gets turned into a “Job” - the smallest executable piece of code. Some of those jobs can be done in parallel; some depend on previous jobs being done. The JobHandler manages that execution pattern - and, once done, triggers a bunch of new events.
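As a rough, hypothetical sketch of that flow (none of these names come from the actual codebase): events arrive on a bus, get bucketed into streams, the heads of the streams get paired into jobs, and finished jobs emit fresh events:

```js
const EventEmitter = require('events');

const bus = new EventEmitter();

// Streams: buckets that events get sorted into as they arrive.
const streams = { mutations: [], evaluations: [] };

// EventHandlers: route raw events into the appropriate stream.
bus.on('mutation', (payload) => streams.mutations.push(payload));
bus.on('evaluation', (payload) => streams.evaluations.push(payload));

// StreamHandler: pairs the head of one stream with the head of another to form a job;
// if a counterpart has not arrived yet, the entry simply waits in its stream.
function drainStreams() {
  const jobs = [];
  while (streams.mutations.length && streams.evaluations.length) {
    jobs.push({ mutation: streams.mutations.shift(), evaluation: streams.evaluations.shift() });
  }
  return jobs;
}

// JobHandler: runs each job (the smallest executable unit) and emits follow-up events.
function runJobs(jobs) {
  for (const job of jobs) {
    // ...do the actual work here...
    bus.emit('job-done', job);
  }
}

bus.on('job-done', (job) => console.log('finished', job));

bus.emit('mutation', { genome: 1, type: 'ADD_CONN' });
bus.emit('evaluation', { genome: 1, fitness: 0.42 });
runJobs(drainStreams()); // finished { mutation: {...}, evaluation: {...} }
```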
@GavinRay97
I was looking for a way to manage a bunch of events. I’m trying to figure out an API that I can use to serialize/queue up everything into “bite-size” operations, that can be reduced/joined into matrix operations for GPU consumption.
I think this will be how we deal with the variable size matrix problem.
I was looking at stream-merging utilities and event queues - the idea was to go from an event to a queue of separate streams that can be grouped together, chunked, and queried as an array.
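A toy sketch of that direction: collect per-event operations in a queue, then chunk and zero-pad them into rectangular matrices for batching. The padding approach and all names here are assumptions for illustration, not a settled API:

```js
const queue = [];

// Each event contributes one small, "bite-size" operation - here just a row of weights.
function onActivationEvent(weights) {
  queue.push(weights);
}

// Chunk the queue into fixed-size groups that can each be treated as one matrix,
// sidestepping the variable-size problem by padding rows to a common width.
function chunkIntoMatrices(chunkSize, width) {
  const matrices = [];
  while (queue.length) {
    const chunk = queue.splice(0, chunkSize)
      .map((row) => [...row, ...Array(Math.max(0, width - row.length)).fill(0)]);
    matrices.push(chunk); // each chunk is now rectangular, ready to hand off as a batch
  }
  return matrices;
}

onActivationEvent([0.1, 0.2]);
onActivationEvent([0.3]);
onActivationEvent([0.4, 0.5, 0.6]);
console.log(chunkIntoMatrices(2, 3));
// [ [ [0.1, 0.2, 0], [0.3, 0, 0] ], [ [0.4, 0.5, 0.6] ] ]
```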