
Bi-Directional Context Referencing (PubSub)

See original GitHub issue

Let’s say that we’re using NEAT to evolve neural networks, and that within that context we’re trying to keep track of innovationIDs - i.e. which connections are new to the population. In other words, when using NEAT, connections get created and destroyed all the time (almost constantly); what we want to keep track of is discoveries (i.e. innovations) in the topology of any network in a population (i.e. group) of networks. By giving each neuron in a network an ID and using Cantor Pairing, we can track any time that a connection is “structurally” innovative. The Cantor pairing function maps the unique integer IDs of two nodes to a unique ID for the connection between them; put another way, even if a connection between two nodes gets destroyed, we will still know that it was already introduced to the population a while ago.
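For concreteness, here is a minimal sketch of such a registry in JavaScript (`cantorPair`, `registerConnection`, and the `innovations` map are illustrative names for this example, not Liquid Carrot’s actual API):

```js
// Cantor pairing: maps an ordered pair of non-negative integers (from, to)
// to a single unique non-negative integer.
function cantorPair(from, to) {
  return ((from + to) * (from + to + 1)) / 2 + to;
}

// Population-level registry of structural innovations.
// Key: Cantor ID of the (from, to) node pair. Value: innovation number.
const innovations = new Map();
let nextInnovation = 0;

function registerConnection(fromNodeID, toNodeID) {
  const key = cantorPair(fromNodeID, toNodeID);
  if (!innovations.has(key)) {
    // First time this connection topology appears anywhere in the population.
    innovations.set(key, nextInnovation++);
  }
  // The same innovation ID comes back even if the connection was destroyed
  // and later re-created.
  return innovations.get(key);
}

registerConnection(3, 7); // => 0 (new innovation)
registerConnection(3, 7); // => 0 (already known)
registerConnection(7, 3); // => 1 (different direction, different pair)
```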

To be able to keep track of all of these things, we need to be able to have connections “communicate” with the population - and vice versa. The “classes” in between should work as universal translators and communicators between the two.

Allowing networks to create new connections that are potentially innovative, while concurrently updating the population’s list of structural innovations.

This can be done with EventEmitters and EventListeners, where multiple contextually relevant objects communicate with each other through event-based triggers.
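As an illustration, here is a hedged sketch using Node’s built-in EventEmitter; the Connection/Population classes and the “new-connection”/“innovation” event names are assumptions made for the example, not the library’s actual classes:

```js
const EventEmitter = require('events');

// A connection announces itself; it does not need to know who is listening.
class Connection extends EventEmitter {
  constructor(from, to) {
    super();
    this.from = from;
    this.to = to;
  }
  announce() {
    this.emit('new-connection', { from: this.from, to: this.to });
  }
}

// The population listens to connections and records structural innovations,
// and can emit events back down to the networks - hence "bi-directional".
class Population extends EventEmitter {
  constructor() {
    super();
    this.innovations = new Map();
    this.nextInnovation = 0;
  }
  track(connection) {
    connection.on('new-connection', ({ from, to }) => {
      const key = ((from + to) * (from + to + 1)) / 2 + to; // Cantor pair
      if (!this.innovations.has(key)) {
        this.innovations.set(key, this.nextInnovation++);
        this.emit('innovation', { from, to, id: this.innovations.get(key) });
      }
    });
  }
}

const population = new Population();
population.on('innovation', (info) => console.log('new innovation', info));

const connection = new Connection(3, 7);
population.track(connection);
connection.announce(); // logs: new innovation { from: 3, to: 7, id: 0 }
```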

Additional Information: @christianechevarria and @luiscarbonell contemplated this idea while implementing “full-NEAT” into Liquid Carrot.

Issue Analytics

  • State: open
  • Created: 4 years ago
  • Comments: 6 (4 by maintainers)

Top GitHub Comments

1 reaction
luiscarbonell commented, Nov 10, 2019

I was looking for a way to manage a bunch of events. I’m trying to figure out an …

Expanding on this a bit…

Basically, there are a bunch of different class instances emitting events - willy-nilly - and there are a bunch of things that depend on those processes “finishing” (or emitting an “end” event) before they can continue working… and if the work an individual object needs depends on multiple events being triggered, then you can get a little bit of a crazy scenario.

So, as a way to handle this, they created an architecture built around a simple idea:

Events -> EventHandlers -> Streams -> StreamHandlers -> Jobs -> JobHandlers -> Events

Basically, at any given point a bunch of Events can be triggered, and they need to be sorted into buckets (i.e. Streams). Streams get stuffed with information on an “on arrival” basis, and the information in them is not necessarily sorted or synced across streams. So, frequently, before the information can be processed you need a “StreamHandler” that reads the first items out of each stream and groups them with the first items out of the other streams into jobs - or pushes them further back in the streams so the next pieces of information can be grouped/processed.

Information that successfully gets grouped from the streams is turned into a “Job” - the smallest executable piece of code. Some of those jobs can be done in parallel; some depend on previous jobs being done. The JobHandler manages that execution pattern - and, once done, triggers a bunch of new events.
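A rough sketch of that flow, assuming made-up `Stream`, `eventHandler`, `streamHandler`, and `jobHandler` names (the comment doesn’t pin down an API):

```js
// Streams are just buckets that buffer whatever arrives, in arrival order.
class Stream {
  constructor(name) { this.name = name; this.buffer = []; }
  push(item) { this.buffer.push(item); }
  shift() { return this.buffer.shift(); }
  get length() { return this.buffer.length; }
}

// EventHandler: sorts an incoming event into the right bucket (Stream).
function eventHandler(event, streams) {
  if (!streams[event.type]) streams[event.type] = new Stream(event.type);
  streams[event.type].push(event.payload);
}

// StreamHandler: only when every stream has something at its head does it
// group those heads into a Job; otherwise it leaves the data in the streams.
function streamHandler(streams, makeJob) {
  const names = Object.keys(streams);
  if (names.length > 0 && names.every((n) => streams[n].length > 0)) {
    const inputs = names.map((n) => streams[n].shift());
    return makeJob(inputs);
  }
  return null; // not enough synced data yet
}

// JobHandler: runs jobs (serially here; independent jobs could run in
// parallel) and emits follow-up events as each one finishes.
async function jobHandler(jobs, emit) {
  for (const job of jobs) {
    const result = await job();
    emit({ type: 'job-done', payload: result });
  }
}

// Usage: events trickle in, get bucketed, grouped into a job, then executed.
const streams = {};
eventHandler({ type: 'weights', payload: [0.1, 0.2] }, streams);
eventHandler({ type: 'inputs', payload: [1, 0] }, streams);
const job = streamHandler(streams, (inputs) => () => inputs.flat());
if (job) jobHandler([job], (e) => console.log(e));
```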


1 reaction
luiscarbonell commented, Nov 10, 2019

@GavinRay97

You might find the complexity level and cognitive burden can be kept much lower if you use an Observer/Observable pattern instead of Publisher/Subscriber.
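For reference, a minimal Observer/Observable sketch of what that could look like (illustrative only, not Liquid Carrot code): each observable keeps its own subscriber list and pushes values to those subscribers directly, so there is no shared event bus to reason about.

```js
// Each Observable owns its subscriber list; no global event bus.
class Observable {
  constructor() { this.observers = new Set(); }
  subscribe(observer) {
    this.observers.add(observer);
    return () => this.observers.delete(observer); // unsubscribe handle
  }
  next(value) {
    for (const observer of this.observers) observer(value);
  }
}

const innovations = new Observable();
const unsubscribe = innovations.subscribe((v) => console.log('innovation:', v));
innovations.next({ from: 3, to: 7 }); // logs: innovation: { from: 3, to: 7 }
unsubscribe();
```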

I was looking for a way to manage a bunch of events. I’m trying to figure out an API that I can use to serialize/queue up everything into “bite-size” operations that can be reduced/joined into matrix operations for GPU consumption.

I think this will be how we deal with the variable size matrix problem.

I was looking at stream-merging utilities and event queues - the idea was to go from an event to a queue of separate streams that can be grouped together, chunked, and queried as an array.
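One way that grouping/chunking step could look, as a hedged sketch (`opQueue`, `enqueue`, and `drainIntoBatches` are hypothetical names): small operations are queued as they arrive and later grouped by kind and shape, so each group can be reduced into a single batched matrix call instead of many tiny ones.

```js
// Small operations are queued as they arrive...
const opQueue = [];

function enqueue(op) {
  // op: { kind: 'matmul', rows, cols, data }
  opQueue.push(op);
}

// ...then grouped by kind and shape, so each group can be handed off
// as one batched matrix operation for the GPU.
function drainIntoBatches() {
  const groups = new Map();
  for (const op of opQueue.splice(0)) {
    const key = `${op.kind}:${op.rows}x${op.cols}`;
    if (!groups.has(key)) groups.set(key, []);
    groups.get(key).push(op);
  }
  return [...groups.values()]; // each entry: ops of identical shape, chunk-ready
}

enqueue({ kind: 'matmul', rows: 2, cols: 3, data: [1, 2, 3, 4, 5, 6] });
enqueue({ kind: 'matmul', rows: 2, cols: 3, data: [6, 5, 4, 3, 2, 1] });
console.log(drainIntoBatches().length); // => 1 (both ops fit in one batch)
```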

Read more comments on GitHub >

Top Results From Across the Web

Pull subscriptions | Cloud Pub/Sub Documentation
The StreamingPull API relies on a persistent bidirectional connection to receive multiple messages as they become available.

Bi-directional messaging using Zero MQ pub sub pattern
I have a network of multiple consumers and producers that talk to each other in a bi-directional flow, i.e. a consumer can occasionally...

Publisher-Subscriber pattern - Azure Architecture Center
Learn about the Publisher-Subscriber pattern, which enables an application to announce events to many interested consumers asynchronously.

UA Part 14: PubSub Main Technology Features
The session is established by the OPC UA Client that must connect to the OPC UA Server before any data can be exchanged...

Messaging Pattern: Publish-Subscribe - A. Rothuis
In a previous post, we have seen messaging primitives: events, commands and queries. In this post, we will take an extensive look at...
