
The work for 2.0 is really taking root. After getting recurrent neural networks working in v1 (not yet released), I could see plenty of ways to make them simpler, but I also wanted to address the architecture as a whole. We can unify the strategies behind recurrent and feedforward neural networks so that more advanced concepts, like convolution, can easily be used. I also want the APIs for feedforward and recurrent nets to be as close as possible, so you could simply swap out the underlying network type and end up with a recurrent net. After a lot of careful consideration, this is what I’ve got:

Nets will have at least two means of composition, imperative and functional. They can be mixed.

This is an imperatively defined feedforward convolutional neural network:

new brain.FeedForward({
  inputLayer: () => input({ width: 24, height: 24 }),
  hiddenLayers: [
    // each entry is a function that receives the previous layer's output
    (input) => convolution({ filterCount: 8, filterWidth: 5, filterHeight: 5, padding: 2, stride: 1 }, input),
    (input) => relu(input),
    (input) => pool({ padding: 2, stride: 2 }, input),
    (input) => convolution({ padding: 2, stride: 1, filterCount: 16, filterWidth: 5, filterHeight: 5 }, input),
    (input) => relu(input),
    (input) => pool({ width: 3, stride: 3 }, input),
    (input) => softMax({ width: 10 }, input)
  ],
  outputLayer: (input) => output({ width: 10 }, input)
});

This is the same feedforward convolutional neural network, defined functionally:

new brain.FeedForward({
  inputLayer: () => input({ width: 24, height: 24 }),
  hiddenLayers: [
    // read inside-out: the innermost convolution runs first
    (input) =>
      softMax({ width: 10 },
        pool({ width: 3, stride: 3 },
          relu(
            convolution({ padding: 2, stride: 1, filterCount: 16, filterWidth: 5, filterHeight: 5 },
              pool({ padding: 2, stride: 2 },
                relu(
                  convolution({ filterCount: 8, filterWidth: 5, filterHeight: 5, padding: 2, stride: 1 },
                    input
                  )
                )
              )
            )
          )
        )
      )
  ],
  outputLayer: (input) => output({ width: 10 }, input)
});

Both of these create exactly the same network. The reason for the two strategies is that I’d rather not force anyone to learn a new language: the functional approach matches up to the math nearly one to one, while the imperative version is for when you just don’t want to think in reverse, and lists are nice. For recurrent nets I think the functional style will shine more once you start the mind-bending process of recursion. I’m open to suggestions! The two styles can also be mixed, as sketched below.
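
As a rough illustration of mixing the two styles (a sketch only, reusing the layer functions from the examples above rather than any confirmed API), an imperative hidden-layer list can contain functionally composed entries:

new brain.FeedForward({
  inputLayer: () => input({ width: 24, height: 24 }),
  hiddenLayers: [
    // imperative entry: one layer per list item
    (input) => convolution({ filterCount: 8, filterWidth: 5, filterHeight: 5, padding: 2, stride: 1 }, input),
    // functional entry: relu and pool composed in a single item
    (input) => pool({ padding: 2, stride: 2 }, relu(input)),
    (input) => softMax({ width: 10 }, input)
  ],
  outputLayer: (input) => output({ width: 10 }, input)
});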

Later we can apply the same composition to do:

const net = new brain.Recurrent({ inputLayer, hiddenLayers, outputLayer });
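
To spell that out a little (a hedged sketch only: the recurrentInput parameter and the lstm layer function are hypothetical here, not part of the proposal above), the same composition might look like:

const net = new brain.Recurrent({
  inputLayer: () => input({ width: 10 }),
  hiddenLayers: [
    // hypothetical recurrent layer function; the second argument feeds the
    // previous time step's output back in, which is where the recursion begins
    (input, recurrentInput) => lstm({ height: 20 }, input, recurrentInput)
  ],
  outputLayer: (input) => output({ width: 10 }, input)
});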

I’m sure there will be more, but this is a start.

Issue Analytics

  • State: closed
  • Created: 6 years ago
  • Reactions: 8
  • Comments: 5 (5 by maintainers)

Top GitHub Comments

1 reaction
robertleeplummerjr commented, Dec 28, 2017

Now that GPU.js has been released as v1.0.0, we are full on! For reference: https://gist.github.com/robertleeplummerjr/4aaf8afb177c9c80f8452d5025117e26

0 reactions
robertleeplummerjr commented, Oct 12, 2017

So here is the latest thinking on LSTM and recurrent behavior. We need to define a single set of programs that can be reused, and a simplified set of terms to describe them.

Here is the commit: https://github.com/BrainJS/brain.js/commit/d43b1ee8644bfffa7eb27da3fde4ef2e32c819dd#diff-b84ad68a55b45bd31698c5356097abf3R15

The idea behind it:

We need a way to define groups of layers that compose into a single layer. LSTM, for example, is conceptually simple in that it uses the same technology as an RNN, just more of it. If we describe each step in the LSTM as a “layer”, things get very hairy; if instead we describe each mathematical operation as part of the layer, then the “layer” is really composed of many “layers”. But don’t dwell on that. Just think of it this way: if we want a complex layer, it can contain multiple mathematical operations. To simplify (ha!) the understanding of this, we have the concept of a “group”. I’ll try my best to describe it:

export default class SuperLayer extends Group {
  constructor(settings) {
    super(settings);
    // set up the layer here: sizes, weights, etc.
  }

  static kernel(settings) {
    // the kernel is built once; the function it returns can be
    // composed any number of times, one call per layer instance
    return (layer, inputLayer, previousOutputs) => {
      // `layer` is the instance passed in above
      // ...tons of layers and math...
      return result;
    };
  }
}

So the concept here is that the kernel gets created once, but the layer can be composed any number of times. The recurrent behavior stacks the layers; inside the kernel a layer can use add, multiply, sigmoid, relu, etc., and out comes the result. So we have one program to manage all the operations. A rough sketch of what such a group might look like follows.
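
Here is a minimal, hedged sketch of a gate-like group (the add, multiply, and sigmoid helpers, and the layer.weights, layer.recurrentWeights, and layer.biases properties, are assumptions drawn from the operations named above, not a committed API):

export default class GateLayer extends Group {
  constructor(settings) {
    super(settings);
    // weights, recurrentWeights, and biases would be set up here
  }

  static kernel(settings) {
    // created once; composed per layer instance
    return (layer, inputLayer, previousOutputs) => {
      // gate = sigmoid(weights * input + recurrentWeights * previousOutputs + biases)
      return sigmoid(
        add(
          add(
            multiply(layer.weights, inputLayer),
            multiply(layer.recurrentWeights, previousOutputs)
          ),
          layer.biases
        )
      );
    };
  }
}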

Feedback welcomed!
