Calculating number of linear regions
Dear authors,
I have a question about calculating the number of linear regions. It seems that in TE-NAS, input images are augmented to be of size (1000, 1, 3, 3):

```python
lrc_model = Linear_Region_Collector(input_size=(1000, 1, 3, 3), sample_batch=3, dataset=xargs.dataset, data_path=xargs.data_path, seed=xargs.rand_seed)
```
Could you explain what the reason is behind this?
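A hedged reading of that tuple (my own sketch, not the repository's actual sampling code): it appears to describe 1000 random single-channel 3x3 inputs, so each sample lives in a 9-dimensional input space. A small input space keeps the number of distinct ReLU activation patterns, which the collector uses as a proxy for linear regions, tractable to probe with 1000 samples:

```python
import torch

# Hypothetical illustration of input_size=(1000, 1, 3, 3):
# 1000 samples, each a single-channel 3x3 "image".
inputs = torch.randn(1000, 1, 3, 3)

# Flattened, every sample is a point in R^9, so a modest number of
# random probes can cover the input space reasonably densely.
flat = inputs.flatten(1)
print(flat.shape)  # torch.Size([1000, 9])
```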
Issue Analytics
- State:
- Created: 2 years ago
- Comments: 6
Hello, I’m trying to calculate the number of linear regions, but the linear region collector always returns the number of dimensions (the same number for all networks).

Here are the functions from lib/procedures/linear_region_counter.py which compute the number of linear regions.
However, as far as I understand, if we have a ReLU network, then after each layer the outputs are non-negative, so torch.sign will return 0 or 1. So, according to the first function, the self.activations matrix will contain only 0s and 1s as its elements. In this case, the result of lines (1), (2) and (3) will always be an identity matrix. Could you explain the idea behind the algorithm? Thank you.

Hi @taoyang1122 and @maryanpetruk,
We used V100 GPUs to train on ImageNet. It is true that training from scratch on ImageNet is slow: 4 to 5 days is very common.
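On the identity-matrix question above, here is a minimal sketch of the pattern-counting idea (my own reconstruction under stated assumptions, not the exact code in linear_region_counter.py). Treat each row of the 0/1 activations matrix as one sample's ReLU on/off pattern. The matrix products compare every pair of rows, and the result is an identity matrix only when all patterns happen to be distinct: whenever two samples share a pattern, the corresponding off-diagonal entry also reaches the neuron count, and the final reduction collapses those duplicates into a single region:

```python
import torch

def count_activation_patterns(activations: torch.Tensor) -> int:
    """Sketch of the idea (hypothetical helper, not the repo's API).

    `activations` is an (N, D) 0/1 matrix: row i is the ReLU on/off
    pattern of sample i across D neurons. Two samples lie in the same
    linear region, as far as this probe can tell, iff their rows match
    exactly, so we count the number of distinct rows.
    """
    n_samples, n_neurons = activations.shape
    # Pairwise agreement counts: matches on "on" neurons plus matches
    # on "off" neurons.
    same = activations @ activations.T + (1 - activations) @ (1 - activations).T
    # Rows that agree on all D neurons share a pattern. This is NOT an
    # identity matrix in general: off-diagonal entries equal n_neurons
    # whenever two different samples share a pattern.
    same_pattern = (same == n_neurons).float()
    # Each sample contributes 1/(size of its pattern group); the sum is
    # the number of distinct patterns.
    return int(torch.sum(1.0 / same_pattern.sum(dim=1)).item())

acts = torch.tensor([[1., 0., 1.],
                     [1., 0., 1.],
                     [0., 1., 1.]])
print(count_activation_patterns(acts))  # -> 2 distinct patterns
```

Under this reading, the collector returns the number of distinct activation patterns hit by the random samples, which is a sample-based lower bound on the network's total number of linear regions.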