
Cannot find multigrid architecture

See original GitHub issue

Hi @rishizek, thanks for your implementation of DeepLabV3, but I cannot find the multigrid architecture you mention in the Evaluation section of the README: "repo | MG(1,2,4)+ASPP(6,12,18)+Image Pooling | 16 | 76.42%". Can you tell me where you put the code for this multigrid architecture?

```python
blocks = [
    resnet_v2_block('block1', base_depth=64, num_units=3, stride=2),
    resnet_v2_block('block2', base_depth=128, num_units=4, stride=2),
    resnet_v2_block('block3', base_depth=256, num_units=23, stride=2),
    resnet_v2_block('block4', base_depth=512, num_units=3, stride=1),
]
```

Issue Analytics

  • State: open
  • Created 5 years ago
  • Comments: 6 (2 by maintainers)

Top GitHub Comments

1 reaction
rishizek commented, Jun 7, 2018

Hi @FlyingIce1, thank you for your interest in the repo.

It is somewhat buried, but the multigrid architecture is implemented in the slim library. The key parameter is output_stride, whose documentation you can find here and here. The default output stride of ResNet is 32, as you may know from the ResNet and DeepLab papers. When you set output_stride = 16, the slim module automatically produces MG(1,2,4). More concretely, if you set output_stride = 16, then once current_stride reaches 16 here, the rate parameter (also known as the atrous convolution rate) here is multiplied from that point on, instead of the stride. This is what creates MG(1,2,4). The resnet_v2_block() functions you mentioned eventually return a resnet_utils.Block() instance here, if you read the code carefully, so it is fine to overwrite the rate parameter later on.

I hope this answers your question.
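The stride-to-rate bookkeeping described in that comment can be sketched without TensorFlow. The snippet below is a simplified model of what slim's resnet_utils.stack_blocks_dense does, not the library's actual code; the function name and return format are illustrative assumptions:

```python
def effective_ops(unit_strides, output_stride):
    """Model slim's stack_blocks_dense bookkeeping: units apply their real
    stride until the accumulated stride reaches output_stride; after that,
    each unit runs with stride=1 and the atrous rate absorbs the stride."""
    current_stride, rate = 1, 1
    ops = []
    for stride in unit_strides:
        if output_stride is not None and current_stride == output_stride:
            # Target reached: convert the unit's stride into a rate multiply.
            ops.append(('rate', rate))
            rate *= stride
        else:
            ops.append(('stride', stride))
            current_stride *= stride
    return ops

# Four striding units with a target output stride of 4: the first two
# stride normally, the last two become atrous convolutions instead.
print(effective_ops([2, 2, 2, 2], output_stride=4))
# [('stride', 2), ('stride', 2), ('rate', 1), ('rate', 2)]
```

With output_stride=None the conversion never triggers and every unit keeps its real stride, which is the default output stride 32 behavior mentioned above.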

0 reactions
northeastsquare commented, Apr 16, 2019

@luke-evans-liu @a7b23 @rishizek No matter whether output_stride=16 or 8, the atrous rate is always (1,1,1). My reasoning is as follows, so where am I wrong? Because of https://github.com/tensorflow/tensorflow/blob/r1.8/tensorflow/contrib/slim/python/slim/nets/resnet_v2.py#L208, output_stride = output_stride / 4. Every time https://github.com/tensorflow/tensorflow/blob/r1.8/tensorflow/contrib/slim/python/slim/nets/resnet_utils.py#L216 runs current_stride *= unit.get('stride', 1), current_stride stays 1, since unit.get('stride', 1) == 1, while output_stride is 4 or 2 (after the division above), because at https://github.com/tensorflow/tensorflow/blob/r1.8/tensorflow/contrib/slim/python/slim/nets/resnet_v2.py#L276 block4 has stride=1. So this line, https://github.com/tensorflow/tensorflow/blob/23c218785eac5bfe737eec4f8081fd0ef8e0684d/tensorflow/contrib/slim/python/slim/nets/resnet_utils.py#L211, will never be executed.

So can I change the code manually at https://github.com/tensorflow/tensorflow/blob/23c218785eac5bfe737eec4f8081fd0ef8e0684d/tensorflow/contrib/slim/python/slim/nets/resnet_utils.py#L210 to make rate=[1,2,4]? That is to say, change the code as follows:

```python
for ib, block in enumerate(blocks):
  with variable_scope.variable_scope(block.scope, 'block', [net]) as sc:
    for i, unit in enumerate(block.args):
      if output_stride is not None and current_stride > output_stride:
        raise ValueError('The target output_stride cannot be reached.')

      with variable_scope.variable_scope('unit_%d' % (i + 1), values=[net]):
        # If we have reached the target output_stride, then we need to employ
        # atrous convolution with stride=1 and multiply the atrous rate by the
        # current unit's stride for use in subsequent layers.
        print('ib:', ib)
        if ib == 3:
          # Force the multigrid rates (1, 2, 4) on block4's three units.
          rate = pow(2, i)
          print('rate:', rate)
          net = block.unit_fn(net, rate=rate, **dict(unit, stride=1))
        elif output_stride is not None and current_stride == output_stride:
          net = block.unit_fn(net, rate=rate, **dict(unit, stride=1))
          rate *= unit.get('stride', 1)
        else:
          net = block.unit_fn(net, rate=1, **unit)
          current_stride *= unit.get('stride', 1)
    net = utils.collect_named_outputs(outputs_collections, sc.name, net)
```
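The effect of that ib == 3 branch can be checked standalone, without TensorFlow. The sketch below is a simplified simulation of the modified loop, not the real slim code; the helper name and the block encoding (each block listed as its per-unit strides, with the block stride sitting in the last unit, as resnet_v2_block builds them) are assumptions for illustration:

```python
def block4_rates(blocks, output_stride):
    """Simulate the modified loop and return the atrous rate assigned to
    each unit of the final block (ib == 3)."""
    current_stride, rate = 1, 1
    last_block_rates = []
    for ib, unit_strides in enumerate(blocks):
        for i, stride in enumerate(unit_strides):
            if ib == 3:
                # The forced-multigrid branch: rate = 2**i -> 1, 2, 4.
                last_block_rates.append(2 ** i)
            elif output_stride is not None and current_stride == output_stride:
                rate *= stride
            else:
                current_stride *= stride
    return last_block_rates

# resnet_v2_101-style blocks: 3, 4, 23, 3 units; stride sits in the last unit.
blocks = [
    [1, 1, 2],
    [1, 1, 1, 2],
    [1] * 22 + [2],
    [1, 1, 1],
]
# output_stride reaches this loop already divided by 4 (16 -> 4).
print(block4_rates(blocks, output_stride=4))  # [1, 2, 4]
```

Because the ib == 3 branch ignores current_stride entirely, block4 gets rates (1, 2, 4) regardless of whether the requested output stride was 16 or 8.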

