Why reshape the top blob in the RPN?
I don't understand why there is this line:
top[0].reshape(1, 5)
in this file.
According to what I understand, and according to the comment just before this line:
# rois blob: holds R regions of interest, each is a 5-tuple
# (n, x1, y1, x2, y2) specifying an image batch index n and a
# rectangle (x1, y1, x2, y2)
Are you sure this reshape is correct?
Issue Analytics
- Created 6 years ago
- Comments: 7
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
OK, I just saw the line you referred to.
This `top`, like the other `top`s defined in `setup()`, is used by Caffe to perform a check at the network initialization stage. Caffe will try to verify that the dimensions of all the blobs match. Imagine that Caffe creates dummy data according to your top's shape and lets it flow through the whole network, checking that the dimension of each layer's output is valid for all the subsequent layers. Therefore, you don't need to set the `top` to (N, 5), because it's the shape that matters rather than the data inside. And of course, you cannot set the `top` to just any shape either; the wrong one would fail when Caffe performs the initialization. Hope this helps 😄 (see the sketch below)
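A condensed sketch of the `setup()` being discussed, paraphrased from py-faster-rcnn's proposal layer (lib/rpn/proposal_layer.py) rather than quoted verbatim:

```python
import caffe

class ProposalLayer(caffe.Layer):
    """Turns RPN outputs into object proposals (sketch, not the full layer)."""

    def setup(self, bottom, top):
        # rois blob: holds R regions of interest, each is a 5-tuple
        # (n, x1, y1, x2, y2) specifying an image batch index n and a
        # rectangle (x1, y1, x2, y2).
        # (1, 5) is only a placeholder shape for Caffe's init-time shape check;
        # the real number of rows is set later, in forward().
        top[0].reshape(1, 5)

        # scores blob: holds the scores for the R regions (optional second top)
        if len(top) > 1:
            top[1].reshape(1, 1, 1, 1)
```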
Then in the `forward()` function, which runs once Caffe has your real data flowing around, you can spit out a `top` with a dynamic data shape, say (2000, 5), as this proposal layer normally does.
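A minimal sketch of that dynamic reshape in `forward()`; the proposal computation itself is omitted, and the names here are illustrative rather than taken from the original layer:

```python
import numpy as np

def forward(self, bottom, top):
    # In the real layer, `proposals` (R x 4) is computed from the RPN class
    # scores, bbox deltas, and the anchors; here a dummy array stands in for
    # it so the reshape pattern is visible.
    proposals = np.zeros((2000, 4), dtype=np.float32)

    # Prepend the image batch index (0, since the RPN runs with one image per
    # batch) to form the (R, 5) rois blob described by the comment in setup().
    batch_inds = np.zeros((proposals.shape[0], 1), dtype=np.float32)
    blob = np.hstack((batch_inds, proposals))

    # Reshape the top blob to the actual number of proposals, then copy the data.
    top[0].reshape(*blob.shape)
    top[0].data[...] = blob
```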