Train new dataset: zeros after conv3 in vgg16
I am trying to train the model with my own dataset. Sometimes I get this error:
File "train.py", line 127, in <module>
net(im_data, im_info, gt_boxes, gt_ishard, dontcare_areas)
File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "/data/code/faster_rcnn_pytorch/faster_rcnn/faster_rcnn.py", line 219, in forward
roi_data = self.proposal_target_layer(rois, gt_boxes, gt_ishard, dontcare_areas, self.n_classes)
File "/data/code/faster_rcnn_pytorch/faster_rcnn/faster_rcnn.py", line 287, in proposal_target_layer
proposal_target_layer_py(rpn_rois, gt_boxes, gt_ishard, dontcare_areas, num_classes)
File "/data/code/faster_rcnn_pytorch/faster_rcnn/rpn_msr/proposal_target_layer.py", line 66, in proposal_target_layer
np.hstack((zeros, np.vstack((gt_easyboxes[:, :-1], jittered_gt_boxes[:, :-1]))))))
File "/usr/local/lib/python2.7/dist-packages/numpy/core/shape_base.py", line 234, in vstack
return _nx.concatenate([atleast_2d(_m) for _m in tup], 0)
ValueError: all the input array dimensions except for the concatenation axis must match exactly
I traced the bug and figured out that the output after conv3 in faster_rcnn/vgg16.py is all zeros, so the feature map returned after forwarding through VGG16 is a zero array.
Do you have any clue why? Thank you.
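If anyone wants to pin down where the features collapse to zero, one option is to register forward hooks on the conv layers and warn when an output is identically zero. This is a minimal sketch, assuming the feature extractor is an ordinary torch.nn.Module (e.g., the VGG16 in faster_rcnn/vgg16.py); the attribute path in the usage note below is a guess and may differ in the actual repository.

import torch

def add_zero_activation_hooks(feature_extractor):
    # Register a forward hook on every Conv2d layer that prints a warning
    # whenever that layer's output is all zeros. Layer names come from
    # named_modules(), so the report shows which conv goes dead first.
    def make_hook(name):
        def hook(module, inputs, output):
            data = output.data  # works for old Variable and modern Tensor
            if float(data.abs().max()) == 0.0:
                print('all-zero output after layer: {}'.format(name))
        return hook

    for name, module in feature_extractor.named_modules():
        if isinstance(module, torch.nn.Conv2d):
            module.register_forward_hook(make_hook(name))

Calling something like add_zero_activation_hooks(net.rpn.features) before the training loop (the exact attribute path depends on how FasterRCNN is composed in faster_rcnn.py) should print the first conv layer whose output is all zeros; from there you can inspect that layer's weights and inputs for NaN/inf values.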
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
@abhiML I am refactoring the program, and it is still ongoing, so I have not gotten it working so far.
I had the issue described, and I now seem to be able to train without this error when using SGD (with Adam the loss becomes NaN). I would suggest you check the values in the gt_boxes of any image that causes this error. In my case, reading the XML files assigned some negative coordinates, which were then transformed into huge numbers. Also, the PASCAL VOC loader subtracts 1 from XMIN and YMIN, so if your bounding boxes start at 0 they become -1, which caused issues as well. I fixed this in my _load_AFLW_annotation function by taking the absolute value and skipping the subtraction when a value was equal to 0. This may help.
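For anyone applying the fix described in the comment above, here is a minimal sketch of that kind of annotation sanitization, written as a standalone helper; the function name and the clamping policy are illustrative assumptions (the element names follow the standard PASCAL VOC format), not the repository's actual loader.

import numpy as np
import xml.etree.ElementTree as ET

def load_voc_boxes(xml_path):
    # Parse a PASCAL VOC style annotation into an (N, 4) float32 array,
    # guarding against the two problems described above: negative
    # coordinates (which become huge numbers once cast to an unsigned
    # type) and the 1-based-to-0-based subtraction turning 0 into -1.
    tree = ET.parse(xml_path)
    objs = tree.findall('object')
    boxes = np.zeros((len(objs), 4), dtype=np.float32)

    for ix, obj in enumerate(objs):
        bbox = obj.find('bndbox')
        coords = [abs(float(bbox.find(tag).text))
                  for tag in ('xmin', 'ymin', 'xmax', 'ymax')]
        # VOC coordinates are 1-based, so loaders usually subtract 1;
        # clamp at 0 so a coordinate that is already 0 never becomes -1.
        boxes[ix, :] = [max(c - 1.0, 0.0) for c in coords]

    return boxes

It may also be worth filtering out any box where xmax <= xmin or ymax <= ymin after loading, since degenerate entries in gt_boxes can trigger similar downstream shape errors.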