
resume the training for pytorch 1.0

See original GitHub issue

When I use the pytorch-1.0 branch, training on the Pascal VOC dataset works. But when I interrupt training and resume from the previously saved model, I get a RuntimeError. Has anyone had this problem? The error is as follows:

Loaded dataset voc_2007_trainval for training
Set proposal method: gt
Appending horizontally-flipped training examples…
voc_2007_trainval gt roidb loaded from /home/user02/notebook/faster-rcnn.pytorch-pytorch-1.0/data/cache/voc_2007_trainval_gt_roidb.pkl
done
Preparing training data…
done
before filtering, there are 10022 images…
after filtering, there are 10022 images…
10022 roidb entries
Loading pretrained weights from data/pretrained_model/vgg16_caffe.pth
loading checkpoint models/vgg16/pascal_voc/faster_rcnn_1_3_10021.pth
loaded checkpoint models/vgg16/pascal_voc/faster_rcnn_1_3_10021.pth
Traceback (most recent call last):
  File "trainval_net.py", line 355, in <module>
    optimizer.step()
  File "/usr/local/lib/python3.5/dist-packages/torch/optim/sgd.py", line 101, in step
    buf.mul_(momentum).add_(1 - dampening, d_p)
RuntimeError: The size of tensor a (512) must match the size of tensor b (18) at non-singleton dimension 0
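For context, this kind of mismatch typically appears when the optimizer's saved momentum buffers get mapped onto a different set (or ordering) of parameters than the one the new optimizer was built over. Below is a minimal, generic sketch of a PyTorch save/resume pattern; it is not the repo's trainval_net.py, and the model, file name, and checkpoint keys are placeholders. The point it illustrates is that the model and optimizer must be reconstructed over the same parameters, in the same order, before load_state_dict is called on either.

import torch
import torchvision

# Build the model and optimizer exactly as in the original run
# (placeholder model; the issue's script builds a Faster R-CNN with a VGG16 backbone).
model = torchvision.models.vgg16()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

# --- saving a resumable checkpoint ---
torch.save({
    "epoch": 3,
    "model": model.state_dict(),
    "optimizer": optimizer.state_dict(),
}, "checkpoint.pth")

# --- resuming ---
# The optimizer above was created over the same parameter list, in the same order,
# as the one that produced the checkpoint, so the saved momentum buffers line up
# with the parameters they belong to. If the parameter list differs, SGD ends up
# multiplying a buffer saved for one tensor into a differently-shaped tensor,
# which is the "size of tensor a (512) must match ... (18)" error in the traceback.
checkpoint = torch.load("checkpoint.pth")
model.load_state_dict(checkpoint["model"])
optimizer.load_state_dict(checkpoint["optimizer"])
start_epoch = checkpoint["epoch"] + 1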

Issue Analytics

  • State: open
  • Created 4 years ago
  • Comments: 5

Top GitHub Comments

3 reactions
AlexanderHustinx commented, Apr 17, 2019

0 reactions
AlexanderHustinx commented, Apr 17, 2019

Great, happy to see it worked for you!

I'm using the PyTorch-1.0 branch as well. There is an inconsistency when using the listed pretrained models: it appears something changed slightly between PyTorch 0.4.0 and PyTorch 1.0 (I'm not sure exactly what, but I believe it is discussed in one of the issues in this repo). When I train the model myself the results are fairly consistent, though I do see a slight difference in performance (0.5–2.0 mAP).
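If anyone wants to check whether a saved optimizer state actually lines up with the parameters the new optimizer was built over, a quick shape comparison can help. This is a hypothetical helper, not part of the repo; it assumes a single param group created directly from model.parameters() and that the checkpoint stores optimizer.state_dict() under an "optimizer" key, both of which may differ in trainval_net.py.

import torch

def check_momentum_shapes(model, checkpoint_path, key="optimizer"):
    # Load the checkpoint on CPU and pull out the saved per-parameter optimizer state.
    state = torch.load(checkpoint_path, map_location="cpu")[key]["state"]
    # optimizer.state_dict() keys the per-parameter state by position in the param
    # groups; with a single group built from model.parameters(), that matches
    # the order of enumerate(model.parameters()).
    for idx, p in enumerate(model.parameters()):
        buf = state.get(idx, {}).get("momentum_buffer")
        if buf is not None and tuple(buf.shape) != tuple(p.shape):
            print("param {}: saved buffer {} vs parameter {}".format(
                idx, tuple(buf.shape), tuple(p.shape)))

Any parameter this flags is one whose saved momentum buffer would be applied to a differently-shaped tensor, which is exactly what the traceback above reports for shapes 512 and 18.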


Top Results From Across the Web

Resuming from model checkpoints produces different training ...
Hello, for the last 2 days I am trying to solve issue when resuming training from model checkpoint. Problem is that the training...

Resuming should allow to differentiate what to resume (steps ...
Currently it is possible to either resume only the full training state (epoch/global steps / optimizer / scheduler options / and weights), ...

PyTorch Lightning 1.0: From 0–600k - Medium
This makes sure you can resume training in case it was interrupted. You can customize the checkpointing behavior to monitor any quantity of...

Saving and Loading Models — PyTorch Tutorials 1.0.0 ...
When saving a general checkpoint, to be used for either inference or resuming training, you must save more than just the model's state_dict....

Trainer — PyTorch Lightning 1.8.5.post0 ... - Read the Docs
default used by the Trainer trainer = Trainer(limit_train_batches=1.0) # run through only 25% of the training set each epoch trainer ...
