Stuck on an issue?

Lightrun Answers was designed to reduce the constant googling that comes with debugging third-party libraries. It collects links to all the places you might be looking at while hunting down a tough bug.

And, if you’re still stuck at the end, we’re happy to hop on a call to see how we can help out.

Checkpoints stop saving/checkpoint questions

See original GitHub issue

Hi, I was curious about how checkpoints work. I think I have an idea of what's going on, but some clarification would be nice.

When training my model (85 training and 10 testing images), checkpoints stop being written after a certain point: I got them around epochs 3 or 4 (and another at 32). I'm curious why this happens. I'm currently at roughly 200 epochs and no additional checkpoints have been saved.

Some clarification on the checkpoint names would also be useful. The files are named mask_rcnn_model.{epoch number}-{value}.h5. What is {value}?

Thanks!

Issue Analytics

  • State: closed
  • Created 2 years ago
  • Comments: 5 (4 by maintainers)

Top GitHub Comments

3 reactions
hobbitsyfeet commented on May 8, 2021

Thank you @ayoolaolafenwa and @khanfarhan10; save_best_only = True, monitor = "val_loss" explains everything.
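The behavior discussed above can be sketched in plain Python. This is an illustrative reimplementation of the save-best-only decision rule used by Keras' ModelCheckpoint callback, not PixelLib's actual code; the function name and filename pattern are assumptions for the example.

```python
def checkpoints_written(val_losses,
                        pattern="mask_rcnn_model.{epoch:03d}-{val_loss:.6f}.h5"):
    """Return the filenames that would be saved with
    save_best_only=True, monitor='val_loss' (lower is better)."""
    best = float("inf")
    saved = []
    for epoch, vl in enumerate(val_losses, start=1):
        if vl < best:  # a checkpoint is written only when val_loss improves
            best = vl
            saved.append(pattern.format(epoch=epoch, val_loss=vl))
    return saved

# Feeding in the first few val_loss values from the log below:
# improvements happen at epochs 1, 3, 5, and 6, so only those
# epochs produce files. Later epochs that don't beat the best
# val_loss so far write nothing, which is why checkpoints can
# stop appearing even though training continues.
print(checkpoints_written([1.6797, 1.6844, 1.5671, 1.5726, 1.5633, 1.4282]))
```

So a long stretch with no new checkpoint simply means the validation loss has not improved on its best value, not that saving is broken.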

3 reactions
khanfarhan10 commented on May 8, 2021

Ah, I was able to find it: in {epoch number}-{value}.h5, the value is the val_loss that you see reported while the model is training!

For example, you might see something like this while training:

Epoch 1/100
100/100 [==============================] - 262s 2s/step - batch: 49.5000 - size: 4.0000 - loss: 1.9837 - rpn_class_loss: 0.0468 - rpn_bbox_loss: 0.5846 - mrcnn_class_loss: 0.1386 - mrcnn_bbox_loss: 0.6123 - mrcnn_mask_loss: 0.6013 - val_loss: 1.6797 - val_rpn_class_loss: 0.0503 - val_rpn_bbox_loss: 0.4745 - val_mrcnn_class_loss: 0.1279 - val_mrcnn_bbox_loss: 0.5000 - val_mrcnn_mask_loss: 0.5269
Epoch 2/100
100/100 [==============================] - 142s 1s/step - batch: 49.5000 - size: 4.0000 - loss: 1.5059 - rpn_class_loss: 0.0346 - rpn_bbox_loss: 0.4181 - mrcnn_class_loss: 0.1050 - mrcnn_bbox_loss: 0.4751 - mrcnn_mask_loss: 0.4731 - val_loss: 1.6844 - val_rpn_class_loss: 0.0342 - val_rpn_bbox_loss: 0.5514 - val_mrcnn_class_loss: 0.1006 - val_mrcnn_bbox_loss: 0.4900 - val_mrcnn_mask_loss: 0.5082
Epoch 3/100
100/100 [==============================] - 149s 1s/step - batch: 49.5000 - size: 4.0000 - loss: 1.5412 - rpn_class_loss: 0.0369 - rpn_bbox_loss: 0.4741 - mrcnn_class_loss: 0.1001 - mrcnn_bbox_loss: 0.4548 - mrcnn_mask_loss: 0.4753 - val_loss: 1.5671 - val_rpn_class_loss: 0.0410 - val_rpn_bbox_loss: 0.5464 - val_mrcnn_class_loss: 0.0748 - val_mrcnn_bbox_loss: 0.4559 - val_mrcnn_mask_loss: 0.4490
Epoch 4/100
100/100 [==============================] - 143s 1s/step - batch: 49.5000 - size: 4.0000 - loss: 1.3866 - rpn_class_loss: 0.0311 - rpn_bbox_loss: 0.4055 - mrcnn_class_loss: 0.0932 - mrcnn_bbox_loss: 0.4168 - mrcnn_mask_loss: 0.4401 - val_loss: 1.5726 - val_rpn_class_loss: 0.0475 - val_rpn_bbox_loss: 0.4904 - val_mrcnn_class_loss: 0.1179 - val_mrcnn_bbox_loss: 0.4440 - val_mrcnn_mask_loss: 0.4727
Epoch 5/100
100/100 [==============================] - 145s 1s/step - batch: 49.5000 - size: 4.0000 - loss: 1.3539 - rpn_class_loss: 0.0262 - rpn_bbox_loss: 0.3966 - mrcnn_class_loss: 0.0943 - mrcnn_bbox_loss: 0.4104 - mrcnn_mask_loss: 0.4263 - val_loss: 1.5633 - val_rpn_class_loss: 0.0362 - val_rpn_bbox_loss: 0.4701 - val_mrcnn_class_loss: 0.1183 - val_mrcnn_bbox_loss: 0.4771 - val_mrcnn_mask_loss: 0.4615
Epoch 6/100
100/100 [==============================] - 148s 1s/step - batch: 49.5000 - size: 4.0000 - loss: 1.3324 - rpn_class_loss: 0.0271 - rpn_bbox_loss: 0.3885 - mrcnn_class_loss: 0.1029 - mrcnn_bbox_loss: 0.3906 - mrcnn_mask_loss: 0.4233 - val_loss: 1.4282 - val_rpn_class_loss: 0.0370 - val_rpn_bbox_loss: 0.4734 - val_mrcnn_class_loss: 0.0834 - val_mrcnn_bbox_loss: 0.4059 - val_mrcnn_mask_loss: 0.4285
Epoch 7/100
100/100 [==============================] - 142s 1s/step - batch: 49.5000 - size: 4.0000 - loss: 1.3536 - rpn_class_loss: 0.0334 - rpn_bbox_loss: 0.3842 - mrcnn_class_loss: 0.1045 - mrcnn_bbox_loss: 0.4039 - mrcnn_mask_loss: 0.4277 - val_loss: 1.4398 - val_rpn_class_loss: 0.0379 - val_rpn_bbox_loss: 0.4727 - val_mrcnn_class_loss: 0.0910 - val_mrcnn_bbox_loss: 0.4220 - val_mrcnn_mask_loss: 0.4161
Epoch 8/100
100/100 [==============================] - 143s 1s/step - batch: 49.5000 - size: 4.0000 - loss: 1.2516 - rpn_class_loss: 0.0252 - rpn_bbox_loss: 0.3669 - mrcnn_class_loss: 0.0872 - mrcnn_bbox_loss: 0.3614 - mrcnn_mask_loss: 0.4110 - val_loss: 1.5298 - val_rpn_class_loss: 0.0297 - val_rpn_bbox_loss: 0.4810 - val_mrcnn_class_loss: 0.0840 - val_mrcnn_bbox_loss: 0.4629 - val_mrcnn_mask_loss: 0.4721
Epoch 9/100
100/100 [==============================] - 145s 1s/step - batch: 49.5000 - size: 4.0000 - loss: 1.2158 - rpn_class_loss: 0.0225 - rpn_bbox_loss: 0.3552 - mrcnn_class_loss: 0.0920 - mrcnn_bbox_loss: 0.3452 - mrcnn_mask_loss: 0.4009 - val_loss: 1.4808 - val_rpn_class_loss: 0.0422 - val_rpn_bbox_loss: 0.4467 - val_mrcnn_class_loss: 0.1091 - val_mrcnn_bbox_loss: 0.4402 - val_mrcnn_mask_loss: 0.4426
Epoch 10/100
100/100 [==============================] - 142s 1s/step - batch: 49.5000 - size: 4.0000 - loss: 1.2483 - rpn_class_loss: 0.0232 - rpn_bbox_loss: 0.4124 - mrcnn_class_loss: 0.0806 - mrcnn_bbox_loss: 0.3359 - mrcnn_mask_loss: 0.3962 - val_loss: 1.6229 - val_rpn_class_loss: 0.0439 - val_rpn_bbox_loss: 0.5508 - val_mrcnn_class_loss: 0.1131 - val_mrcnn_bbox_loss: 0.4445 - val_mrcnn_mask_loss: 0.4708
Epoch 11/100
100/100 [==============================] - 143s 1s/step - batch: 49.5000 - size: 4.0000 - loss: 1.2082 - rpn_class_loss: 0.0262 - rpn_bbox_loss: 0.3547 - mrcnn_class_loss: 0.0872 - mrcnn_bbox_loss: 0.3381 - mrcnn_mask_loss: 0.4020 - val_loss: 1.4842 - val_rpn_class_loss: 0.0316 - val_rpn_bbox_loss: 0.4568 - val_mrcnn_class_loss: 0.1049 - val_mrcnn_bbox_loss: 0.4470 - val_mrcnn_mask_loss: 0.4438
Epoch 12/100
100/100 [==============================] - 148s 1s/step - batch: 49.5000 - size: 4.0000 - loss: 1.2189 - rpn_class_loss: 0.0257 - rpn_bbox_loss: 0.3599 - mrcnn_class_loss: 0.0864 - mrcnn_bbox_loss: 0.3403 - mrcnn_mask_loss: 0.4066 - val_loss: 1.3422 - val_rpn_class_loss: 0.0323 - val_rpn_bbox_loss: 0.4095 - val_mrcnn_class_loss: 0.0891 - val_mrcnn_bbox_loss: 0.3981 - val_mrcnn_mask_loss: 0.4133
Epoch 13/100
100/100 [==============================] - 142s 1s/step - batch: 49.5000 - size: 4.0000 - loss: 1.2425 - rpn_class_loss: 0.0244 - rpn_bbox_loss: 0.3723 - mrcnn_class_loss: 0.0881 - mrcnn_bbox_loss: 0.3488 - mrcnn_mask_loss: 0.4089 - val_loss: 1.4842 - val_rpn_class_loss: 0.0422 - val_rpn_bbox_loss: 0.5370 - val_mrcnn_class_loss: 0.0924 - val_mrcnn_bbox_loss: 0.4041 - val_mrcnn_mask_loss: 0.4086
Epoch 14/100
100/100 [==============================] - 143s 1s/step - batch: 49.5000 - size: 4.0000 - loss: 1.1848 - rpn_class_loss: 0.0233 - rpn_bbox_loss: 0.3493 - mrcnn_class_loss: 0.0871 - mrcnn_bbox_loss: 0.3391 - mrcnn_mask_loss: 0.3861 - val_loss: 1.4737 - val_rpn_class_loss: 0.0371 - val_rpn_bbox_loss: 0.4678 - val_mrcnn_class_loss: 0.1234 - val_mrcnn_bbox_loss: 0.4232 - val_mrcnn_mask_loss: 0.4222
Epoch 15/100
100/100 [==============================] - 145s 1s/step - batch: 49.5000 - size: 4.0000 - loss: 1.2337 - rpn_class_loss: 0.0271 - rpn_bbox_loss: 0.3792 - mrcnn_class_loss: 0.0943 - mrcnn_bbox_loss: 0.3390 - mrcnn_mask_loss: 0.3940 - val_loss: 1.4419 - val_rpn_class_loss: 0.0339 - val_rpn_bbox_loss: 0.4374 - val_mrcnn_class_loss: 0.0957 - val_mrcnn_bbox_loss: 0.4298 - val_mrcnn_mask_loss: 0.4451
Epoch 16/100
100/100 [==============================] - 142s 1s/step - batch: 49.5000 - size: 4.0000 - loss: 1.1188 - rpn_class_loss: 0.0225 - rpn_bbox_loss: 0.3289 - mrcnn_class_loss: 0.0786 - mrcnn_bbox_loss: 0.3092 - mrcnn_mask_loss: 0.3794 - val_loss: 1.4669 - val_rpn_class_loss: 0.0317 - val_rpn_bbox_loss: 0.4921 - val_mrcnn_class_loss: 0.0678 - val_mrcnn_bbox_loss: 0.4305 - val_mrcnn_mask_loss: 0.4449
Epoch 17/100
100/100 [==============================] - 143s 1s/step - batch: 49.5000 - size: 4.0000 - loss: 1.0634 - rpn_class_loss: 0.0204 - rpn_bbox_loss: 0.3044 - mrcnn_class_loss: 0.0760 - mrcnn_bbox_loss: 0.2921 - mrcnn_mask_loss: 0.3706 - val_loss: 1.7327 - val_rpn_class_loss: 0.0409 - val_rpn_bbox_loss: 0.6916 - val_mrcnn_class_loss: 0.0660 - val_mrcnn_bbox_loss: 0.4462 - val_mrcnn_mask_loss: 0.4880
Epoch 18/100
100/100 [==============================] - 144s 1s/step - batch: 49.5000 - size: 4.0000 - loss: 1.1480 - rpn_class_loss: 0.0208 - rpn_bbox_loss: 0.3440 - mrcnn_class_loss: 0.0744 - mrcnn_bbox_loss: 0.3176 - mrcnn_mask_loss: 0.3913 - val_loss: 1.3638 - val_rpn_class_loss: 0.0316 - val_rpn_bbox_loss: 0.4431 - val_mrcnn_class_loss: 0.0867 - val_mrcnn_bbox_loss: 0.3802 - val_mrcnn_mask_loss: 0.4222
Epoch 19/100
100/100 [==============================] - 142s 1s/step - batch: 49.5000 - size: 4.0000 - loss: 1.0744 - rpn_class_loss: 0.0205 - rpn_bbox_loss: 0.2965 - mrcnn_class_loss: 0.0765 - mrcnn_bbox_loss: 0.3001 - mrcnn_mask_loss: 0.3808 - val_loss: 1.5469 - val_rpn_class_loss: 0.0372 - val_rpn_bbox_loss: 0.6290 - val_mrcnn_class_loss: 0.0816 - val_mrcnn_bbox_loss: 0.3778 - val_mrcnn_mask_loss: 0.4213
Epoch 20/100
100/100 [==============================] - 143s 1s/step - batch: 49.5000 - size: 4.0000 - loss: 1.0991 - rpn_class_loss: 0.0211 - rpn_bbox_loss: 0.3087 - mrcnn_class_loss: 0.0763 - mrcnn_bbox_loss: 0.3046 - mrcnn_mask_loss: 0.3884 - val_loss: 1.4762 - val_rpn_class_loss: 0.0466 - val_rpn_bbox_loss: 0.4579 - val_mrcnn_class_loss: 0.1011 - val_mrcnn_bbox_loss: 0.4206 - val_mrcnn_mask_loss: 0.4500
Epoch 21/100
100/100 [==============================] - 144s 1s/step - batch: 49.5000 - size: 4.0000 - loss: 1.0291 - rpn_class_loss: 0.0184 - rpn_bbox_loss: 0.2935 - mrcnn_class_loss: 0.0747 - mrcnn_bbox_loss: 0.2844 - mrcnn_mask_loss: 0.3581 - val_loss: 1.5576 - val_rpn_class_loss: 0.0311 - val_rpn_bbox_loss: 0.5574 - val_mrcnn_class_loss: 0.0880 - val_mrcnn_bbox_loss: 0.4323 - val_mrcnn_mask_loss: 0.4487
Epoch 22/100
100/100 [==============================] - 142s 1s/step - batch: 49.5000 - size: 4.0000 - loss: 1.0309 - rpn_class_loss: 0.0186 - rpn_bbox_loss: 0.2870 - mrcnn_class_loss: 0.0706 - mrcnn_bbox_loss: 0.2827 - mrcnn_mask_loss: 0.3720 - val_loss: 1.4791 - val_rpn_class_loss: 0.0418 - val_rpn_bbox_loss: 0.5242 - val_mrcnn_class_loss: 0.0811 - val_mrcnn_bbox_loss: 0.3982 - val_mrcnn_mask_loss: 0.4338
Epoch 23/100
100/100 [==============================] - 143s 1s/step - batch: 49.5000 - size: 4.0000 - loss: 0.9465 - rpn_class_loss: 0.0146 - rpn_bbox_loss: 0.2650 - mrcnn_class_loss: 0.0670 - mrcnn_bbox_loss: 0.2551 - mrcnn_mask_loss: 0.3449 - val_loss: 1.5083 - val_rpn_class_loss: 0.0387 - val_rpn_bbox_loss: 0.5340 - val_mrcnn_class_loss: 0.0863 - val_mrcnn_bbox_loss: 0.3892 - val_mrcnn_mask_loss: 0.4602
Epoch 24/100
100/100 [==============================] - 145s 1s/step - batch: 49.5000 - size: 4.0000 - loss: 0.9769 - rpn_class_loss: 0.0173 - rpn_bbox_loss: 0.2784 - mrcnn_class_loss: 0.0643 - mrcnn_bbox_loss: 0.2657 - mrcnn_mask_loss: 0.3511 - val_loss: 1.5453 - val_rpn_class_loss: 0.0472 - val_rpn_bbox_loss: 0.5884 - val_mrcnn_class_loss: 0.0795 - val_mrcnn_bbox_loss: 0.4167 - val_mrcnn_mask_loss: 0.4135
Epoch 25/100
100/100 [==============================] - 142s 1s/step - batch: 49.5000 - size: 4.0000 - loss: 0.9396 - rpn_class_loss: 0.0163 - rpn_bbox_loss: 0.2633 - mrcnn_class_loss: 0.0615 - mrcnn_bbox_loss: 0.2560 - mrcnn_mask_loss: 0.3426 - val_loss: 1.4874 - val_rpn_class_loss: 0.0476 - val_rpn_bbox_loss: 0.4904 - val_mrcnn_class_loss: 0.1297 - val_mrcnn_bbox_loss: 0.3965 - val_mrcnn_mask_loss: 0.4232
Epoch 26/100
100/100 [==============================] - 143s 1s/step - batch: 49.5000 - size: 4.0000 - loss: 0.9716 - rpn_class_loss: 0.0166 - rpn_bbox_loss: 0.2744 - mrcnn_class_loss: 0.0667 - mrcnn_bbox_loss: 0.2639 - mrcnn_mask_loss: 0.3501 - val_loss: 1.5327 - val_rpn_class_loss: 0.0460 - val_rpn_bbox_loss: 0.5641 - val_mrcnn_class_loss: 0.1009 - val_mrcnn_bbox_loss: 0.4079 - val_mrcnn_mask_loss: 0.4137
Epoch 27/100
100/100 [==============================] - 144s 1s/step - batch: 49.5000 - size: 4.0000 - loss: 1.0123 - rpn_class_loss: 0.0184 - rpn_bbox_loss: 0.2841 - mrcnn_class_loss: 0.0828 - mrcnn_bbox_loss: 0.2707 - mrcnn_mask_loss: 0.3563 - val_loss: 1.4170 - val_rpn_class_loss: 0.0317 - val_rpn_bbox_loss: 0.4801 - val_mrcnn_class_loss: 0.0923 - val_mrcnn_bbox_loss: 0.3865 - val_mrcnn_mask_loss: 0.4265
Epoch 28/100
100/100 [==============================] - 142s 1s/step - batch: 49.5000 - size: 4.0000 - loss: 0.9857 - rpn_class_loss: 0.0171 - rpn_bbox_loss: 0.2943 - mrcnn_class_loss: 0.0661 - mrcnn_bbox_loss: 0.2567 - mrcnn_mask_loss: 0.3516 - val_loss: 1.6045 - val_rpn_class_loss: 0.0380 - val_rpn_bbox_loss: 0.6388 - val_mrcnn_class_loss: 0.0789 - val_mrcnn_bbox_loss: 0.4338 - val_mrcnn_mask_loss: 0.4150
Epoch 29/100
100/100 [==============================] - 142s 1s/step - batch: 49.5000 - size: 4.0000 - loss: 0.9259 - rpn_class_loss: 0.0164 - rpn_bbox_loss: 0.2457 - mrcnn_class_loss: 0.0704 - mrcnn_bbox_loss: 0.2472 - mrcnn_mask_loss: 0.3463 - val_loss: 1.5129 - val_rpn_class_loss: 0.0468 - val_rpn_bbox_loss: 0.4469 - val_mrcnn_class_loss: 0.1258 - val_mrcnn_bbox_loss: 0.4158 - val_mrcnn_mask_loss: 0.4775
Epoch 30/100
100/100 [==============================] - 144s 1s/step - batch: 49.5000 - size: 4.0000 - loss: 0.9057 - rpn_class_loss: 0.0159 - rpn_bbox_loss: 0.2516 - mrcnn_class_loss: 0.0610 - mrcnn_bbox_loss: 0.2383 - mrcnn_mask_loss: 0.3389 - val_loss: 1.4020 - val_rpn_class_loss: 0.0342 - val_rpn_bbox_loss: 0.5400 - val_mrcnn_class_loss: 0.0640 - val_mrcnn_bbox_loss: 0.3719 - val_mrcnn_mask_loss: 0.3918

In my case, the best validation loss is obtained at epoch 12, with a value of 1.342 (rounded off), hence my model is saved as mask_rcnn_model.012-1.342155.h5. Hope it helps! Cheers!
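The naming above is just a format string filled in with the epoch number and the monitored val_loss. The exact pattern below is an assumption about how the filename is constructed, shown to make the mapping concrete:

```python
# Hypothetical filename pattern: zero-padded 3-digit epoch,
# val_loss printed to six decimal places.
pattern = "mask_rcnn_model.{epoch:03d}-{val_loss:.6f}.h5"

# Epoch 12 with val_loss 1.342155 reproduces the filename
# from the comment above.
print(pattern.format(epoch=12, val_loss=1.342155))
# mask_rcnn_model.012-1.342155.h5
```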

Read more comments on GitHub >

Top Results From Across the Web

Disable checkpointing in Trainer - Hugging Face Forums
To disable checkpointing, what I currently do is set save_steps to some large ... Trainer option to disable saving DeepSpeed checkpoints.
Read more >
Is there a way to disable saving to checkpoints for Jupyter ...
You can uncheck Settings -> Autosave Documents to avoid the autosave file, but it always creates a .ipynb_checkpoints folder when you open a file, I can ...
Read more >
How to disable checkpoints? · Issue #10394 - GitHub
INFO:tensorflow:Saving checkpoints for 1 into C:\Users\home\AppData\Local\Temp\tmprit6vryq\model.ckpt. ... INFO:tensorflow:Loss for final step: ...
Read more >
Saving Checkpoints during Training - PyKEEN - Read the Docs
When saving checkpoints due to failure of the training loop there is no guarantee that all random states can be recovered correctly, which...
Read more >
A Guide To Using Checkpoints — Ray 2.2.0
The experiment-level checkpoint is saved by the driver. The frequency at which it is conducted is automatically adjusted so that at least 95%...
Read more >
