
Error during training (Assertion `input_val >= zero && input_val <= one` failed.)

See original GitHub issue

Thank you for your contribution. I have also run into a problem while using this project and would appreciate some suggestions. I am training yolox-tiny on my own VOC data with batch_size: 32, gpu_num: 2, and img_size: [224x224]. The error below occurs when training reaches roughly the 30th-40th epoch. I don't think it is a memory overflow, because my images are very small; when I trained yolox-s with a larger batch size and larger images, this problem did not appear:

/pytorch/aten/src/ATen/native/cuda/Loss.cu:111: operator(): block: [62,0,0], thread: [0,0,0] Assertion `input_val >= zero && input_val <= one` failed.
(the same assertion is repeated for threads [1,0,0] through [63,0,0] of block [62,0,0])
[W CUDAGuardImpl.h:112] Warning: CUDA warning: device-side assert triggered (function destroyEvent)
terminate called after throwing an instance of 'c10::CUDAError'
  what(): CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1.

Exception raised from create_event_internal at …/c10/cuda/CUDACachingAllocator.cpp:1055 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x42 (0x7f443a744a22 in /home/ailab/anaconda3/envs/yolox/lib/python3.7/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x10983 (0x7f443a9a5983 in /home/ailab/anaconda3/envs/yolox/lib/python3.7/site-packages/torch/lib/libc10_cuda.so)
frame #2: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0x1a7 (0x7f443a9a7027 in /home/ailab/anaconda3/envs/yolox/lib/python3.7/site-packages/torch/lib/libc10_cuda.so)
frame #3: c10::TensorImpl::release_resources() + 0x54 (0x7f443a72e5a4 in /home/ailab/anaconda3/envs/yolox/lib/python3.7/site-packages/torch/lib/libc10.so)
frame #4: std::vector<c10d::Reducer::Bucket, std::allocator<c10d::Reducer::Bucket> >::~vector() + 0x2f9 (0x7f44915f7199 in /home/ailab/anaconda3/envs/yolox/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
frame #5: c10d::Reducer::~Reducer() + 0x276 (0x7f44915edbc6 in /home/ailab/anaconda3/envs/yolox/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
frame #6: std::_Sp_counted_ptr<c10d::Reducer*, (__gnu_cxx::_Lock_policy)2>::_M_dispose() + 0x12 (0x7f449161d882 in /home/anaconda3/envs/yolox/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
frame #7: std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release() + 0x46 (0x7f4490d675c6 in /home/anaconda3/envs/yolox/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
frame #8: std::_Sp_counted_ptr<c10d::Logger*, (__gnu_cxx::_Lock_policy)2>::_M_dispose() + 0x1d (0x7f449162259d in /home/anaconda3/envs/yolox/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
frame #9: std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release() + 0x46 (0x7f4490d675c6 in /home/anaconda3/envs/yolox/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
frame #10: <unknown function> + 0xdaf07f (0x7f449162007f in /home/anaconda3/envs/yolox/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
frame #11: <unknown function> + 0x4ff188 (0x7f4490d70188 in /home/anaconda3/envs/yolox/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
frame #12: <unknown function> + 0x50048e (0x7f4490d7148e in /home/anaconda3/envs/yolox/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
frame #13: <unknown function> + 0xfc197 (0x557df8e1e197 in /home/anaconda3/envs/yolox/bin/python)
frame #14: <unknown function> + 0x1817b6 (0x557df8ea37b6 in /home/anaconda3/envs/yolox/bin/python)
frame #15: <unknown function> + 0xfc1

Traceback (most recent call last):
  File "train.py", line 135, in <module>
    args=(exp, args),
  File "/media/E/yolox/core/launch.py", line 95, in launch
    start_method=start_method,
  File "/home/anaconda3/envs/yolox/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 188, in start_processes
    while not context.join():
  File "/home/anaconda3/envs/yolox/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 136, in join
    signal_name=name
torch.multiprocessing.spawn.ProcessExitedException: process 1 terminated with signal SIGABRT
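To make the assertion itself concrete: F.binary_cross_entropy requires every prediction to lie in [0, 1], and a NaN (or any out-of-range value) coming out of the network violates that per-element check. The snippet below is a toy illustration, not code from this project; on CPU, recent PyTorch versions report the same condition as an ordinary RuntimeError instead of an asynchronous device-side assert.

import torch
import torch.nn.functional as F

# Toy tensors only: one "prediction" is NaN, which can never satisfy
# input_val >= zero && input_val <= one.
pred = torch.tensor([0.3, float("nan"), 0.8])
target = torch.tensor([0.0, 1.0, 1.0])

try:
    loss = F.binary_cross_entropy(pred, target)
    print("BCE computed:", loss)
except RuntimeError as err:
    print("BCE rejected the input:", err)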

Issue Analytics

  • State: closed
  • Created 2 years ago
  • Comments: 7

Top GitHub Comments

10 reactions
DacDinh147 commented on Oct 21, 2021

@cena001plus I got the same problem. I tried setting a lower learning rate to avoid it. I found out that this happens because the model outputs NaN values in the prediction head. You can print out the values of bbox_preds or obj_preds, or use a torch.isnan(x).sum().item() check, to see where the error comes from. I passed !export CUDA_LAUNCH_BLOCKING=1; python train.py ... to debug this. Hope this helps you. I do not know why, but the model is quite unstable; it should probably be normalized somewhere in the prediction head to keep it more stable. PS: this logged error tells you that the NaN values in the prediction head output cannot be used to calculate the BCE in this excerpt:

with torch.cuda.amp.autocast(enabled=False):
    # Combine class and objectness scores (both squashed through sigmoid),
    # broadcast against the num_gt ground-truth boxes.
    cls_preds_ = (
        cls_preds_.float().unsqueeze(0).repeat(num_gt, 1, 1).sigmoid_()
        * obj_preds_.float().unsqueeze(0).repeat(num_gt, 1, 1).sigmoid_()
    )
    # binary_cross_entropy requires inputs in [0, 1]; if the head emits NaN,
    # sigmoid_() propagates it and the CUDA assert in the log above fires here.
    pair_wise_cls_loss = F.binary_cross_entropy(
        cls_preds_.sqrt_(), gt_cls_per_image, reduction="none"
    ).sum(-1)
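A minimal diagnostic sketch along the lines described above. The helper name count_nans is hypothetical; the tensor names in the commented-out calls are taken from the excerpt and from YOLOX's head outputs, so adjust them to whatever you actually want to probe.

import torch

def count_nans(name, t):
    # Hypothetical helper: count NaN/Inf entries in a prediction tensor so you
    # can see which head output goes bad first.
    n_nan = torch.isnan(t).sum().item()
    n_inf = torch.isinf(t).sum().item()
    if n_nan or n_inf:
        print(f"{name}: {n_nan} NaN / {n_inf} Inf out of {t.numel()} values")
    return n_nan + n_inf

# Example: call it just before the binary_cross_entropy above, and run training
# with CUDA_LAUNCH_BLOCKING=1 so the failing kernel is reported synchronously:
#   count_nans("cls_preds_", cls_preds_)
#   count_nans("obj_preds_", obj_preds_)
#   count_nans("bbox_preds", bbox_preds)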
6 reactions
cena001plus commented on Oct 21, 2021

(quoting @DacDinh147's comment and code excerpt above)

I tried reducing the learning rate and the problem is solved. Thank you very much.
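For anyone landing here, a minimal sketch of what "reduce the learning rate" can look like in a custom YOLOX experiment file. It assumes the standard Exp base class from yolox.exp, where the per-image base rate is exposed as basic_lr_per_img (default around 0.01 / 64.0); the attribute names and the depth/width values below are assumptions to check against your own exp file.

from yolox.exp import Exp as BaseExp

class Exp(BaseExp):
    def __init__(self):
        super().__init__()
        # yolox-tiny-style scaling factors (assumed; copy from your own exp file).
        self.depth = 0.33
        self.width = 0.375
        # Lower the base learning rate: halving or quartering the default is a
        # common first step when the prediction head starts emitting NaNs.
        self.basic_lr_per_img = 0.0025 / 64.0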

Read more comments on GitHub >

Top Results From Across the Web

Assertion `input_val >= zero && input_val <= one` failed
Hi all, recently I changed the CPU and motherboard of my PC. But when I tried to run the training code, I encountered...
Read more >
CUDA error: device-side assert triggered on loss function ...
There might be two reasons for the error: as the log says, input_val is not in the range [0, 1]. So you should...
Read more >
