
When I changed the warmup steps from 300 to 30, an error occurred.

See original GitHub issue
Traceback (most recent call last):
  File "tools/train.py", line 92, in <module>
    main(args)
  File "tools/train.py", line 87, in main
    trainer.run(train_dataloader, val_dataloader, evaluator)
  File "/zzz/prj/nanodet/nanodet/trainer/trainer.py", line 123, in run
    self.warm_up(train_loader)
  File "/zzz/prj/nanodet/nanodet/trainer/trainer.py", line 191, in warm_up
    output, loss, loss_stats = self.run_step(model, batch)
  File "/zzz/prj/nanodet/nanodet/trainer/trainer.py", line 57, in run_step
    loss.backward()
  File "/usr/local/lib/python3.6/dist-packages/torch/tensor.py", line 221, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py", line 132, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
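
For context, this RuntimeError is raised whenever backward() is called on a result that is not attached to the autograd graph. A minimal, self-contained reproduction (plain PyTorch, not taken from nanodet) looks like this:

    import torch

    # A loss built as a plain constant has requires_grad=False and no grad_fn,
    # so autograd has nothing to backpropagate through.
    loss = torch.tensor(0.0)
    try:
        loss.backward()
    except RuntimeError as err:
        print(err)  # element 0 of tensors does not require grad and does not have a grad_fn

    # Marking the tensor as requiring gradients attaches it to autograd,
    # and backward() succeeds (loss.grad is simply filled with 1.0).
    loss = torch.tensor(0.0, requires_grad=True)
    loss.backward()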

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 5 (2 by maintainers)

Top GitHub Comments

1 reaction
wangyin commented, Nov 24, 2020

I ran into a similar issue and solved it by adding "requires_grad=True" in these three lines: https://github.com/RangiLyu/nanodet/blob/5a7a50108d4c6815fc4bbcc382d08c8679143449/nanodet/model/head/gfl_head.py#L306-L308
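
For reference, a rough sketch of the kind of change described above. The variable names below (loss_qfl, loss_bbox, loss_dfl) are assumptions rather than the exact nanodet code; the point is that the zero losses returned when a batch has no positive samples must still carry requires_grad=True so that loss.backward() in the trainer has a graph to traverse:

    import torch

    # Hypothetical sketch of the workaround (check the linked gfl_head.py lines
    # for the real code): build the "no positive samples" zero losses as tensors
    # that require gradients instead of plain constants.
    loss_qfl = torch.tensor(0.0, requires_grad=True)
    loss_bbox = torch.tensor(0.0, requires_grad=True)
    loss_dfl = torch.tensor(0.0, requires_grad=True)
    loss = loss_qfl + loss_bbox + loss_dfl
    loss.backward()  # no longer raises the RuntimeError above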

0 reactions
RangiLyu commented, Nov 25, 2020

Thanks. This bug has been fixed in commit a444d72.

Read more comments on GitHub.

Top Results From Across the Web

  • In the context of Deep Learning, what is training warmup steps
    This usually means that you use a very low learning rate for a set number of training steps (warmup steps). After your warmup... (the sketch after this list illustrates the idea)
  • Training Tips for the Transformer Model
    It is advisable to establish the largest possible batch size before starting the main and long training. 4.6. Learning Rate and Warmup Steps...
  • Sylius: "cache:clear" timeout - Stack Overflow
    The problem came from the cache:clear's warmup (probably because sylius has ... Cache will then be generated on your first visit to the...
  • Mill - Spindle - Programs (Run-In, Warm-Up, Break-In)
    Every Haas machine includes the warm-up program. Use this program if the spindle was not in operation for more than (4) days or...
  • Set the default instance warmup for an Auto Scaling group
    If instances are still warming up and the group scales out again, the instances are counted as part of the desired capacity for...
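
To make the warmup idea in the first result concrete, here is a minimal, generic sketch of linear learning-rate warmup in PyTorch. It is not nanodet's trainer; the model, base_lr, and warmup_steps values are placeholders:

    import torch

    model = torch.nn.Linear(10, 1)                      # placeholder model
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    base_lr = 0.1
    warmup_steps = 300                                   # the value the issue changed to 30

    for step in range(1, 501):
        # Ramp the learning rate linearly from ~0 to base_lr over the warmup steps,
        # then hold it at base_lr (a real schedule would usually decay it afterwards).
        lr = base_lr * min(step / warmup_steps, 1.0)
        for group in optimizer.param_groups:
            group["lr"] = lr
        # ... forward pass, loss.backward(), optimizer.step() would go here ...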
