Is the use of `torch.manual_seed` in example training code correct?
Describe the bug
This is a minor thing, but I think this should be `torch.Generator().manual_seed(0)`. In my understanding, if `torch.manual_seed` is called, it sets the seed globally and could cause unexpected side effects. I think it is better not to change the global seed in the training loop.
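For illustration, a minimal sketch of the difference (tensor shapes and variable names here are mine, not taken from the training script):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Global seeding: reseeds the process-wide default RNG, so it also changes
# unrelated randomness such as DataLoader shuffling and dropout.
torch.manual_seed(0)

# Local generator: the seed only affects calls that receive the generator
# explicitly; the global RNG state is left untouched.
generator = torch.Generator().manual_seed(0)
noise = torch.randn(4, 3, 64, 64, generator=generator)

# A DataLoader can likewise take its own generator for shuffling:
dataset = TensorDataset(torch.arange(100, dtype=torch.float32))
loader = DataLoader(dataset, batch_size=10, shuffle=True,
                    generator=torch.Generator().manual_seed(0))
```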
Reproduction
No response
Logs
No response
System Info
diffusers==0.1.3 (current `main` branch `92b6dbba1a` too)
Top GitHub Comments
Hello @patrickvonplaten! It seems that `torch.manual_seed(0)` is still called to set the generator in the unconditional training script. I noticed this when tweaking the script to be able to resume a training process from a checkpoint: my shuffled `DataLoader` produced the same mini-batches in the same order from one training run to another.

Related to this issue, pipelines do not pass the generator to the step function (see: https://github.com/huggingface/diffusers/blob/bfe37f31592a8fa4780833bf4e7fbe18fa9f866c/src/diffusers/pipelines/ddpm/pipeline_ddpm.py#L61), resulting in different evaluation outputs when not resetting the default generator. While the initial noise will use the passed-in generator, subsequent noise added in the step function will not.
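A minimal sketch of the suggested fix, assuming a diffusers version in which `DDPMScheduler.step` accepts a `generator` keyword (the tiny `UNet2DModel` config is illustrative only, not the pipeline's actual model): thread one `Generator` through both the initial noise and every `step()` call, so evaluation is reproducible without touching the global RNG.

```python
import torch
from diffusers import DDPMScheduler, UNet2DModel

# Illustrative model config, kept small so the sketch runs quickly.
unet = UNet2DModel(
    sample_size=32,
    in_channels=3,
    out_channels=3,
    block_out_channels=(32, 64),
    down_block_types=("DownBlock2D", "DownBlock2D"),
    up_block_types=("UpBlock2D", "UpBlock2D"),
)
scheduler = DDPMScheduler(num_train_timesteps=1000)
scheduler.set_timesteps(50)

generator = torch.Generator().manual_seed(0)
image = torch.randn(1, 3, 32, 32, generator=generator)  # initial noise

for t in scheduler.timesteps:
    with torch.no_grad():
        model_output = unet(image, t).sample
    # Passing the generator here is the part the pipeline omits; without it,
    # the noise added inside step() comes from the global default RNG.
    image = scheduler.step(model_output, t, image,
                           generator=generator).prev_sample
```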