Checkpoint key error when finetuning
I have converted stylegan2-ffhq-config-f.pkl from the official repo to stylegan2-ffhq-config-f.pt using convert_weight.py, and then converted my dataset with prepare_data.py.
After that I run this command to finetune on my dataset:

    python train.py --finetune_loc 2 --ckpt stylegan2-ffhq-config-f.pt ./data_processed/

but I get this error:
    load model: stylegan2-ffhq-config-f.pt
    Traceback (most recent call last):
      File "train.py", line 439, in <module>
        generator.load_state_dict(ckpt["g"], strict=False)
    KeyError: 'g'
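(For reference, a quick way to see which entries the converted checkpoint actually contains is to print its top-level keys; a minimal sketch using the filename above:)

    import torch

    # Load the converted checkpoint on the CPU and list its top-level
    # entries; train.py expects to find a "g" key here.
    ckpt = torch.load("stylegan2-ffhq-config-f.pt", map_location="cpu")
    print(list(ckpt.keys()))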
Issue Analytics
- Created: 3 years ago
- Comments: 5 (3 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Hi, the flags --gen and --disc are needed for the conversion.
https://github.com/bryandlee/FreezeG/blob/3b84a47eb190fc21ba13d5e26460e73a4a2af00b/stylegan2/convert_weight.py#L205-L206
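For example, the conversion invocation would then look roughly like this (the --repo flag, which points at a checkout of the official TensorFlow StyleGAN2 repo, is taken from the upstream rosinality converter and is an assumption for this fork):

    # Re-run the conversion so the output contains "g" and "d" in addition to "g_ema".
    python convert_weight.py --repo ~/stylegan2 --gen --disc stylegan2-ffhq-config-f.pkl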
Also, if you want to finetune from the converted checkpoint, you might need to convert the optimizer parameters as well. Check out this issue in the original repo: https://github.com/rosinality/stylegan2-pytorch/issues/105
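If converting the optimizer state turns out to be awkward, one workaround sketch (assuming this fork's train.py restores the optimizers from ckpt["g_optim"] and ckpt["d_optim"] the same way the upstream training script does) is to guard those loads so a checkpoint without optimizer entries is still usable:

    # Hypothetical change inside train.py's checkpoint-loading branch:
    # restore the optimizer state only when the converted checkpoint
    # actually provides it; otherwise keep the freshly created Adam state.
    if "g_optim" in ckpt:
        g_optim.load_state_dict(ckpt["g_optim"])
    if "d_optim" in ckpt:
        d_optim.load_state_dict(ckpt["d_optim"])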
Hi, I haven’t managed to train the 1024 model either. I guess the PyTorch implementation of StyleGAN2 takes more GPU memory than the original one?