Improving toonification result
Hi,
I was wondering what we can do to improve the toonification result. I tested the Encoder Bootstrapping method using the following command:
python scripts/encoder_bootstrapping_inference.py --exp_dir=./toonify --model_1_checkpoint_path=./pretrained/restyle_psp_ffhq_encode.pt --model_2_checkpoint_path=./pretrained/restyle_psp_toonify.pt --data_path=./test/test_A --test_batch_size=1 --test_workers=1 --n_iters_per_batch=1
I get decent results, but I would like the output to look more like the input image.
Here is a sample of the result I am getting.
Issue Analytics
- Created: 2 years ago
- Comments: 12 (7 by maintainers)
Top GitHub Comments
I had a few minutes to play around with the code and I was able to make the changes. Since this is a quick hack, I’ll upload the file here so you can take a look. We were pretty much missing one line of code. I was curious how initializing with pSp would change the result, so I ran it on your input. Here is the result:
I hope these results are more in line with what you're looking for (the middle image is the toonified result). I'd say this looks better than what ReStyle came up with, so it's nice to see that a small change can lead to improvements on particular inputs. The result is similar to what pSp returned, but I think the results here are more colorful.
Here is the code: encoder_bootstrap_with_psp.txt
P.S. I am not particularly surprised by the results. As we mentioned in the paper, one step of pSp is typically better than one step of ReStyle. Therefore, pSp here seems to provide a better initialization than what we get with ReStyle’s FFHQ encoder. I’ll consider adding support for both models so people have more flexibility in the initialization.
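The bootstrapping flow described above (one step of pSp for initialization, then iterative refinement by the toonify encoder) can be sketched as below. This is a toy illustration only: `encoder_bootstrap` is a hypothetical helper, and the stub callables stand in for the real pSp/ReStyle networks; the names `model_1`, `model_2`, and `n_iters` merely mirror the script's flags.

```python
def encoder_bootstrap(img, model_1, model_2, n_iters=1):
    """Toy sketch of encoder bootstrapping: a single pass of model_1
    (e.g. a pSp FFHQ encoder) provides the initial latents, which
    model_2 (the toonify encoder) then refines for n_iters iterations."""
    latents = model_1(img)  # one pSp step: a stronger init than ReStyle's average latent
    for _ in range(n_iters):
        latents = model_2(img, latents)  # iterative refinement on the toonify side
    return latents


# Stub "models" operating on plain numbers, just to show the data flow.
init_encoder = lambda x: x + 1        # stands in for the pSp encoder
refine_encoder = lambda x, w: 2 * w   # stands in for the ReStyle toonify encoder
print(encoder_bootstrap(1, init_encoder, refine_encoder, n_iters=2))  # prints 8
```

Swapping `init_encoder` for a different first-stage model is the one-line kind of change discussed here: only the source of the initial latents differs, the refinement loop stays the same.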
Ok, this makes sense, because pSp uses an `input_nc` of 3 while ReStyle uses 6. You should play around with how you load net1 and net2 and try to match the parameters accordingly. I apologize, but I will need to come back to this at a later time. If you wish, you can continue playing with it, or wait a bit and hopefully I can come back to this soon.
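To make the channel mismatch concrete: pSp consumes the RGB image alone, while ReStyle concatenates the image with the current reconstruction (initially an average image) along the channel axis, hence `input_nc` of 3 versus 6. A minimal shape check with NumPy (the array shapes here are illustrative, not the repo's actual tensors):

```python
import numpy as np

img = np.zeros((1, 3, 256, 256))        # input image, NCHW layout
avg_image = np.zeros((1, 3, 256, 256))  # ReStyle's average / previous reconstruction

psp_input = img                                           # input_nc = 3
restyle_input = np.concatenate([img, avg_image], axis=1)  # input_nc = 3 + 3 = 6

print(psp_input.shape[1], restyle_input.shape[1])  # prints 3 6
```

So when loading the two networks, the first convolution of each encoder must match the channel count of the input you feed it, which is why the loading code for net1 and net2 can't simply be shared as-is.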