
Artifacts compared to Lua version

See original GitHub issue

Coming back to this repo after a long break, interested in developing this further…

I was just comparing this implementation to the original Lua version and noticed something I hadn’t before. The two produce almost identical results, but the PyTorch version appears to produce very subtle artifacts.

The following is an example, using Hokusai as the style image: neural-style (Lua) is on the left, neural-style-pt on the right.

[Image: side-by-side comparison of the Hokusai-styled output, neural-style (Lua) vs. neural-style-pt]

Notice, in scattered places, high-frequency discolorations, often in almost checkerboard-like patterns. These do not appear in the Lua version. If you zoom in on a few parts of the neural-style-pt output, you can see them clearly. Notice the pink and green checkers.

[Images: three zoomed crops of the neural-style-pt output showing the pink/green checkerboard discolorations]

This happens consistently for any combination of content and style images, although for some style images the artifacts are more obvious. Sometimes there are obvious discolorations; other times they are smaller, giving the output an almost grainy appearance. The artifacts can be reduced by increasing -tv_weight, but at the expense of content/style reconstruction, and even then they remain visible.

I tried fixing it a few ways. Clamping the image between iterations (not just at the end) didn't help. I also tried playing with the TVLoss module, for example changing

self.loss = self.strength * (torch.sum(torch.abs(self.x_diff)) + torch.sum(torch.abs(self.y_diff)))

to an L2-loss, i.e.

self.loss = self.strength * (torch.sum(torch.pow(self.x_diff, 2)) + torch.sum(torch.pow(self.y_diff, 2)))

also did not get rid of the artifacts. (I tried this because my reading of the TV loss formula is that it uses a squared L2 penalty rather than absolute values, but I'm not sure this makes a big difference.)
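For reference, here is a minimal self-contained sketch of the module with both variants, assuming the diffs are computed the way neural-style-pt's TVLoss does; the mode switch is purely illustrative, not code from the repo:

import torch
import torch.nn as nn

class TVLoss(nn.Module):
    # Sketch of the TV regularizer under discussion. The diff computation
    # is assumed to match neural-style-pt; 'mode' is an illustrative switch.
    def __init__(self, strength, mode='l1'):
        super().__init__()
        self.strength = strength
        self.mode = mode

    def forward(self, input):
        # Differences between vertically and horizontally adjacent pixels.
        self.x_diff = input[:, :, 1:, :] - input[:, :, :-1, :]
        self.y_diff = input[:, :, :, 1:] - input[:, :, :, :-1]
        if self.mode == 'l1':
            # Current behavior: sum of absolute differences.
            self.loss = self.strength * (torch.sum(torch.abs(self.x_diff))
                                         + torch.sum(torch.abs(self.y_diff)))
        else:
            # The L2 variant tried above: sum of squared differences.
            self.loss = self.strength * (torch.sum(torch.pow(self.x_diff, 2))
                                         + torch.sum(torch.pow(self.y_diff, 2)))
        return input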

The artifact is very subtle, but I'm hoping to fix it, as I'd like to produce more print-quality images in the future, and multi-stage or multi-scale techniques built on top may amplify it. I wonder if you have any idea what might be causing this or what could potentially fix it.

Issue Analytics

  • State: open
  • Created: 3 years ago
  • Reactions: 2
  • Comments: 23 (12 by maintainers)

Top GitHub Comments

2 reactions
Sankyuubigan commented, May 25, 2021

Hi guys. I'm not very familiar with your code. Could you please provide a neural_style.py with the maxpool2d change or genekogan's TV optimization already applied? And how would I use this code?
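For reference, a minimal sketch of the pooling change being asked about: replacing VGG's max-pooling layers with average pooling, a common trick for smoothing style-transfer outputs. This assumes a torchvision-style nn.Sequential feature extractor; it is not the actual neural_style.py code, and if I recall correctly neural-style-pt already exposes this via a -pooling option.

import torch.nn as nn
from torchvision import models

def replace_max_with_avg(features):
    # Swap every MaxPool2d for an AvgPool2d with the same geometry.
    for i, layer in enumerate(features):
        if isinstance(layer, nn.MaxPool2d):
            features[i] = nn.AvgPool2d(kernel_size=layer.kernel_size,
                                       stride=layer.stride,
                                       padding=layer.padding)
    return features

# Example: apply to torchvision's VGG-19 feature extractor.
vgg = models.vgg19(weights=None).features  # pretrained=False on older torchvision
vgg = replace_max_with_avg(vgg)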

1 reaction
genekogan commented, Dec 11, 2020

One hacky idea that could help balance the tradeoff between checkerboard artifacts/high-frequency noise (which seem to shrink as TV regularization increases) and muddy regions (which shrink as it decreases) would be to modify the TVLoss by multiplying it element-wise with a saturation map (or something similar) of the image before summing it all together.

So instead of:

 self.loss = self.strength * (torch.sum(torch.abs(self.x_diff)) + torch.sum(torch.abs(self.y_diff)))

Something like:

xd = torch.mul(S, torch.abs(self.x_diff))
yd = torch.mul(S, torch.abs(self.y_diff))
self.loss = self.strength * (torch.sum(xd) + torch.sum(yd))

where S is some measure of the local saturation around each pixel (or rather desaturation, since you want to increase the TV penalty in the desaturated regions). For example, S(x, y) could be the inverse of the standard deviation over an m×n block centered at (x, y).
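Here is a rough, untested sketch of that idea, assuming the same x_diff/y_diff layout as neural-style-pt's TVLoss; the block-wise standard deviation, the epsilon, and the normalization are all illustrative choices, not anything from the repo:

import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightedTVLoss(nn.Module):
    # Hypothetical TV variant: scale the per-pixel penalty by a map S that
    # is large in flat (desaturated) regions and small in textured ones.
    def __init__(self, strength, block=5, eps=1e-6):
        super().__init__()
        self.strength = strength
        self.block = block  # odd window size for the local statistics
        self.eps = eps

    def forward(self, input):
        x_diff = input[:, :, 1:, :] - input[:, :, :-1, :]
        y_diff = input[:, :, :, 1:] - input[:, :, :, :-1]
        # Local standard deviation via E[x^2] - E[x]^2 over a block x block window.
        pad = self.block // 2
        mean = F.avg_pool2d(input, self.block, stride=1, padding=pad)
        mean_sq = F.avg_pool2d(input * input, self.block, stride=1, padding=pad)
        std = (mean_sq - mean * mean).clamp(min=0).sqrt()
        S = 1.0 / (std + self.eps)  # inverse std: large where the image is flat
        S = S / S.mean()            # keep the overall scale of strength unchanged
        # Crop S to match each diff's shape before weighting.
        xd = S[:, :, 1:, :] * torch.abs(x_diff)
        yd = S[:, :, :, 1:] * torch.abs(y_diff)
        self.loss = self.strength * (torch.sum(xd) + torch.sum(yd))
        return input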

Or, even simpler, use an L2 sum instead of L1, i.e. raise each element of torch.abs(self.x_diff) and torch.abs(self.y_diff) to some power, so that really big differences between neighboring pixels are penalized more.
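As a one-line sketch of that variation (p is an illustrative hyperparameter, not something from the repo; p=1 recovers the current loss):

# Hypothetical L^p TV term: p > 1 penalizes large jumps between
# neighboring pixels disproportionately more than many small ones.
p = 2.0
self.loss = self.strength * (torch.sum(torch.abs(self.x_diff) ** p)
                             + torch.sum(torch.abs(self.y_diff) ** p))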

I'm not sure there's much to gain from trying to optimize away TV noise, as the effect is pretty subtle, and maybe the artifacts aren't even my biggest problem anymore; this is just some untrained, half-baked food for thought.
