(Major issue) Pixel integrity loss across all formats for both save and load
Description
I have a single-channel linear grayscale image containing very low-octave (smooth) Perlin noise as a float[] (range 0…65535), which is pushed into ImageMagick using SetPixelsUnsafe().SetPixels().
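For context, a minimal sketch of roughly how such data can be pushed into a MagickImage, assuming the Q16-HDRI build (quantum type float); the dimensions, the sine-based stand-in data, and the LinearGray assignment are my own illustration, not the exact code:

```csharp
using ImageMagick;

// Illustrative dimensions and stand-in data; the real data is Perlin noise.
const int width = 512;
const int height = 512;
float[] noise = new float[width * height];
for (var i = 0; i < noise.Length; i++)
    noise[i] = 65535f * (0.5f + 0.5f * MathF.Sin(i * 0.0005f)); // smooth values in 0…65535

using var image = new MagickImage(MagickColors.Black, width, height);
image.ColorSpace = ColorSpace.LinearGray; // reduce to a single gray channel

using (var pixels = image.GetPixelsUnsafe())
{
    // On the Q16-HDRI build the quantum type is float, so the values go in as-is.
    pixels.SetPixels(noise);
}

image.Write("perlin.exr");
```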
For the purposes of illustrating the gentle gradients that are not perceptible to the human eye, I've shown the data as 3D heightfields.
When the image is saved, it shows loss of integrity in the form of banding and moiré patterns on the smooth structure. It happens regardless of format (PNG, TIFF, and even OpenEXR). PNG and the like can lose precision, so that's understandable - even PNG64, maybe even TIFF - but EXR is meant for high-precision storage of floats.
I suspected that my source data must be bad, so I saved the float[] to a binary file and loaded it in a 3rd-party application. The image looked clean. I saved an OpenEXR of the same data from that application, and it looked clean too. Then I loaded that EXR into ImageMagick, and it had banding artifacts.
Original data (visualized in 3rd party application)
Saved by Magick.NET
Recursive integrity loss
But it gets worse: each load/save pass over the same image degrades its integrity further.
This image shows the clean EXR loaded into Magick.NET; banding appears immediately.
When I then save that image to EXR via Magick.NET and load it back, the banding has become more intense.
To double-check, I loaded the data back into the 3rd-party visualizer, and the banding has indeed intensified.
Problem is with File I/O, not with Magick ‘itself’
To rule this out, I loaded the clean float[] data into ImageMagick, then read the pixel data back out as a float[] and took it to the 3rd-party application for confirmation; it was clean. This leads me to believe that the floats are being quantized, truncated, or otherwise losing integrity while saving and loading. Directly pushed pixels that were generated on the fly retain 100% integrity.
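A minimal sketch of that in-memory check, reusing the image and noise array from the sketch above; ToArray() on the pixel collection is assumed to hand back the raw quantum (float) values:

```csharp
// In-memory round trip: push the floats in, read them straight back out,
// and compare element by element without touching the disk.
using (var pixels = image.GetPixelsUnsafe())
{
    pixels.SetPixels(noise);
    var readBack = pixels.ToArray();

    for (var i = 0; i < noise.Length; i++)
    {
        if (noise[i] != readBack[i])
            Console.WriteLine($"Mismatch at {i}: {noise[i]} vs {readBack[i]}");
    }
}
```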
I checked with both multi-channel and single-channel images. Problem persists in both. It also is the same whether using scRGB or LinearGray color spaces.
Steps to Reproduce
- Load any linear grayscale image or float[] data with clean, smooth shapes, and then save it to any format with Magick.NET.
- Load a clean EXR from elsewhere, save it via Magick.NET, and load it back.
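A minimal sketch of the second step (the output file name is arbitrary; Clean_Perlin.exr is the test file mentioned later in the thread):

```csharp
using ImageMagick;

// Round-trip an EXR produced elsewhere through Magick.NET and reload it.
using (var original = new MagickImage("Clean_Perlin.exr"))
{
    original.Write("roundtrip.exr");
}

using (var reloaded = new MagickImage("roundtrip.exr"))
{
    // Visual inspection (or a pixel diff against the source data) shows the banding.
}
```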
Dozens (if not a hundred+) of my users have diligently done tests and confirmed this issue.
System Configuration
- Magick.NET version: Magick.NET-Q16-HDRI-OpenMP-x64 7.14.3.0
- Environment (Operating system, version and so on): Windows 10
- Additional information: Has existed for at least the last ~6 releases
Issues https://github.com/dlemstra/Magick.NET/issues/376 and https://github.com/dlemstra/Magick.NET/issues/479 MAY stem from this issue.
Top GitHub Comments
When a 32-bit EXR file is loaded, the values are converted from 32-bit floats to 16-bit floats and then back to 32-bit floats inside ImageMagick. When the file is saved, the 32-bit values are converted to 16-bit again and written to the file. When that file is loaded, those 16-bit values become 32-bit values in ImageMagick once more, and 16-bit again when the image is saved to EXR. I think we keep getting some information loss when those conversions happen.
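To illustrate the kind of quantization that conversion implies (this is my own sketch, not ImageMagick's code; it uses System.Half, which requires .NET 5+):

```csharp
// Squeezing a 32-bit float through a 16-bit half keeps only ~11 bits of mantissa.
float original = 12345.678f;
float roundTripped = (float)(Half)original;

Console.WriteLine(original);      // 12345.678
Console.WriteLine(roundTripped);  // 12344 (nearest value representable as a half)
```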
I will not rewrite the EXR encoder and decoder at this moment; maybe I will do that in the future. The fix for setting the depth of a MagickImage will be included in the next release.
The pixel cache of MagickImage uses 32-bit floats for the Q16-HDRI build, so you are correct that the loss of integrity only happens when we pass the data to the encoders inside ImageMagick.
Tonight I focused mostly on EXR, and it turns out that we can only read and write the image with half floats (16-bit floats) instead of 32-bit floats. We are using a C API that only provides us with an RGBA version of the file. But with your test file (Clean_Perlin.exr) we would only need to read the R channel, because that is the only channel the image contains. There was an issue about reading 32-bit files in the openexr repository (https://github.com/openexr/openexr/issues/237), but it looks like they don't want to add support for that at this moment. That issue contains a link to a document that describes how we could change our code to read specific channels instead. But that would mean a rewrite of our EXR coder, and that will not be a simple task.
And for the TIFF coder you will need to set the bit depth to 32 bits to make sure you are writing floats to the file. But the Magick.Native library currently does not allow you to specify a depth higher than the quantum depth, so you cannot do that right now; I will need to make a patch in the Magick.Native library to make this possible and then publish a new release of Magick.NET. And I think you don't need to use MagickFormat.Tiff64, because that is for writing BigTIFF files. But I am not sure about that.
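For reference, a hedged sketch of what a 32-bit float TIFF write could look like once that Magick.Native patch lands; the quantum:format define is ImageMagick's generic option for requesting floating-point samples, and whether the plain SetDefine overload is the right call here is an assumption on my part:

```csharp
// Assumes the future fix that allows Depth to exceed the quantum depth (16 on Q16-HDRI).
image.Depth = 32;                                             // 32 bits per sample
image.Settings.SetDefine("quantum:format", "floating-point"); // request float samples
image.Write("heightfield.tif");
```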