LosslessCompress routine is making some PNGs grayscale
Prerequisites
- I have written a descriptive issue title
- I have verified that I am using the latest version of Magick.NET
- I have searched open and closed issues to ensure it has not already been reported
I just saw there is a new version of Magick.NET. I am going to update to the latest version of Magick.NET-Q16-AnyCPU now and see if we get another repro.
Description
The LosslessCompress method is intermittently destroying a set of PNGs.
I took a look at the before and after for a specific image and noticed that the alpha channel is lost and the color goes from RGB to grayscale after optimization. Here’s a screen grab of the before-and-after comparison.
I ran the same compression code in a loop on my laptop hundreds of times and was unable to reproduce the error. Is there anything you can think of that may be causing this? I am at a loss.
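For context, the kind of check I was running in that loop looks roughly like this (a minimal sketch, not the exact code; the file name is just a placeholder for a local copy of an affected PNG):

```csharp
using System;
using ImageMagick;

// Minimal per-file check: compare color space and alpha before and after
// LosslessCompress. "auto-merge-2.png" is a placeholder path.
public static class SingleFileRepro
{
    public static void Main()
    {
        const string path = "auto-merge-2.png";

        using (var image = new MagickImage(path))
            Console.WriteLine($"before: ColorSpace={image.ColorSpace}, HasAlpha={image.HasAlpha}");

        var optimizer = new ImageOptimizer();
        optimizer.LosslessCompress(path);

        using (var image = new MagickImage(path))
            Console.WriteLine($"after:  ColorSpace={image.ColorSpace}, HasAlpha={image.HasAlpha}");
    }
}
```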
Here are the known repros of the issue:
- https://github.com/nextcloud/nextcloud.com/pull/932
- https://github.com/nextcloud/nextcloud.com/pull/920
- https://github.com/dependabot/dependabot.github.io/pull/102
- https://github.com/spences10/blog.scottspence.me/pull/614
Interestingly, for the dependabot case, /images/blog/auto-merge-2.png went from 97.98 KB -> 81.75 KB in https://github.com/dependabot/dependabot.github.io/pull/101, and then from 81.75 KB -> 10.03 KB in https://github.com/dependabot/dependabot.github.io/pull/102. This one had no alpha before or after, but the color space went from RGB and LCD to Gray.
Steps to Reproduce
Run ImgBot’s call to LosslessCompress(); see https://github.com/dabutvin/ImgBot/blob/master/CompressImagesFunction/CompressImages.cs#L120-L154. A simplified sketch of that block is shown below.
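The linked block essentially enumerates the cloned repo’s images and compresses each one in place. This is only an approximation, not the actual ImgBot code; the directory name and the *.png filter are assumptions for illustration:

```csharp
using System;
using System.IO;
using ImageMagick;

// Simplified approximation of the linked ImgBot compression step (not the actual code).
// "cloned-repo" and the *.png filter are placeholder assumptions.
public static class CompressAll
{
    public static void Main()
    {
        var optimizer = new ImageOptimizer { OptimalCompression = true };

        foreach (var path in Directory.EnumerateFiles("cloned-repo", "*.png", SearchOption.AllDirectories))
        {
            var before = new FileInfo(path).Length;
            optimizer.LosslessCompress(path);
            var after = new FileInfo(path).Length;
            Console.WriteLine($"{path}: {before} -> {after} bytes");
        }
    }
}
```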
System Configuration
- Magick.NET version: 7.5.0.1
- Environment (Operating system, version and so on):
  - os_name: Windows Server 2016
  - os_build_lab_ex: 14393.2312.amd64fre.rs1_release(bryant).180609-2043
  - cores: 2
- Additional information: This is an Azure functions environment running in consumption mode
Top GitHub Comments
I wonder what happens when you take all the images from one of the repos that fails and optimize them in parallel, and do that in a loop until we over-optimize one of those images. Once we have a reproducible situation, we can enable debug logging to get more information.
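Something along these lines could serve as that stress loop (a sketch only; the image directory and the "shrank by more than half" threshold are assumptions, not from the issue):

```csharp
using System;
using System.IO;
using System.Threading.Tasks;
using ImageMagick;

// Sketch of the suggested stress test: repeatedly re-compress the images from a
// failing repo in parallel until one of them shrinks implausibly for a lossless pass.
// "repro-images" and the 50% threshold are placeholder assumptions.
public static class ParallelStressTest
{
    public static void Main()
    {
        var images = Directory.GetFiles("repro-images", "*.png", SearchOption.AllDirectories);

        for (var iteration = 0; ; iteration++)
        {
            var suspicious = false;

            Parallel.ForEach(images, path =>
            {
                var before = new FileInfo(path).Length;
                new ImageOptimizer().LosslessCompress(path); // one optimizer per task, to rule out shared state
                var after = new FileInfo(path).Length;

                if (after < before / 2)
                {
                    Console.WriteLine($"iteration {iteration}: {path} went {before} -> {after} bytes");
                    suspicious = true;
                }
            });

            if (suspicious)
                break; // reproduced: re-run this file alone with Magick.NET debug logging enabled
        }
    }
}
```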
👍 👍 👍 Awesome! Great news