[RFC] How do we want to deal with images that include alpha channels?
This discussion started in https://github.com/pytorch/vision/pull/5500#discussion_r816503203, and @vfdev-5 and I continued it offline.
PIL, as well as our image reading functions, supports RGBA images, but our color transformations currently only support RGB images and silently ignore an extra alpha channel. This leads to wrong results. One thing we agreed upon is that these transforms should fail if anything but 3 channels is detected.
Still, some datasets include non-RGB images, so we need to deal with this for a smooth UX. Previously, we implicitly converted every image to RGB before returning it from a dataset. Since we no longer decode images in the datasets, we need to provide a solution for users here. I currently see two possible options:
- We could deal with this on a per-image basis within the dataset. For example, the train split of ImageNet contains a single RGBA image. We could simply perform an appropriate conversion for irregular image modes in the dataset, so this issue is abstracted away from the user (see the first sketch after this list). tensorflow-datasets uses this approach: https://github.com/tensorflow/datasets/blob/a1caff379ed3164849fdefd147473f72a22d3fa7/tensorflow_datasets/image_classification/imagenet.py#L105-L131
- The most common non-RGB images in datasets are grayscale images. For example, the train split of ImageNet contains 19970 grayscale images. Thus, users will need a `transforms.ConvertImageColorSpace("rgb")` in most cases anyway. If that also supported RGBA to RGB conversions, this problem would be solved as well. The conversion happens with the formula

  `pixel_new = (1 - alpha) * background + alpha * pixel_old`

  where `pixel_{old|new}` is a single value from a color channel. Since we don't know `background`, we either need to make assumptions or require the user to provide a value for it. I'd wager a guess that in 99% of the cases the background is white, i.e. `background == 1`, but we can't be sure about that. Another issue is that the user has no option to set the background on a per-image basis in the transforms pipeline if that is needed.
  In the special case of `alpha == 1` everywhere, the equation above simplifies to `pixel_new = pixel_old`, which is equivalent to stripping the alpha channel. We could check for that and only perform the RGBA to RGB conversion if either the condition holds or the user supplies a background color (see the second sketch after this list).
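For option 1, here is a minimal sketch of what such a per-image normalization inside a dataset could look like; `load_rgb` is a hypothetical helper, not an existing torchvision API:

```python
from PIL import Image

def load_rgb(path):
    # Hypothetical dataset-side helper: decode the image and normalize
    # irregular modes (RGBA, palette, grayscale, ...) to RGB so that
    # downstream transforms always see exactly 3 channels.
    with Image.open(path) as image:
        return image.convert("RGB")
```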
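For option 2, here is a minimal sketch of the compositing formula on tensors, including the `alpha == 1` shortcut; `rgba_to_rgb` is a hypothetical name, and the sketch assumes a float RGBA image with values in `[0, 1]` and shape `(..., 4, H, W)`:

```python
import torch

def rgba_to_rgb(image: torch.Tensor, background: float = 1.0) -> torch.Tensor:
    # Hypothetical sketch, assuming a float RGBA tensor of shape
    # (..., 4, H, W) with values in [0, 1].
    rgb = image[..., :3, :, :]
    alpha = image[..., 3:, :, :]
    if torch.all(alpha == 1.0):
        # alpha == 1 everywhere: compositing is a no-op, so just strip
        # the alpha channel.
        return rgb
    # pixel_new = (1 - alpha) * background + alpha * pixel_old
    return (1.0 - alpha) * background + alpha * rgb
```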
@pmeier You mean the rgba2rgb method, https://github.com/python-pillow/Pillow/blob/c58d2817bc891c26e6b8098b8909c0eb2e7ce61b/src/libImaging/Convert.c#L443-L453, right? How do you see that it assumes a white background? From the implementation we can say that the pointers `in` and `out` have 4 values per pixel. The last `in++` skips over the fourth input value, and `*out++ = 255;` sets the fourth output value to 255. Pixel size is 4 even for RGB: https://github.com/python-pillow/Pillow/blob/95cff6e959bb3c37848158ed2145d49d49806a31/src/libImaging/Storage.c#L125-L129
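In other words, Pillow simply discards the alpha channel instead of compositing against a background. A small self-contained check of that behavior:

```python
from PIL import Image

# A fully transparent red pixel ...
rgba = Image.new("RGBA", (1, 1), (255, 0, 0, 0))
rgb = rgba.convert("RGB")
# ... still comes out as plain red: the alpha value is dropped, not
# composited against any background.
print(rgb.getpixel((0, 0)))  # (255, 0, 0)
```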
The PIL docs say that dithering is used for conversions like RGB to P or 1: https://github.com/python-pillow/Pillow/blob/92c26a77ca53a2bfbd8804f009c6c8755d0e5a43/src/PIL/Image.py#L921-L922. The same is visible in the code:
https://github.com/python-pillow/Pillow/blob/c58d2817bc891c26e6b8098b8909c0eb2e7ce61b/src/libImaging/Convert.c#L1572-L1574
https://github.com/python-pillow/Pillow/blob/c58d2817bc891c26e6b8098b8909c0eb2e7ce61b/src/libImaging/Convert.c#L1608-L1610
After some offline discussion, we decided to align with PIL for now. The only difference is that we will fail the transformation if the alpha channel is not at its maximum value everywhere. This way we can later implement the correct conversion detailed in my top comment without worrying about BC.
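A minimal sketch of that agreed behavior, assuming an RGBA tensor with the channels in dimension -3; the function name is hypothetical:

```python
import torch

def rgba_to_rgb_strict(image: torch.Tensor) -> torch.Tensor:
    # Sketch of the agreed behavior: only strip the alpha channel if it
    # is at its maximum value everywhere, otherwise fail loudly.
    max_value = 1.0 if image.is_floating_point() else torch.iinfo(image.dtype).max
    alpha = image[..., 3, :, :]
    if not bool((alpha == max_value).all()):
        raise RuntimeError(
            "RGBA to RGB conversion requires a fully opaque alpha channel"
        )
    # With alpha at its maximum, the compositing formula reduces to
    # pixel_new = pixel_old, i.e. stripping the alpha channel.
    return image[..., :3, :, :]
```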