Reconsider removal of inline sRGB decode
Decided to start a new issue rather than adding comments on the closed ones, hope that's OK.
I’d like to request a reconsideration of the inline sRGB decode removal from #23129.
The WebGL1 fallback involves forcibly disabling mipmaps if EXT_sRGB is supported, or a costly CPU-side conversion if not - both of these are significant regressions in WebGL1 functionality.
As discussed over on #22689, it seems WebGL1 support is still accepted as a current necessity. iOS is the main reason this remains an important consideration - roughly 20% of iOS users are still not on iOS 15, the first version where WebGL2 was enabled by default (prior to that the implementation was buggy enough that enabling it was pointless anyway).
iOS 15 uses ANGLE for its WebGL2 implementation, but with the Metal backend which is pretty new - it is usable and works correctly for a fair amount of content, but there are known bugs and regressions even in the latest iOS 15.4. Therefore sticking with WebGL1, even on the latest iOS, is one practical way to avoid the worst implementation bugs whilst Apple’s WebGL2 implementation settles down and becomes more reliable.
I understand that doing sRGB decode in the shader leads to incorrect interpolation, and without using the correct sRGB internal formats, mipmap filtering will be incorrect. I think the change in #23129 will lead to worse practical effects for WebGL1 content, especially the CPU-side conversion.
Here are some example values:
| 8-bit sRGB value | Linear value in [0,1] | Linear × 255 (unrounded) | Stored 8-bit linear | Re-encoded 8-bit sRGB |
|---|---|---|---|---|
| 7 | 0.002125 | 0.54 | 1 | 13 |
| 12 | 0.003677 | 0.94 | 1 | 13 |
| 17 | 0.005605 | 1.43 | 1 | 13 |
Essentially any pixel with an sRGB value in the range [7, 17] will end up as 1 in the CPU-converted texture, and be rounded to 13 when the shader encodes as sRGB for output (NB: I’ve assumed rounding here, whereas it looks like the CPU conversion uses Math.floor - the logic still stands but exact numbers may vary).
In my view this is likely to cause much more severe artefacts than the mathematically incorrect interpolation.
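To make the banding concrete, here is a minimal sketch in plain JavaScript that reproduces the table values. It assumes the standard sRGB transfer functions; as noted above, the actual CPU conversion may use Math.floor rather than rounding, so exact numbers can vary.

```js
// Standard sRGB <-> linear transfer functions (assumed, not three.js source).
function srgbToLinear(c /* [0,1] */) {
  return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}
function linearToSrgb(c /* [0,1] */) {
  return c <= 0.0031308 ? c * 12.92 : 1.055 * Math.pow(c, 1 / 2.4) - 0.055;
}

for (const srgb8 of [7, 12, 17]) {
  const linear = srgbToLinear(srgb8 / 255);            // e.g. 0.002125 for 7
  const stored8 = Math.round(linear * 255);            // 8-bit linear texture stores 1
  const reencoded8 = Math.round(linearToSrgb(stored8 / 255) * 255); // comes back out as 13
  console.log(srgb8, linear.toFixed(6), stored8, reencoded8);
}
```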
A sampler half-way between two pixels with sRGB values (7/255) and (17/255) should be computed after converting each sample to linear space - i.e. it should be 0.002125 * 0.5 + 0.005605 * 0.5 = 0.003865. "Incorrect" linear blending would compute it as (7 * 0.5 + 17 * 0.5) / 255, i.e. (12/255), and then do the sRGB conversion on that result, giving 0.003677. It's not quite right, but it's not that bad.
The problem is that the linear values are just an intermediate stage (they need to be encoded back to sRGB for output anyway) and 8 bits is just not enough precision for storing them. In the shader, as long as the precision isn't lowp, you'd be guaranteed more precision for those intermediate results.
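A quick numerical sketch of the two filtering orders for those neighbouring texels (plain JavaScript, same assumed transfer function as in the previous snippet):

```js
const srgbToLinear = (c) =>
  c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);

// "Correct": convert each sample to linear, then interpolate.
const correct = 0.5 * srgbToLinear(7 / 255) + 0.5 * srgbToLinear(17 / 255); // ~0.003865

// Post-filter decode: interpolate the raw sRGB values, then convert once.
const postFilter = srgbToLinear((0.5 * 7 + 0.5 * 17) / 255);                // ~0.003677

console.log(correct, postFilter, Math.abs(correct - postFilter));           // small error
```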
The overall question is what to do about sRGB handling in WebGL1 contexts. One option is simply not to support it - Unity, for example, only lets you use "linear" lighting if WebGL1 support is disabled, which means that otherwise both WebGL1 and WebGL2 contexts end up running in "gamma" mode with the mathematically incorrect lighting calculations.
I think three.js's previous behaviour of emulating the decode in the shader, post-sampling, was reasonable: it gives mathematically correct behaviour in WebGL2, plus a visually consistent fallback in WebGL1. What's more, since the intermediate values use sufficient precision, it wouldn't introduce banding artefacts for a simple "passthrough" unlit shader, unlike the current CPU fallback.
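For reference, the post-sampling emulation is just the piecewise sRGB EOTF applied in the fragment shader - something along these lines (GLSL embedded as a JS string; a sketch of the idea, not the exact three.js shader chunk):

```js
const sRGBDecodeChunk = /* glsl */ `
vec4 sRGBToLinear( in vec4 value ) {
  // Piecewise sRGB EOTF, applied after sampling at mediump/highp precision.
  vec3 lo = value.rgb / 12.92;
  vec3 hi = pow( ( value.rgb + vec3( 0.055 ) ) / 1.055, vec3( 2.4 ) );
  return vec4( mix( hi, lo, vec3( lessThanEqual( value.rgb, vec3( 0.04045 ) ) ) ), value.a );
}
`;
```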
WebGL1 + EXT_sRGB looks like a reasonable "correct on WebGL1" approach, but the lack of mipmapping is annoying. sRGB-correct mips could be computed on the CPU side, but there'd be a performance cost. Perhaps the best balance is to use EXT_sRGB only when mipmaps are explicitly disabled, rather than forcing that choice on users.
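A minimal sketch of that balance, using a hypothetical uploadSRGBTexture helper (not three.js renderer code):

```js
// WebGL1: use EXT_sRGB when the texture doesn't need mipmaps, otherwise
// upload raw sRGB bytes and decode in the fragment shader after sampling.
function uploadSRGBTexture(gl, image, useMipmaps) {
  const ext = gl.getExtension('EXT_sRGB');
  const texture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, texture);

  if (ext && !useMipmaps) {
    // Hardware-correct decode and filtering; generateMipmap() is not allowed
    // on EXT_sRGB textures, which is why this branch is mipmap-free.
    gl.texImage2D(gl.TEXTURE_2D, 0, ext.SRGB_ALPHA_EXT, ext.SRGB_ALPHA_EXT,
                  gl.UNSIGNED_BYTE, image);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
    return { texture, decodeInShader: false };
  }

  // Fallback: keep the raw sRGB bytes and let the shader decode post-sampling.
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
  if (useMipmaps) gl.generateMipmap(gl.TEXTURE_2D);
  return { texture, decodeInShader: true };
}
```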
Top GitHub Comments
Yes. Right now, video textures still use inline decodes and also texImage2D() instead of texStorage2D(). Browser issues and problems with determining the dimensions of a video texture have prevented a migration so far (see https://github.com/mrdoob/three.js/issues/23109#issuecomment-1004640157). But I guess when we dump WebGL 1 and the related fallbacks and code paths, we are going to reconsider this issue.

In any event, #23129 is not going to be reverted, so this issue can be closed.
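For context on the texStorage2D() point, a rough illustration of why the video dimensions matter (my sketch, not the three.js code path): texStorage2D() allocates immutable storage sized up front, while texImage2D() can simply (re)define the texture from whatever the current frame is.

```js
// Assumes a WebGL2 context and a <video> element with loaded metadata.
const gl = document.createElement('canvas').getContext('webgl2');
const video = document.querySelector('video');

// texStorage2D(): immutable storage, so width/height must be known (and stable)
// before allocation; frames can then only be respecified via texSubImage2D().
const immutableTex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, immutableTex);
gl.texStorage2D(gl.TEXTURE_2D, 1, gl.SRGB8_ALPHA8, video.videoWidth, video.videoHeight);
gl.texSubImage2D(gl.TEXTURE_2D, 0, 0, 0, video.videoWidth, video.videoHeight,
                 gl.RGBA, gl.UNSIGNED_BYTE, video);

// texImage2D(): mutable, so the texture is just redefined from the current frame.
const mutableTex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, mutableTex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.SRGB8_ALPHA8, gl.RGBA, gl.UNSIGNED_BYTE, video);
```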
As @donmccurdy mentioned in that issue - that example is designed to make this obvious, as it tests 0.5 interpolation between two vastly different pixel values. It really overstates the severity of the issue in general, and as I mention above it’s even spec-compliant to do the conversion post-filtering.
In practice if those intermediate values between two pixels are important then just increasing texture resolution will reduce the error introduced by post-filtering conversion. The precision errors introduced in the CPU fallback are likely to have a much more severe real-world impact.
It seems the settled opinion from #22689 is that it's too soon to remove WebGL1 support. I'm not suggesting removing the use of SRGB8_ALPHA8 or EXT_sRGB where available, just that the inline shader decode is a much better WebGL1 fallback than a lossy CPU-side 8-bit conversion.
If the view of the project is that from r136 onward the WebGL1 output will be regressed in order to simplify the code for WebGL2, and that more divergence can and should be expected going forward, that's certainly a reasonable policy - but it is one that would mean us sticking at r136 for our projects until we're prepared to give up on WebGL1.
[There are certain ecosystem issues with react-three-fiber packages that bump peer dependencies in patch versions, leading to npm install usually requiring the latest three - it's not a three.js issue, but it does bring practical challenges to trying to stay on an older version right now.]