
WebGLRenderTarget: Allow opt-in for output transforms / image formation #29429

Open
donmccurdy opened this issue Sep 17, 2024 · 6 comments

donmccurdy commented Sep 17, 2024

Description

In recent three.js versions, renderer.outputColorSpace, renderer.toneMapping, and renderer.toneMappingExposure (jointly, the image formation step) are applied only when writing to the canvas drawing buffer (or WebXR), and are disabled when writing to a render target. I think that's the right default. But it does constrain users' choices, and can force an additional (expensive!) pass in a post-processing stack. Based on feedback, this is problematic for mobile applications.

Solution

I'd like to suggest that we add a new property to WebGLRenderTarget, perhaps called target.needsOutputTransform. If enabled, WebGLRenderer would apply ... when drawing to that render target. I believe renderer.outputColorSpace is not relevant here, since the render target already has a .colorSpace property.

Alternatives

Alternatively, we could make WebGLRenderer always apply these settings when drawing to a render target, and require explicit changes to the renderer's properties by the user. I expect this would be a significant breaking change, however.

Additional context

I don't know what this looks like in WebGPURenderer, or whether similar changes would be required there.

/cc @WestLangley


donmccurdy commented Sep 20, 2024

> I believe renderer.outputColorSpace is not relevant here, since the render target already has a .colorSpace property.

Hm. If the user's intention is to force blending in a particular color space like sRGB (when building a 2D compositing application, for example), then expressing that in terms of the render target's color space won't be correct. WebGL decodes an sRGB framebuffer to linear before blending, so it blends in linear space whether writing to a linear or an sRGB framebuffer. That is in contrast to writing to the drawing buffer, where it blends in sRGB. So instead, the custom blending space use case might require:

```js
renderer.outputColorSpace = SRGBColorSpace;

const target = new WebGLRenderTarget( width, height, {
  type: HalfFloatType,
  colorSpace: NoColorSpace, // ???
  needsOutputTransform: true
} );
```

I don't love that; this needs some thought.
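To make the difference concrete, here is a minimal plain-JavaScript sketch of the two blending behaviors, using the standard sRGB transfer functions (nothing here is three.js API):

```javascript
// Plain-JS sketch of why the blending space matters (no three.js needed).
// Transfer functions follow the standard sRGB encoding.
function srgbToLinear(c) {
  return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}
function linearToSrgb(c) {
  return c <= 0.0031308 ? c * 12.92 : 1.055 * Math.pow(c, 1 / 2.4) - 0.055;
}

// Composite 50% white over black, per channel.
// Blending directly in sRGB space:
const blendSrgb = 0.5 * 1.0 + 0.5 * 0.0;
// Blending in linear space (what WebGL does for sRGB framebuffers),
// then encoding the result back to sRGB:
const blendLinear = linearToSrgb(0.5 * srgbToLinear(1.0) + 0.5 * srgbToLinear(0.0));

console.log(blendSrgb.toFixed(3));   // "0.500"
console.log(blendLinear.toFixed(3)); // "0.735" (a visibly lighter mid-gray)
```

The same composite yields a noticeably lighter result when the blend happens in linear space, which is exactly the kind of discrepancy reported in the linked Discourse thread.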

Context: https://discourse.threejs.org/t/wrong-colors-in-transparent-materials-when-using-effectcomposer-post-processing/70895


donmccurdy commented Oct 2, 2024

Thinking through implications.

| renderer.outputColorSpace | rtt.colorSpace | rtt.needsOutputTransform | rtt contents | blending space |
| --- | --- | --- | --- | --- |
| sRGB | * | false | Linear-sRGB | Linear-sRGB |
| sRGB | sRGB | true | sRGB | Linear-sRGB |
| sRGB | Linear-sRGB | true | Linear-sRGB | Linear-sRGB |
| sRGB | NoColorSpace ⚠️ | true | sRGB ⚠️ | sRGB ⚠️ |
| Linear-sRGB | * | false | Linear-sRGB | Linear-sRGB |
| Linear-sRGB | sRGB | true | sRGB | Linear-sRGB |
| Linear-sRGB | Linear-sRGB | true | Linear-sRGB | Linear-sRGB |
| Linear-sRGB | NoColorSpace | true | Linear-sRGB | Linear-sRGB |
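For what it's worth, the rows above reduce to a small decision function; this plain-JS sketch (hypothetical names and string constants, not proposed three.js API) simply restates the table:

```javascript
// Hypothetical decision function restating the table above row by row.
// Names and string constants are illustrative, not the three.js API.
function resolveRtt(outputColorSpace, rttColorSpace, needsOutputTransform) {
  if (!needsOutputTransform) {
    // No output transform: contents stay linear, blending stays linear.
    return { contents: 'Linear-sRGB', blending: 'Linear-sRGB' };
  }
  if (rttColorSpace === 'NoColorSpace') {
    // Deferring to the output color space: the problematic rows marked ⚠️.
    return { contents: outputColorSpace, blending: outputColorSpace };
  }
  // Transform enabled with an explicit target color space:
  // contents follow the target, but WebGL still blends in linear.
  return { contents: rttColorSpace, blending: 'Linear-sRGB' };
}
```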

The problem with using NoColorSpace to defer to the output color space, and so force WebGL to blend in that space (e.g. sRGB), is that then only the author knows the contents are sRGB; the default decoding won't happen, so the render target will need a lot of special handling. Not to mention, we generally want NoColorSpace to mean "skip conversions", and this would do the opposite.

So, unfortunately, I don't see a way to allow users to do blending in sRGB space for artistic reasons without abandoning a linear workflow, which we do allow but don't generally recommend.

With that, my previous comment can be ignored, and we're back to what I'd suggested in the OP:

> I believe renderer.outputColorSpace is not relevant here, since the render target already has a .colorSpace property.

Another possible API:

```js
renderer.setRenderTarget( target, undefined, undefined, needsToneMapping = true );
```


gkjohnson commented Oct 2, 2024

> But it does constrain users' choices, and can force an additional (expensive!) pass in a post-processing stack. Based on feedback, this is problematic for mobile applications.

Do you have an example of when specifically this is a problem? Is there a case where you want a render target to have tone mapping etc applied? I've never had this issue and using tone mapping etc when writing to render targets shouldn't be the default, imo.

The way I see it is that all these effects should be applied when the image is displayed. When performing forward rendering to the canvas, this will happen in whatever surface shader is being used to render the objects. When using a postprocessing pipeline of some sort, these transformations need to be applied whenever the last postprocessing step is written to the canvas. There's no reason it has to be a whole separate pass; that's just how three.js' example processing pipeline is set up. The pmndrs/postprocessing project merges all the postprocessing code it can into a single shader, if I recall right, in order to avoid expensive extra passes. Adding a flag to RenderTarget to avoid an extra post processing pass feels like a heavy-handed fix to a fixable flaw in the three.js postprocessing pipeline, unless there are other cases I'm not thinking of.
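The "merge into the final pass" idea can be sketched with per-channel math in plain JavaScript; in practice this logic would live in GLSL at the end of the last pass's fragment shader. Reinhard tone mapping and the sRGB transfer function are stand-ins chosen for illustration, not three.js internals:

```javascript
// Sketch: folding tone mapping + sRGB output encoding into the final
// post-processing step, rather than running a dedicated extra pass.
function linearToSrgb(c) {
  return c <= 0.0031308 ? c * 12.92 : 1.055 * Math.pow(c, 1 / 2.4) - 0.055;
}
function finalPassChannel(linearValue, exposure) {
  const exposed = linearValue * exposure;      // toneMappingExposure
  const toneMapped = exposed / (1 + exposed);  // Reinhard operator (stand-in)
  return linearToSrgb(toneMapped);             // output encoding
}

// An HDR value well above 1.0 is compressed into displayable range:
console.log(finalPassChannel(4.0, 1.0).toFixed(3)); // "0.906"
```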

edit: I missed that you linked to a Discourse post, and that it is specifically a transparency issue. I'll take a look at it in a bit.


gkjohnson commented Oct 4, 2024

> Context: https://discourse.threejs.org/t/wrong-colors-in-transparent-materials-when-using-effectcomposer-post-processing/70895

This is a pretty annoying case 😅 Unfortunately, as you've pointed out, it doesn't seem like there's a way to easily control the blending color space in WebGL. The only thing I can think of for enabling a toggle while using linear color rendering is to go back to handling all color transformations manually and otherwise storing all data in buffers as sRGB, which has a lot of other implications and is a lot to maintain.

I'm hoping unmanaged color can be used to handle this case where users need to match behavior from other applications (e.g. designers' requests), but it looks like some changes are needed. Otherwise, I'd think you might need a custom composite step when rendering transparent objects, where you perform the blending calculations manually in the desired color space.


donmccurdy commented Oct 4, 2024

I'm starting to feel that solutions to "support alpha blending in sRGB space" (if we choose to support it) will need to be different from the proposed WebGLRenderTarget changes. Notably, disabling color management (even if changed as proposed in #29549) may not be enough, as I think the OP in the Discord thread would like to keep a proper linear rendering workflow, but also to blend in sRGB space. If they didn't want a linear lit rendering workflow and are just concerned with blending, they could force an sRGB working color space for compositing, but with the linear rendering requirement I worry it's too rare and hard to support.

> Adding a flag to RenderTarget to avoid an extra post processing pass feels like a heavy-handed fix to a fixable flaw in the three.js postprocessing pipeline, unless there are other cases I'm not thinking of.

This has been coming up a lot in the pmndrs discord, let me see if I can get a more concrete example. I think it is mainly a performance problem.


gkjohnson commented Oct 4, 2024

> as I think the OP in the Discord thread would like to keep a proper linear rendering workflow, but also to blend in sRGB space.

It wasn't clear to me if that's what they wanted. From the thread they say the following:

> I’m using THREE.ColorManagement.enabled = true; and renderer.outputColorSpace = THREE.SRGBColorSpace; as recommended in the color management guide (which by the way I need to be set exactly like that so the colors are correctly represented from their hex values in the other scenes not affected by the EffectComposer).

"opting out" of color management should not effect the end look of hex colors at all, which makes me think something else is going on - like one of the issues relating to textures mentioned in #29549 (comment). The color space docs page they reference also mentions nothing about having to set "outputColorSpace" to linear in order to retain CSS colors. Definitely agreed, though, that if linear rendering is required then that + sRGB blending isn't something we can easily support.

> This has been coming up a lot in the pmndrs discord, let me see if I can get a more concrete example. I think it is mainly a performance problem.

I'll be curious to hear. I'm not sure what the solution you proposed provides in terms of performance over what can already be done by combining passes into a single shader, though (e.g. making sure the final post-processing pass includes the tone mapping etc. fragments). I expect postprocessing is in part just more expensive than people realize, especially on mobile with high pixel ratios.

Relating to color spaces, I wonder what kind of impact using SRGB8_ALPHA8 versus floating-point buffers has on mobile performance.
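On that last question, a back-of-envelope comparison of per-framebuffer memory (the 1080x1920 resolution at pixel ratio 1 is an assumption for illustration):

```javascript
// Back-of-envelope framebuffer sizes; a 1080x1920 portrait phone screen
// at pixelRatio 1 is assumed for illustration.
const pixels = 1080 * 1920;
const srgb8Bytes = pixels * 4;   // SRGB8_ALPHA8: 4 bytes per pixel
const rgba16fBytes = pixels * 8; // RGBA16F (HalfFloatType): 8 bytes per pixel

console.log((srgb8Bytes / 1e6).toFixed(1) + " MB");   // "8.3 MB"
console.log((rgba16fBytes / 1e6).toFixed(1) + " MB"); // "16.6 MB"
```

Every pass that reads and writes half-float buffers moves roughly twice the bytes, which compounds quickly in a multi-pass postprocessing chain on bandwidth-limited mobile GPUs.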
