Using renderer.readRenderTargetPixels() inside a custom Pass
Hi,
I'm trying to use `renderer.readRenderTargetPixels()` inside the `render()` function of a custom Pass to perform GPU picking, in order to update the `FullscreenMaterial` fragment shader with the picked values.
Since all the materials in my scene are rendered using basic vertex colors, there is no need to render the scene twice into a separate `WebGLRenderTarget` just to store the vertex color values for picking.
The documentation mentions that the second parameter of the `render()` function is the `inputBuffer` from the previous pass (in my case, the previous pass is a `RenderPass`).
Since `inputBuffer` is a `WebGLRenderTarget`, I expected to be able to call `renderer.readRenderTargetPixels(inputBuffer, x, y, 1, 1, readPixel)` inside the render function, but the returned pixels are always black.
Passing `inputBuffer.texture` and the coordinates to the shader uniforms works as expected, so I don't think the problem lies in that part.
The GPU picking code does work when I use a traditional setup and render the scene twice, like so:

```javascript
const pickingTexture = new WebGLRenderTarget(doc.width, doc.height, {
	minFilter: LinearFilter,
	magFilter: NearestFilter,
	format: RGBAFormat,
	type: FloatType
});
```

with, in the render code:

```javascript
renderer.setRenderTarget(pickingTexture);
renderer.render(scene, camera);
renderer.setRenderTarget(null);
```

But this approach requires rendering the scene twice.
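As a side note, the NDC-to-pixel mapping used for the read can be factored into a small standalone helper (a hypothetical name, with clamping added so the read never falls outside the target bounds):

```javascript
// Hypothetical helper: map NDC coordinates in [-1, 1] to integer pixel
// coordinates for readRenderTargetPixels, clamped to the target bounds.
function ndcToPixel(ndcX, ndcY, width, height) {
	const x = Math.min(width - 1, Math.max(0, Math.floor((ndcX + 1) * width / 2)));
	const y = Math.min(height - 1, Math.max(0, Math.floor((ndcY + 1) * height / 2)));
	return { x, y };
}
```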
I'm confused: why does `inputBuffer` return only black values when used with `renderer.readRenderTargetPixels()`, but not when passed to the shader?
Is it even possible to call `renderer.readRenderTargetPixels()` inside the `render()` function?
Here is the render function of my custom Pass for context:

```javascript
render(renderer, inputBuffer, outputBuffer, deltaTime, stencilTest) {

	const material = this.getFullscreenMaterial();
	material.uniforms.tDiffuse.value = inputBuffer.texture; // The shader reads the pixel from the previous pass perfectly

	const readPixel = new Float32Array(4);
	// Convert the NDC position uniform to pixel coordinates in the input buffer.
	const x = Math.floor((material.uniforms.position.value.x + 1) * (inputBuffer.width / 2));
	const y = Math.floor((material.uniforms.position.value.y + 1) * (inputBuffer.height / 2));
	renderer.readRenderTargetPixels(inputBuffer, x, y, 1, 1, readPixel);
	console.log(readPixel); // Logs only [0, 0, 0, 0]

	renderer.setRenderTarget(this.renderToScreen ? null : outputBuffer);
	renderer.render(this.scene, this.camera);

}
```
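One thing worth checking (an assumption on my part, not confirmed by the code above alone): `readRenderTargetPixels` only fills the target array when the typed array's type matches the render target's texture type, and a composer's internal buffers typically default to `UnsignedByteType`, so a `Float32Array` read can come back zeroed. A type-matched read would look like the sketch below, with the renderer calls shown as comments and a small normalization helper:

```javascript
// Sketch, assuming the inputBuffer uses the default UnsignedByteType:
//
//   const bytePixel = new Uint8Array(4);
//   renderer.readRenderTargetPixels(inputBuffer, x, y, 1, 1, bytePixel);
//
// The raw 0-255 bytes can then be normalized back to 0..1 floats:
function bytePixelToFloats(bytePixel) {
	return Array.from(bytePixel, (b) => b / 255);
}
```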
Any input is appreciated, thanks!
Issue Analytics

- Created 3 years ago
- Comments: 14 (13 by maintainers)
Top GitHub Comments
I was stuck conceptually for a little bit, but I think I figured it out. The critical step in raycasting from the camera via a depth-buffer read is to take that depth, convert the pixel's location into NDC to get all three coordinates, and unproject the result with the camera; three.js provides a method for this, and the interface is exactly the same one used by three's Raycaster (you have to give it NDC x and y).

It certainly seems that `perspectiveDepthToViewZ` does the portion of this work that doesn't account for the X and Y values: it produces the eye-space Z coordinate, which is fine as an approximation of the distance from the camera near the center of the frustum, but it doesn't capture the actual distance when off center!

That was my problem: I didn't change the frame buffer type of the composer! I solved the problem with
Now `renderer.readRenderTargetPixels()` in the pass's `render()` function works as expected. Thank you for the quick reply, and bravo for the library: pleasant to use and really well designed. Keep up the great work!
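For reference, the `perspectiveDepthToViewZ` conversion discussed in the first comment above can be written in plain JavaScript, mirroring the formula in three.js's packing shader chunk:

```javascript
// Convert a [0, 1] perspective depth-buffer value back to view-space Z.
// The result is negative, since view space looks down -Z in three.js:
// depth 0 maps to -near, depth 1 maps to -far.
function perspectiveDepthToViewZ(depth, near, far) {
	return (near * far) / ((far - near) * depth - far);
}
```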