Add `DrawGeometry` function to WebGLRenderer
Hello!
THREE provides a really nice, quick approach for setting up a scene, but there are times when the default render path isn't ideal. I want to propose adding a function like `DrawGeometry` to the `WebGLRenderer` API.
```js
WebGLRenderer.DrawGeometry( <Geometry|BufferGeometry>, <Material>, <Matrix4>, <Camera>, <Scene>, <RenderTarget>, <Options> )
```

Draws the geometry to the render target immediately.
- `Geometry|BufferGeometry`: the geometry to render.
- `Material`: the material to render with.
- `Matrix4`: the world matrix to transform the geometry.
- `Camera`: the camera to render with.
- `Scene`: the scene to render in the context of (for lighting, fog, etc.).
- `RenderTarget`: the target to render to.
- `Options`: a set of draw options, including things like `frustumCulled`.
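To make the shape of the proposal concrete, here is a sketch of how the function might be driven from user code. Everything here is hypothetical: `renderer.drawGeometry`, the structure of the `visibleSet` entries, and the option name are assumptions for illustration, since none of this exists in three.js today.

```js
// Hypothetical helper built on the proposed API. Each entry in
// `visibleSet` pairs a geometry with a material and a world matrix,
// so no Object3D / Mesh wrapper is needed.
function drawVisibleSet( renderer, visibleSet, camera, scene, target ) {

	let drawn = 0;

	for ( const entry of visibleSet ) {

		// Assumed signature, mirroring the proposal above.
		renderer.drawGeometry(
			entry.geometry,
			entry.material,
			entry.matrixWorld,
			camera,
			scene,
			target,
			{ frustumCulled: false } // the set is already culled upstream
		);

		drawn ++;

	}

	return drawn;

}
```

The caller, not the renderer, decides what is drawn and in what order, which is the core of the proposal.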
Use Cases
At a high level this would afford more flexibility to create custom render paths to suit different types of optimizations and scene complexity.
- In cases where you’re dealing with tens or hundreds of thousands of meshes in a scene, it can be beneficial to save transform information in a minimal representation to save memory (an array of matrices, for example). This can’t be done all that easily at the moment because THREE requires Object3Ds and Meshes to render.
- If a set of visible geometry is already known (through a custom octree frustum culling implementation or precomputed visible sets) there’s no reason to have THREE iterate over all geometry in the scene and it can be much slower to do so. This lets you control what gets drawn and iterated over.
- Some effects are just convenient to implement with a custom draw and don’t need an object in the scene hierarchy.
- You may want to draw for only X milliseconds during a frame – this would let you measure how long you’ve spent drawing so far.
In a couple of these cases the current workaround is to remove everything (or almost everything) from the scene and rebuild it specifically so THREE doesn’t spend time iterating over everything, which can itself take time.
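The "draw for only X milliseconds" use case above can be sketched independently of the renderer. This is a minimal illustration, not anything from three.js: the function name, the queue shape, and the injected `drawFn` and `now` callbacks are all assumptions (the clock is injectable so the budget logic can be exercised with a fake timer).

```js
// Sketch of a time-budgeted draw loop. `drawFn` performs one draw
// (e.g. via the proposed drawGeometry); `now` defaults to
// performance.now but can be replaced for testing.
function drawWithinBudget( queue, drawFn, budgetMs, now = () => performance.now() ) {

	const start = now();
	let index = 0;

	// Keep drawing until the queue is empty or the budget is spent.
	while ( index < queue.length && ( now() - start ) < budgetMs ) {

		drawFn( queue[ index ] );
		index ++;

	}

	// Return the unfinished remainder so it can resume next frame.
	return queue.slice( index );

}
```

A frame loop would call this each tick, carrying the returned remainder over to the next frame.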
Thoughts?
Issue Analytics
- Created 5 years ago
- Reactions: 1
- Comments: 19 (5 by maintainers)
Top GitHub Comments
Some other PRs I’ve submitted, which I think are more in line with three.js patterns, will address the issues I originally created this for. I’m hoping those can be revisited once the JSM and ES6 conversions have been addressed. I will open a new issue if this becomes a need again.
Thanks!
Yeah, I think that’s some of the same problem I’m having as well. But I don’t think you should have to manually call `objects.update( object )` to perform a draw. This is what I noticed when looking over the renderer code, some of which you mentioned:

- There’s a lot happening in `render` that could be happening in `renderBufferDirect`, such as initializing the geometry (and calling `skeleton.update`) and updating the modelView matrices.
- Shaders look like they’re fine and compiled as needed without calling the `compile` function, because `initMaterial` is called if needed in `setProgram`.
- One of the only benefits to calling `compile` ahead of time (other than to precompile shaders) is to set the `currentRenderState` variable to something, which `render` sets to null once it’s finished, so it can’t be relied on.
- The reason the materials aren’t updating is that the renderer assumes nothing needs to be updated if the program is currently in use, and the `render` function resets the cache by setting `_currentCamera` to null (which is checked in `setProgram`).

I’d be interested in updating the renderer to allow for the types of operations I’m talking about. I understand that some or a lot of what I mentioned is there for the sake of optimization. Does this belong in a new function? Would any of this be affected by the new WebGL2Renderer?
Thoughts @mugen87? @mrdoob?
Thanks!