Decoupled Materials and Pluggable Pipelines #53
jeremyong-az started this conversation in Ideas
Problem Statement
The value proposition of a game engine is usually predicated on two key axes:
The first axis is dictated by tooling quality and data architecture. Aside from empirically measuring the rate at which a team can assemble a world or simulation, there are specific properties of the engine we can examine. In particular, content velocity is a function of how well content creation scales across multiple contributors, the frequency of content breakage due to engine changes, and content modularity.
The second axis pertains to the runtime, with top-of-the-line engines excelling in how widely content can be deployed (platform breadth), the fidelity of that content, or both.
While O3DE should persist in improving its value proposition along both of these axes, there is a third axis that is arguably more critical than either: extensibility.
While many engines enjoy varying degrees of modularity, O3DE, being expressly community-driven, has a particular incentive to prioritize extensibility. Achieving content creation velocity and deployment at scale may be possible for a single vertical, be it simulation, indie game development, VFX, architectural visualization, or even AAA development, but pursuing all verticals is untenable for any single entity in the community. Not only would there be a bandwidth gap in tackling all use cases, there would be a significant knowledge gap as well. Broad adoption requires that specialists in the industry are able to leverage O3DE to achieve their goals, on their own terms.
We can apply this reasoning to motivate a future direction for the renderer. As opposed to creating an optimal pathway for a single use case, we should prioritize the ability for engine users to implement their own pathways. The Atom renderer has already made some good strides in this direction. The pass interface enables engine users and middleware authors to create plugins that inject into the main render pipeline, adding functionality as needed. For example, PopcornFX leverages this capability to add passes needed to render custom FX particles. How might Atom extend this idea further? The natural next step is to enable users to modify the main render pipeline itself.
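To make the plugin shape concrete, here is a minimal C++ sketch of the injection idea. This is not Atom's actual pass API; every type and name below (Pass, PassInjectionRegistry, the injection point string) is hypothetical and exists only to illustrate how a gem might contribute a pass to a pipeline it does not own.

```cpp
#include <functional>
#include <memory>
#include <string>
#include <utility>
#include <vector>

// Hypothetical stand-in for the engine's pass base class.
struct Pass
{
    virtual ~Pass() = default;
    virtual void Execute() = 0;
};

using PassFactory = std::function<std::unique_ptr<Pass>()>;

// Hypothetical registry. A gem (e.g. an FX middleware) registers a factory
// against a named injection point; the main pipeline invokes all factories
// registered at that point while assembling its pass graph.
class PassInjectionRegistry
{
public:
    void Register(std::string injectionPoint, PassFactory factory)
    {
        m_factories.emplace_back(std::move(injectionPoint), std::move(factory));
    }

    // Called by the pipeline as it reaches each injection point.
    std::vector<std::unique_ptr<Pass>> InstantiateAt(const std::string& injectionPoint) const
    {
        std::vector<std::unique_ptr<Pass>> passes;
        for (const auto& [point, factory] : m_factories)
        {
            if (point == injectionPoint)
            {
                passes.push_back(factory());
            }
        }
        return passes;
    }

private:
    std::vector<std::pair<std::string, PassFactory>> m_factories;
};
```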
In terms of extensibility affordances, engine adopters are restricted primarily because materials are currently coupled to the main render pipeline. While it is possible to author a custom pipeline, doing so requires a rewrite or refactor of existing material types. Removing this restriction is the primary objective of this proposal. In a success scenario, a number of realistic user stories become possible that are either impossible or prohibitively difficult today.
In all of these cases, we would like users to feel empowered to make the changes necessary. Furthermore, we would like those changes to be forward compatible with future engine upgrades on a best-effort basis. Finally, we would like these modifications to be easily distributable as gems, be they open source or commercial.
Solution Sketch
The primary vehicle to deliver on the promise above is the subdivision of monolithic content. That is, no single system should need to handle content in its totality. Concretely, the current main render pipeline handles every requirement needed to render an object: geometry handling, object displacement, motion vectors, and lighting evaluation, among others.
Some of these requirements have a single consumer. For example, the final lighting evaluation is needed only by the display mapper, which performs final frame treatment and sends the framebuffer to the display. Other requirements have multiple consumers. For example, object displacement is needed not only to transfer the object from world to camera space, but also to compute motion vectors and to transfer the object from world to light space for each shadow-casting light. Geometry handling would also need to feed into any future culling system.
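The sketch below illustrates this multiple-consumer property in plain C++ (hypothetical names throughout; in Atom these would be shader functions). One displacement function is written once by the material and consumed by three different transforms:

```cpp
// Minimal stand-ins for the engine's math types.
struct Float3 { float x, y, z; };
struct Float4x4 { float m[16]; };

Float3 Transform(const Float4x4& /*matrix*/, const Float3& v) { return v; } // stub

// The single, material-provided displacement function. Because every consumer
// calls the same function, displacement stays consistent across all passes.
Float3 DisplaceVertex(const Float3& objectPos, float /*time*/)
{
    return objectPos; // e.g. wind sway or vertex animation would go here
}

// Consumer 1: main view (world -> camera space).
Float3 ToCameraSpace(const Float4x4& view, const Float3& p, float t)
{
    return Transform(view, DisplaceVertex(p, t));
}

// Consumer 2: shadows (world -> light space, once per shadow-casting light).
Float3 ToLightSpace(const Float4x4& lightView, const Float3& p, float t)
{
    return Transform(lightView, DisplaceVertex(p, t));
}

// Consumer 3: motion vectors (current position minus last frame's position).
Float3 MotionVector(const Float4x4& view, const Float3& p, float tNow, float tPrev)
{
    const Float3 now  = Transform(view, DisplaceVertex(p, tNow));
    const Float3 prev = Transform(view, DisplaceVertex(p, tPrev));
    return { now.x - prev.x, now.y - prev.y, now.z - prev.z };
}
```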
The main idea is to define a material strictly in terms of how it transforms geometry and how it evaluates BRDF parameters. Opposite the material sits the pipeline that renders it. For example, the pipeline may ask the material "how should I transform this geometry for this view?" in any way it wants, whenever it wants. The material answers by providing a shader function adhering to a particular signature. As a free function, it may be deployed by the pipeline in any number of contexts, including but not limited to a depth pass, shadow passes, motion vector passes, mesh shading, and arbitrary compute shaders. The pipeline is, of course, not required to implement these passes, or even to ask "how do I handle geometry for this material?" at all.

Similarly, the material should provide shader functions that produce the various BRDF parameters. Pipelines have the freedom to evaluate these functions in various contexts: a DXR shader, a forward lit shader, a gbuffer pass, and more.

The integration between a material and the pipeline that leverages it occurs at build time, generally resulting in a number of synthesized and compiled shaders, depending on the pipeline's topology. Each shader synthesized by linking material shader functions with a pipeline shader has knowledge of any material resources needed (image and buffer data). The material system is required to produce each resource at runtime as needed (an assigned texture, computed buffer values), while the pipeline is responsible for binding resources (whether through direct bindings, bindless access, or some other strategy).
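As a rough sketch of that contract, again in C++ with hypothetical names (the real artifacts would be AZSL shader functions linked at build time, not function pointers), a material exposes a fixed set of function signatures and a pipeline decides which of them to deploy, and where:

```cpp
// Illustrative BRDF parameter block; the actual fields a material produces
// would be defined by its material type.
struct SurfaceParams
{
    float baseColor[3];
    float roughness;
    float metallic;
};

// Signatures standing in for the material's shader functions. The pipeline
// owns *when* and *where* these run; the material owns *what* they compute.
using TransformGeometryFn = void (*)(const float view[16], const float inPos[3], float outPos[3]);
using EvaluateSurfaceFn   = SurfaceParams (*)(const float uv[2]);

struct MaterialContract
{
    TransformGeometryFn transformGeometry; // "how should I transform this geometry for this view?"
    EvaluateSurfaceFn   evaluateSurface;   // "what are the BRDF parameters at this point?"
};

// A pipeline deploys the same functions in whichever contexts it implements...
void DepthPass(const MaterialContract& m)     { /* calls m.transformGeometry only */ }
void ShadowPass(const MaterialContract& m)    { /* calls m.transformGeometry per light */ }
void GBufferPass(const MaterialContract& m)   { /* calls both functions */ }
void RayTracingHit(const MaterialContract& m) { /* calls m.evaluateSurface */ }
// ...and may omit any of these passes entirely; the material neither knows nor cares.
```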
Changes relative to the original Material Pipelines RFC
In September 2021, an RFC describing these ideas was published and voted on by the O3DE Graphics & Audio SIG (see https://github.com/o3de/sig-graphics-audio/blob/main/rfcs/rfc-prs-20210913-1.md). Since publication, however, bandwidth to pursue the implementation was not readily available. The RFC as originally written did not contemplate the prioritization of the Material Canvas node-editing tool, and suggested a more shader-centric approach to decoupling geometry, materials, and lighting. At this point, the recommendation is to focus on shader synthesis as the primary mechanism for decoupling the various subsystems, as opposed to requiring direct changes to the shader compiler.
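A toy illustration of the build-time synthesis step, with a hypothetical splice marker and inline source strings standing in for the pipeline's and material's shader assets:

```cpp
#include <iostream>
#include <string>

// Pipeline-owned pass template with a marker where material code is spliced.
const std::string depthPassTemplate = R"(// --- pipeline prologue: vertex inputs, view constants ---
%MATERIAL_FUNCTIONS%
// --- pipeline epilogue: the pass invokes TransformGeometry as it sees fit ---
)";

// Material-owned shader function source, authored once, usable by any pipeline.
const std::string materialFunctions = R"(float3 TransformGeometry(float4x4 view, float3 objectPos) { /* ... */ }
)";

// Build-time synthesis: splice the material's functions into the pass template,
// then feed the result to the ordinary shader compiler. One shader is
// synthesized per (material, pass) pair required by the pipeline's topology.
std::string SynthesizeShader(std::string passTemplate, const std::string& functions)
{
    const std::string marker = "%MATERIAL_FUNCTIONS%";
    passTemplate.replace(passTemplate.find(marker), marker.size(), functions);
    return passTemplate;
}

int main()
{
    std::cout << SynthesizeShader(depthPassTemplate, materialFunctions);
}
```

The point of keeping synthesis at the source level is that the existing shader compiler is reused unchanged; only the build pipeline learns how to stitch material functions into pass templates.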