How it works
This page explains how the functionality behind kMotion works.
Data diagram for kMotion.
If the user wishes to use Motion Blur, Volumes referencing Volume Profiles with the Motion Blur Volume Component are added to the scene. These are collected by the VolumeStack and can be accessed by the MotionRendererFeature.
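For illustration, here is a minimal sketch of how the Motion Blur settings can be read from the VolumeStack. The component type name (MotionBlur) and the IsActive() check are assumptions based on standard Volume framework usage, not a copy of kMotion's exact API.

```csharp
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

static class MotionBlurVolumeSketch
{
    // Fetch the Motion Blur settings currently resolved by the global VolumeStack.
    public static MotionBlur GetMotionBlur(out bool isActive)
    {
        VolumeStack stack = VolumeManager.instance.stack;
        MotionBlur motionBlur = stack.GetComponent<MotionBlur>();

        // An inactive or missing component means motion blur should not render this frame.
        isActive = motionBlur != null && motionBlur.IsActive();
        return motionBlur;
    }
}
```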
The MotionRendererFeature controls rendering motion vectors and, optionally, motion blur. These are performed with two separate ScriptableRenderPass classes. Motion vectors are rendered using MotionVectorRenderPass, which runs at the render pass event AfterRenderingOpaques, and motion blur is rendered using MotionBlurRenderPass, which runs at AfterRenderingPostProcessing. Ideally the motion blur pass would run part-way through the post-processing stack, after Depth of Field, but no render pass event is available at that point.
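A rough sketch of how a renderer feature can enqueue two passes at these events is shown below. The class names are illustrative stubs, not kMotion's actual implementation.

```csharp
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

// Minimal stubs standing in for kMotion's actual pass classes.
class MotionVectorsPassSketch : ScriptableRenderPass
{
    public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData) { }
}

class MotionBlurPassSketch : ScriptableRenderPass
{
    public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData) { }
}

public class MotionFeatureSketch : ScriptableRendererFeature
{
    MotionVectorsPassSketch m_MotionVectorsPass;
    MotionBlurPassSketch m_MotionBlurPass;

    public override void Create()
    {
        // Motion vectors render after opaques; motion blur after post-processing,
        // matching the events described above.
        m_MotionVectorsPass = new MotionVectorsPassSketch { renderPassEvent = RenderPassEvent.AfterRenderingOpaques };
        m_MotionBlurPass = new MotionBlurPassSketch { renderPassEvent = RenderPassEvent.AfterRenderingPostProcessing };
    }

    public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
    {
        renderer.EnqueuePass(m_MotionVectorsPass);
        renderer.EnqueuePass(m_MotionBlurPass);
    }
}
```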
Before enqueuing these passes, some setup is required. For each Camera, the MotionRendererFeature calculates a MotionData instance. These are pooled in a Dictionary and used to track persistent camera motion data, such as frame counts and previous frame matrices, between frames. This data is sent to the MotionVectorRenderPass and used for rendering motion vectors.
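A simplified sketch of what such per-camera pooling might look like follows. The class and field names, and the way the matrices are rolled forward, are assumptions for illustration rather than kMotion's exact types.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Illustrative per-camera motion data.
sealed class MotionDataSketch
{
    public int lastFrameActive = -1;
    public Matrix4x4 viewProjectionMatrix = Matrix4x4.identity;
    public Matrix4x4 previousViewProjectionMatrix = Matrix4x4.identity;
}

static class MotionDataPoolSketch
{
    static readonly Dictionary<Camera, MotionDataSketch> s_MotionDatas =
        new Dictionary<Camera, MotionDataSketch>();

    public static MotionDataSketch GetOrCreate(Camera camera)
    {
        if (!s_MotionDatas.TryGetValue(camera, out MotionDataSketch data))
        {
            data = new MotionDataSketch();
            s_MotionDatas.Add(camera, data);
        }
        return data;
    }

    // Called once per camera per frame: rolls the current view projection
    // matrix into "previous" and stores this frame's matrix.
    public static MotionDataSketch Update(Camera camera)
    {
        MotionDataSketch data = GetOrCreate(camera);
        if (data.lastFrameActive == Time.frameCount)
            return data; // already updated this frame

        Matrix4x4 gpuProjection = GL.GetGPUProjectionMatrix(camera.nonJitteredProjectionMatrix, true);
        Matrix4x4 viewProjection = gpuProjection * camera.worldToCameraMatrix;

        data.previousViewProjectionMatrix =
            data.lastFrameActive < 0 ? viewProjection : data.viewProjectionMatrix;
        data.viewProjectionMatrix = viewProjection;
        data.lastFrameActive = Time.frameCount;
        return data;
    }
}
```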
Before rendering motion vectors, the MotionVectorRenderPass must be configured. This involves creating the render target handle for _MotionVectorTexture, configuring and clearing it, then setting it as the active render target. Finally, the previous frame view projection matrix from the Camera's MotionData is set as a global shader variable _PrevViewProjMatrix, and the Camera's depth texture mode is set correctly for motion vectors.
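The sketch below shows this setup written against the older URP RenderTargetHandle and Configure APIs that kMotion targets. The texture format, the way the previous matrix is supplied, and the depth texture flags are assumptions.

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

class MotionVectorsSetupSketch : ScriptableRenderPass
{
    static readonly int s_PrevViewProjMatrixId = Shader.PropertyToID("_PrevViewProjMatrix");

    RenderTargetHandle m_MotionVectorHandle;

    // Supplied by the renderer feature from the camera's MotionData.
    public Matrix4x4 previousViewProjectionMatrix = Matrix4x4.identity;

    public MotionVectorsSetupSketch()
    {
        renderPassEvent = RenderPassEvent.AfterRenderingOpaques;
        m_MotionVectorHandle.Init("_MotionVectorTexture");
    }

    public override void Configure(CommandBuffer cmd, RenderTextureDescriptor cameraTextureDescriptor)
    {
        // Allocate the motion vector target (RGHalf is a common choice for 2D velocities),
        // then bind it as the active render target and clear it.
        RenderTextureDescriptor descriptor = cameraTextureDescriptor;
        descriptor.colorFormat = RenderTextureFormat.RGHalf;
        cmd.GetTemporaryRT(m_MotionVectorHandle.id, descriptor);
        ConfigureTarget(m_MotionVectorHandle.Identifier());
        ConfigureClear(ClearFlag.All, Color.black);
    }

    public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
    {
        Camera camera = renderingData.cameraData.camera;
        CommandBuffer cmd = CommandBufferPool.Get("Motion Vectors");

        // Expose the previous frame view projection matrix to the motion vector shaders.
        cmd.SetGlobalMatrix(s_PrevViewProjMatrixId, previousViewProjectionMatrix);

        // Ensure the camera produces the depth and motion vector data the shaders rely on.
        camera.depthTextureMode |= DepthTextureMode.Depth | DepthTextureMode.MotionVectors;

        context.ExecuteCommandBuffer(cmd);
        CommandBufferPool.Release(cmd);

        // ... camera and object motion vectors are drawn here ...
    }

    public override void FrameCleanup(CommandBuffer cmd)
    {
        cmd.ReleaseTemporaryRT(m_MotionVectorHandle.id);
    }
}
```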
Camera Motion Vectors.
Next, motion vectors are rendered for the camera. This involves drawing a procedural fullscreen quad using a shader specific to rendering camera motion vectors. This shader multiplies the current world position by both the current and previous frame's view projection matrices. Subtracting the latter from the former gives the velocity value for the camera, which is then written to the motion vector texture.
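The per-pixel math can be illustrated on the CPU with Unity's matrix types. This is a sketch of the idea, not the shader itself; the names are illustrative.

```csharp
using UnityEngine;

static class CameraVelocitySketch
{
    // Screen-space velocity of a world position, given the current and
    // previous frame view projection matrices.
    public static Vector2 Velocity(Vector3 positionWS, Matrix4x4 viewProj, Matrix4x4 prevViewProj)
    {
        Vector4 positionWS4 = new Vector4(positionWS.x, positionWS.y, positionWS.z, 1f);

        // Project with both matrices.
        Vector4 positionCS = viewProj * positionWS4;
        Vector4 prevPositionCS = prevViewProj * positionWS4;

        // Perspective divide to get normalized device coordinates.
        Vector2 ndc = new Vector2(positionCS.x, positionCS.y) / positionCS.w;
        Vector2 prevNdc = new Vector2(prevPositionCS.x, prevPositionCS.y) / prevPositionCS.w;

        // The difference is the camera velocity written to the motion vector texture.
        return ndc - prevNdc;
    }
}
```

In the actual pass this runs per fragment over the procedurally drawn fullscreen quad (for example via CommandBuffer.DrawProcedural), using world positions typically reconstructed from the depth texture.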
Object Motion Vectors.
To render motion vectors for opaque objects we use the regular ScriptableRenderContext.DrawRenderers command, filtering for only opaque objects and using PerObjectData.MotionVectors as the per-object data in the drawing settings. These objects are then drawn using another shader, specific to rendering per-object motion vectors. This shader is similar; however, it uses the unity_MatrixPreviousM matrix to transform the object space vertex positions into the previous frame's world positions when calculating the previous frame position in view projection space.
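Here is a sketch of what that draw could look like inside a pass' Execute method. The shader tag, sorting criteria and override material are assumptions for illustration.

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

class ObjectMotionVectorsSketch : ScriptableRenderPass
{
    static readonly ShaderTagId k_ShaderTagId = new ShaderTagId("UniversalForward");

    // Material using the per-object motion vector shader, supplied externally.
    public Material objectMotionVectorsMaterial;

    public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
    {
        Camera camera = renderingData.cameraData.camera;

        // Request per-object motion vector data and override with the motion vector shader.
        DrawingSettings drawingSettings = CreateDrawingSettings(k_ShaderTagId, ref renderingData, SortingCriteria.CommonOpaque);
        drawingSettings.perObjectData = PerObjectData.MotionVectors;
        drawingSettings.overrideMaterial = objectMotionVectorsMaterial;

        // Filter for opaque objects only.
        FilteringSettings filteringSettings = new FilteringSettings(RenderQueueRange.opaque, camera.cullingMask);

        context.DrawRenderers(renderingData.cullResults, ref drawingSettings, ref filteringSettings);
    }
}
```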
Motion Blur.
If a Motion Blur Volume Component is active in the VolumeStack, motion blur is rendered next. The main color target is sampled twice, with each sample offset by the velocity value read from the motion vector texture multiplied by a sign value (1 for the first sample, -1 for the second), and these samples are added to the color. The result is then multiplied by the inverse of the sample count. Higher Quality values supplied by the Motion Blur Volume Component result in more samples (2, 3 and 4).
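A CPU-side sketch of the accumulation loop described above is shown below. The real work happens per pixel in the motion blur fragment shader; the exact offset weighting per iteration is an assumption.

```csharp
using UnityEngine;

static class MotionBlurSketch
{
    // sampleColor stands in for sampling the main color target at a UV.
    public static Color Blur(System.Func<Vector2, Color> sampleColor, Vector2 uv,
        Vector2 velocity, int iterations)
    {
        Color color = Color.clear;
        for (int i = 0; i < iterations; i++)
        {
            // How far along the velocity this iteration samples; the exact
            // weighting is an assumption for illustration.
            float t = (i + 0.5f) / iterations;
            Vector2 offset = velocity * t;

            // Two taps, offset in opposite directions (sign +1, then -1).
            color += sampleColor(uv + offset);
            color += sampleColor(uv - offset);
        }

        // Multiply by the inverse of the total sample count (two per iteration).
        return color / (iterations * 2f);
    }
}
```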