
Improve Visual Stability in OpenXR

note

For the Magic Leap Native API (C-API) article about visual stability, see Improve Visual Stability (C-API).

Visual stability refers to how well content appears to be anchored to the real world as the user moves while wearing an augmented reality device.

The Magic Leap 2 device renders the left and right views for the user’s eyes using a predicted headpose. However, significant latency occurs between rendering and displaying the image. To reduce the resulting artifacts, the rendered image is reprojected just before display: it is warped to match the latest headpose prediction. This improves how well the content appears to be anchored to the real world as the headpose changes.

Developers can improve visual stability for display and capture by providing additional information for reprojection.

Reprojection for Display

By default, the Magic Leap 2 device uses planar reprojection.

In many simple cases, setting the focus distance is enough to provide good visual stability. The focus distance is the distance from the user to the orthogonal reprojection plane, in meters. You can set the focus distance using the XR_ML_frame_end_info OpenXR extension.

This illustration shows the relationships between the reprojection plane, the focus distance, and the user's field of view.
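As a minimal sketch, setting the focus distance with XR_ML_frame_end_info might look like the following. Here, session, frameState, layerCount, and layers are assumed to come from your app's usual OpenXR frame loop, and the extension is assumed to be enabled at instance creation:

    // Chain XrFrameEndInfoML onto XrFrameEndInfo before calling xrEndFrame.
    XrFrameEndInfoML frameEndInfoML = {XR_TYPE_FRAME_END_INFO_ML};
    frameEndInfoML.focusDistance = 1.5f;  // distance to the reprojection plane, in meters
    frameEndInfoML.flags = 0;

    XrFrameEndInfo frameEndInfo = {XR_TYPE_FRAME_END_INFO};
    frameEndInfo.next = &frameEndInfoML;
    frameEndInfo.displayTime = frameState.predictedDisplayTime;
    frameEndInfo.environmentBlendMode = XR_ENVIRONMENT_BLEND_MODE_ADDITIVE;  // illustrative; use your app's blend mode
    frameEndInfo.layerCount = layerCount;
    frameEndInfo.layers = layers;
    xrEndFrame(session, &frameEndInfo);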

For more complex reprojection scenes, you can use the XR_MSFT_composition_layer_reprojection OpenXR extension. XR_MSFT_composition_layer_reprojection includes more advanced options and can provide better results.
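Runtimes only expose the reprojection modes they support, so it can be worth querying them before relying on one. A sketch using the extension's xrEnumerateReprojectionModesMSFT entry point, where instance and systemId are assumed to come from your app's OpenXR setup:

    // Query the reprojection modes supported for the primary stereo views.
    PFN_xrEnumerateReprojectionModesMSFT xrEnumerateReprojectionModesMSFT = NULL;
    xrGetInstanceProcAddr(instance, "xrEnumerateReprojectionModesMSFT",
                          (PFN_xrVoidFunction*)&xrEnumerateReprojectionModesMSFT);

    uint32_t modeCount = 0;
    xrEnumerateReprojectionModesMSFT(instance, systemId,
        XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO, 0, &modeCount, NULL);

    XrReprojectionModeMSFT modes[16];  // small fixed buffer keeps the sketch simple
    if (modeCount > 16) modeCount = 16;
    xrEnumerateReprojectionModesMSFT(instance, systemId,
        XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO, modeCount, &modeCount, modes);
    // modes[0..modeCount-1] now lists what the runtime supports, for example
    // XR_REPROJECTION_MODE_PLANAR_MANUAL_MSFT or XR_REPROJECTION_MODE_DEPTH_MSFT.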

The matrix below shows recommendations for most cases, for visual fidelity both on the head-mounted display and in mixed reality capture.

                  little depth variance              high depth variance
fast rendering    planar warp, no secondary views    depth warp, secondary views
slow rendering    planar warp, no secondary views    depth warp, no secondary views

See the sections below for more information.

Reprojection Methods

Image reprojection can be done in any of these ways:

  • manually specifying planar reprojection
  • automatically estimating planar reprojection from the depth buffer
  • using a depth warp with pixel-wise reprojection from the depth buffer

Planar Reprojection

Planar reprojection is the default and the fastest method in terms of GPU time. It works best when the scene’s objects have a similar depth, that is, a similar distance to the viewer.

When a scene has objects at different depths, we recommend setting the focus distance to the object the user is looking at. If you can't reliably determine what object the user is looking at, prioritize the stabilization of far content over near content.

This illustration shows three ways to set the focus distance.

The illustration above shows the scene and camera view from the top.

Left: For a single object, set the focus distance to that object.

Center: For multiple objects, set the focus distance to the object the user is looking at, if you know what it is.

Right: If you don't know what object the user is looking at, choose an average distance and prioritize far objects over near objects.

For these examples, the plane is orthogonal to the viewing direction at the focus distance. The focus distance is specified using the XR_ML_frame_end_info OpenXR extension. In the C-API, this is part of MLGraphicsFrameParamsEx.

In some cases, it can help to specify a tilted reprojection plane. For this, you'll need to use the XR_MSFT_composition_layer_reprojection OpenXR extension. Use the XR_REPROJECTION_MODE_PLANAR_MANUAL_MSFT option to provide the position and normalized normal of the plane.

The tilted line in the illustration below shows the reprojection plane at position p with normal n.

This illustration shows a tilted reprojection plane at position p and the plane's normal n.
note

The plane should always be in front of the viewer.
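A sketch of chaining the mode and plane override onto a projection layer. Here, projectionLayer is assumed to be the XrCompositionLayerProjection your app submits at xrEndFrame, and the plane values are purely illustrative, expressed in the layer's space:

    // Manual planar reprojection with a tilted plane.
    XrCompositionLayerReprojectionPlaneOverrideMSFT planeOverride = {
        XR_TYPE_COMPOSITION_LAYER_REPROJECTION_PLANE_OVERRIDE_MSFT};
    planeOverride.position = (XrVector3f){0.0f, -0.2f, -1.5f};     // point p on the plane
    planeOverride.normal   = (XrVector3f){0.0f, 0.1961f, 0.9806f}; // unit normal n, tilted, facing the viewer
    planeOverride.velocity = (XrVector3f){0.0f, 0.0f, 0.0f};       // static content

    XrCompositionLayerReprojectionInfoMSFT reprojectionInfo = {
        XR_TYPE_COMPOSITION_LAYER_REPROJECTION_INFO_MSFT};
    reprojectionInfo.reprojectionMode = XR_REPROJECTION_MODE_PLANAR_MANUAL_MSFT;
    reprojectionInfo.next = &planeOverride;

    projectionLayer.next = &reprojectionInfo;  // both structs ride the layer's next chain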

Planar from Depth Buffer

In some cases, you might not be able to specify a good focus distance manually. In these cases, you can use the content of the depth buffer to automatically estimate the focus distance. To do this, use the XR_REPROJECTION_MODE_PLANAR_FROM_DEPTH_MSFT option of XR_MSFT_composition_layer_reprojection.

The processing of the depth buffer takes additional GPU time.
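Relative to the manual sketch above, selecting this mode is a one-struct change. Note that the depth-based modes also expect the depth buffer to be submitted through the XR_KHR_composition_layer_depth extension:

    // Let the runtime estimate the reprojection plane from the depth buffer.
    // projectionLayer is assumed as above; each projection view should also
    // carry an XrCompositionLayerDepthInfoKHR on its next chain.
    XrCompositionLayerReprojectionInfoMSFT reprojectionInfo = {
        XR_TYPE_COMPOSITION_LAYER_REPROJECTION_INFO_MSFT};
    reprojectionInfo.reprojectionMode = XR_REPROJECTION_MODE_PLANAR_FROM_DEPTH_MSFT;
    projectionLayer.next = &reprojectionInfo;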

Depth Warp

Planar reprojection might not be optimal, especially for scenes with a lot of depth variance. When the app submits a depth buffer, the depth buffer can be used to warp the scene on a per-pixel level. This takes additional GPU time; the cost is scene dependent, but currently adds about 2 ms to 2.4 ms.

You select this reprojection method using the XR_REPROJECTION_MODE_DEPTH_MSFT option of XR_MSFT_composition_layer_reprojection.

For the best visual stability, continue to provide the focus distance or the reprojection plane, as in the sketch below.
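A sketch combining both, under the same assumptions as the snippets above (projectionLayer submitted with per-view depth through XR_KHR_composition_layer_depth):

    // Per-pixel depth warp, still providing a plane for content without depth.
    XrCompositionLayerReprojectionPlaneOverrideMSFT planeOverride = {
        XR_TYPE_COMPOSITION_LAYER_REPROJECTION_PLANE_OVERRIDE_MSFT};
    planeOverride.position = (XrVector3f){0.0f, 0.0f, -1.5f};  // illustrative focus point
    planeOverride.normal   = (XrVector3f){0.0f, 0.0f, 1.0f};   // facing the viewer
    planeOverride.velocity = (XrVector3f){0.0f, 0.0f, 0.0f};

    XrCompositionLayerReprojectionInfoMSFT reprojectionInfo = {
        XR_TYPE_COMPOSITION_LAYER_REPROJECTION_INFO_MSFT};
    reprojectionInfo.reprojectionMode = XR_REPROJECTION_MODE_DEPTH_MSFT;
    reprojectionInfo.next = &planeOverride;

    projectionLayer.next = &reprojectionInfo;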

This illustration shows typical timing for planar warp and depth warp.

Even though depth warp takes longer, it can help to hide occasional frame drops for scenes with high depth complexity.

Notes About Reprojection Methods

  • If both the XR_ML_frame_end_info and XR_MSFT_composition_layer_reprojection extensions are specified, the values of XR_MSFT_composition_layer_reprojection take priority.

  • Your render engine might not write transparent objects into the depth buffer. For these objects, using the depth warp reprojection method might result in visible artifacts.

  • The XR_REPROJECTION_MODE_ORIENTATION_ONLY_MSFT option of XR_MSFT_composition_layer_reprojection is not supported.

Content Velocity

By default, the virtual content is stabilized with the assumption that the virtual objects are static in world space. The Magic Leap 2 device has a color sequential display, meaning it shows red, green, and blue color channels one after another. This can cause moving objects to appear blurred with color fringes.

When XR_MSFT_composition_layer_reprojection is used, the stabilization is performed relative to the projection layer’s space. For headlocked content, use the view space instead of the projection layer’s space.

A velocity vector lets you specify the scene motion in meters per second with respect to the layer’s space. Specifying the scene motion can improve sharpness and avoid these blurring artifacts.

This illustration shows a simulation of the perceived image of a moving white circle. The left side shows the default without specifying a velocity. The right side shows the appearance when specifying a velocity that matches the motion of the white circle.

Only one velocity vector can be specified for the whole scene; it is applied on the reprojection plane.

This works well when the whole scene moves in the same direction, has a similar depth, and the user is tracking the content with their eyes. An example is grabbing and moving an object. However, if there is other content that moves differently or is static, artifacts appear.
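For illustration, a sketch that stabilizes content being dragged to the right at 0.5 m/s (a hypothetical value) using the velocity field of the plane override shown earlier:

    // Specify the scene motion through the plane override; velocity is in
    // meters per second with respect to the layer's space.
    XrCompositionLayerReprojectionPlaneOverrideMSFT planeOverride = {
        XR_TYPE_COMPOSITION_LAYER_REPROJECTION_PLANE_OVERRIDE_MSFT};
    planeOverride.position = (XrVector3f){0.0f, 0.0f, -1.0f};  // plane placed on the moving content
    planeOverride.normal   = (XrVector3f){0.0f, 0.0f, 1.0f};
    planeOverride.velocity = (XrVector3f){0.5f, 0.0f, 0.0f};   // content moving right at 0.5 m/s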

Recommendations:

  • Use this feature if all of the scene moves the same way and has a low depth complexity.
  • Consider using eye tracking to confirm which object the user is tracking with the eyes.
note

The XR_MSFT_composition_layer_reprojection extension is only specified for projection layers. Therefore, content velocity can't currently be specified for quad layers.

Reprojection for Capture

Even if virtual content is well aligned on display, it might be misaligned in capture. You can improve the alignment for capture by setting the focus distance, using depth warp, or enabling secondary views.

By default, the virtual content of the left eye's view is warped to the position of the color camera and then composited onto the camera image for the capture. This warp uses a planar reprojection at the focus distance, which means that objects at different depths might appear misaligned.

When you set the focus distance using XR_ML_frame_end_info, the focus distance you specify is also used for capture. If you use XR_MSFT_composition_layer_reprojection, those values have priority over the focus distance.

If you use depth warp by specifying XR_REPROJECTION_MODE_DEPTH_MSFT, then during capture, the left eye’s virtual camera position is changed to the color camera position and the depth warp is applied to the display’s image for the left eye. This provides the best capture quality.

Secondary Views

A secondary view is a separate render pass for the capture. That means, in addition to the two views (one for each eye), there is another render pass from the position of the color camera. This produces a high-quality capture at the cost of extra rendering time. Because the secondary view is rendered directly from the camera's position, the amount of reprojection is typically very small, so we recommend using a planar reprojection for the secondary view render pass. Using depth warp typically provides no benefit.

To enable a secondary view, use the XrSecondaryViewConfigurationLayerInfoMSFT structure of the XR_MSFT_secondary_view_configuration OpenXR extension and enable the XR_MSFT_first_person_observer OpenXR extension, as sketched below.
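A sketch of the two places this touches, assuming both extensions were enabled at instance creation and that secondaryLayers holds composition layers your app rendered from the color camera's pose:

    // 1) Enable the first-person-observer view when beginning the session.
    XrViewConfigurationType secondaryType =
        XR_VIEW_CONFIGURATION_TYPE_SECONDARY_MONO_FIRST_PERSON_OBSERVER_MSFT;

    XrSecondaryViewConfigurationSessionBeginInfoMSFT secondaryBeginInfo = {
        XR_TYPE_SECONDARY_VIEW_CONFIGURATION_SESSION_BEGIN_INFO_MSFT};
    secondaryBeginInfo.viewConfigurationCount = 1;
    secondaryBeginInfo.enabledViewConfigurationTypes = &secondaryType;

    XrSessionBeginInfo beginInfo = {XR_TYPE_SESSION_BEGIN_INFO};
    beginInfo.primaryViewConfigurationType = XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO;
    beginInfo.next = &secondaryBeginInfo;
    xrBeginSession(session, &beginInfo);

    // 2) Each frame in which the runtime reports the secondary view as active,
    //    attach its layers at xrEndFrame.
    XrSecondaryViewConfigurationLayerInfoMSFT secondaryLayerInfo = {
        XR_TYPE_SECONDARY_VIEW_CONFIGURATION_LAYER_INFO_MSFT};
    secondaryLayerInfo.viewConfigurationType = secondaryType;
    secondaryLayerInfo.environmentBlendMode = XR_ENVIRONMENT_BLEND_MODE_ALPHA_BLEND;  // illustrative
    secondaryLayerInfo.layerCount = 1;
    secondaryLayerInfo.layers = secondaryLayers;

    XrSecondaryViewConfigurationFrameEndInfoMSFT secondaryEndInfo = {
        XR_TYPE_SECONDARY_VIEW_CONFIGURATION_FRAME_END_INFO_MSFT};
    secondaryEndInfo.viewConfigurationCount = 1;
    secondaryEndInfo.viewConfigurationLayersInfo = &secondaryLayerInfo;
    // Chain secondaryEndInfo onto XrFrameEndInfo.next before calling xrEndFrame.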