Version: 20 Mar 2024

Remote Rendering with OpenXR

Magic Leap 2 offers remote rendering capabilities through OpenXR. OpenXR helps address application compatibility issues across devices by exposing a set of standardized APIs that allow your application to run on any system that works with OpenXR. This guide provides an overview of common features you may want to use, and best practices for working with them through OpenXR. Topics covered include:

  • Depth
  • Environment Blending
  • Support for XR Controllers
note

Some suggestions in this guide are optional in the OpenXR specification, but following them helps you get optimal performance for remote rendering.

Depth

You can provide depth information to the runtime with the XR_KHR_composition_layer_depth extension, which allows your application to submit depth buffers from its renderer alongside each projection layer.
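As a minimal sketch, the extension works by chaining an XrCompositionLayerDepthInfoKHR structure onto a projection view's `next` pointer. This assumes the extension was enabled at instance creation, and that `depthSwapchain`, `renderWidth`, and `renderHeight` (hypothetical names) come from your renderer:

```c
// Depth info chained onto the color projection view (sketch).
XrCompositionLayerDepthInfoKHR depthInfo = {
    .type = XR_TYPE_COMPOSITION_LAYER_DEPTH_INFO_KHR,
    .next = NULL,
    .subImage = {
        .swapchain = depthSwapchain,            // depth swapchain from your renderer
        .imageRect = {{0, 0}, {renderWidth, renderHeight}},
        .imageArrayIndex = 0,
    },
    .minDepth = 0.0f,
    .maxDepth = 1.0f,
    .nearZ = 0.1f,   // near clip plane used by your projection matrix
    .farZ = 100.0f,  // far clip plane used by your projection matrix
};

XrCompositionLayerProjectionView projectionView = {
    .type = XR_TYPE_COMPOSITION_LAYER_PROJECTION_VIEW,
    .next = &depthInfo,  // chain the depth info onto the color view
    // .pose, .fov, and .subImage filled from xrLocateViews and your color swapchain
};
```

The near/far values here must match the planes your renderer actually used, or the runtime will reconstruct depth incorrectly.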

Environment Blending

After the compositor blends and flattens all layers, including layers from the system, the image is presented to the Magic Leap 2 headset's display. When possible, use ALPHA_BLEND as the blend mode to submit frames, and set the alpha portion of the RGBA image as appropriate for your headset.

When setting alpha for your image, the runtime expects the alpha to be premultiplied. An alpha value of 0 results in no segmented dimming but does not reduce the brightness of the RGB image, creating a blend of real and virtual. Larger values increase both the strength of the segmented dimming and the opacity in camera capture, up to full strength. If you want simple dimming at full strength, set alpha to the maximum value.
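A sketch of submitting a frame this way, assuming `session`, a projection `layer`, and `predictedDisplayTime` (hypothetical names) come from your existing render loop:

```c
// Premultiplied alpha is the default interpretation when the source-alpha
// blend bit is set; add XR_COMPOSITION_LAYER_UNPREMULTIPLIED_ALPHA_BIT only
// if your renderer outputs straight (non-premultiplied) alpha.
layer.layerFlags = XR_COMPOSITION_LAYER_BLEND_TEXTURE_SOURCE_ALPHA_BIT;

const XrCompositionLayerBaseHeader* layers[] = {
    (const XrCompositionLayerBaseHeader*)&layer,
};

XrFrameEndInfo endInfo = {
    .type = XR_TYPE_FRAME_END_INFO,
    .displayTime = predictedDisplayTime,
    .environmentBlendMode = XR_ENVIRONMENT_BLEND_MODE_ALPHA_BLEND,
    .layerCount = 1,
    .layers = layers,
};
xrEndFrame(session, &endInfo);
```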

You can read more about environment blend modes in section 10.4.6 Environment Blend Mode of the OpenXR Specification.

Support for XR Controllers

Magic Leap 2 has its own controller profile. However, you can also access it using the Simple Controller profile.

You can use the sample code to get started. In general, you define explicit actions your user can perform, for example 'Teleport,' and then bind each action to a button on the controller.
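The action-and-binding flow can be sketched as follows, using a hypothetical "teleport" action bound to the select button through the Simple Controller profile. `instance` and `actionSet` are assumed to already exist:

```c
// Create a boolean action for teleporting (sketch; names are illustrative).
XrActionCreateInfo actionInfo = {
    .type = XR_TYPE_ACTION_CREATE_INFO,
    .actionType = XR_ACTION_TYPE_BOOLEAN_INPUT,
};
strcpy(actionInfo.actionName, "teleport");
strcpy(actionInfo.localizedActionName, "Teleport");
XrAction teleportAction;
xrCreateAction(actionSet, &actionInfo, &teleportAction);

// Suggest a binding: teleport -> right controller's select click.
XrPath profilePath, selectPath;
xrStringToPath(instance, "/interaction_profiles/khr/simple_controller", &profilePath);
xrStringToPath(instance, "/user/hand/right/input/select/click", &selectPath);

XrActionSuggestedBinding binding = {teleportAction, selectPath};
XrInteractionProfileSuggestedBinding suggested = {
    .type = XR_TYPE_INTERACTION_PROFILE_SUGGESTED_BINDINGS,
    .interactionProfile = profilePath,
    .countSuggestedBindings = 1,
    .suggestedBindings = &binding,
};
xrSuggestInteractionProfileBindings(instance, &suggested);
```

The same pattern applies to the Magic Leap 2 controller profile; only the interaction profile path and the available input paths change.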

If you need the 3D position of the controller, create an Action Space for a pose action and map it to the controller. Then query that space (using the same methods as you would for any XrSpace) to obtain the controller's location.
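A minimal sketch of that pattern, assuming `session`, a base reference space `baseSpace`, a pose action `aimPoseAction` already bound to a controller pose input, and `predictedDisplayTime` from your frame loop (all hypothetical names):

```c
// Create an action space for the controller pose action (sketch).
XrActionSpaceCreateInfo spaceInfo = {
    .type = XR_TYPE_ACTION_SPACE_CREATE_INFO,
    .action = aimPoseAction,
    .subactionPath = XR_NULL_PATH,
    .poseInActionSpace = {.orientation = {0, 0, 0, 1}},  // identity pose
};
XrSpace controllerSpace;
xrCreateActionSpace(session, &spaceInfo, &controllerSpace);

// Each frame, after xrSyncActions, locate it like any other XrSpace.
XrSpaceLocation location = {.type = XR_TYPE_SPACE_LOCATION};
xrLocateSpace(controllerSpace, baseSpace, predictedDisplayTime, &location);
if (location.locationFlags & XR_SPACE_LOCATION_POSITION_VALID_BIT) {
    XrVector3f position = location.pose.position;  // controller position
}
```

Always check the `locationFlags` bits before using the pose; the runtime may report an invalid or untracked pose when the controller loses tracking.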