Secondary View Configuration
Previously, you needed to specify a per-frame focus distance within your application to produce accurate video captures on the Magic Leap 2 (ML2). This can be tricky when the user focuses on objects at different distances, and can cause video captures to show misalignment between real-world and virtual objects. With the Secondary Views extension, client apps can now render a dedicated pass for the ML2 RGB camera to overcome this issue.
With Secondary Views, you can record high-quality video captures with accurate alignment between physical and virtual pixels on your ML2 device. Key changes you can expect:
- No more cropping artifacts when recording MR capture in 4:3 aspect ratios.
- No more alignment offsets when recording MR capture of applications that use hand tracking, controller tracking, or marker tracking.
- Pixel registration in MR capture is no longer sensitive to focus distance, although focus-distance stabilization best practices are still recommended.
How to Enable Secondary Views (Native Level Support)
At the native level (OpenXR and C/C++), secondary views are enabled by requesting both the XR_MSFT_secondary_view_configuration and XR_MSFT_first_person_observer extensions during OpenXR application setup. The application is then expected to:
- Query the secondary view state each frame.
- When secondary views become active, create a new swapchain with the runtime's recommended resolution.
- Submit secondary views in addition to primary views on every end-frame call while they remain active.
- When the secondary view state changes to inactive, return to the default behavior of submitting only primary views on the end-frame call.
The SDK OpenXR samples now support Secondary Views, and the modifications made to XRAF can be used as a reference implementation for apps adding support for the feature.
Cost
Secondary Views is expected to incur a cost mostly in the application's graphics processing unit (GPU) frame budget. The cost is proportional to scene complexity, as the feature requires an additional rendering pass, this time from the point of view of the headset's RGB camera.
For applications already close to maxing out the recommended per-frame GPU budget, the recommendation is to temporarily reduce the surface scale of the primary views (via the viewport configuration) while capture is active. This offsets the GPU overhead of the extra rendering pass at the expense of image sharpness.
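One way to pick a reduced surface scale is to treat pixel cost as roughly quadratic in the scale factor and solve for the scale that keeps the total within budget. The helper below is a minimal sketch of that arithmetic; the function name, the quadratic cost model, and the 0.5 lower clamp are assumptions for illustration, not Magic Leap guidance.

```cpp
#include <algorithm>
#include <cmath>

// Hypothetical heuristic: choose a primary-view surface scale while capture
// is active so the total render cost stays within the frame's GPU budget.
//
// gpuUtilization: current fraction of the frame budget in use (e.g. 0.9).
// secondaryCost:  estimated extra fraction added by the RGB-camera pass.
//
// Pixel cost scales roughly with scale^2, so we solve
//   gpuUtilization * scale^2 + secondaryCost <= 1.0
// for scale, clamped to [0.5, 1.0] to avoid degrading sharpness too far.
double captureSurfaceScale(double gpuUtilization, double secondaryCost) {
    const double headroom = 1.0 - secondaryCost;
    if (headroom <= 0.0 || gpuUtilization <= 0.0) {
        return 0.5;  // fall back to the illustrative floor
    }
    const double scale = std::sqrt(headroom / gpuUtilization);
    return std::clamp(scale, 0.5, 1.0);
}
```

For example, an app using its full budget (utilization 1.0) that expects the secondary pass to add 19% would drop the primary surface scale to about 0.9, restoring it to 1.0 once capture ends.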