Content Placement Strategies
The Magic Leap 2 has a variety of sensors and tracking capabilities that enable experiences where virtual content appears to stick and behave as though it were part of the user's physical environment. As a developer of XR applications, you have the challenging task of leveraging the many features and technologies exposed by the platform to build experiences that work as well as possible on the hardware for your particular use case. Incorporating the user's physical environment into your application is both a technical and a design challenge that is unique to XR platforms where the user can see their physical surroundings. You'll need to understand what technologies are available, and the tradeoffs between them, to design experiences that blend virtual objects with physical spaces as seamlessly as possible.
Import and Export Spaces
Learn how to import and export maps between devices using ADB.
Marker Tracker API Overview
This section provides an overview of the marker tracker API and references for creating a custom Marker Tracking script.
Marker Tracker Events Example
The Magic Leap 2's marker tracker API is lightweight and can be extended to suit your application's needs. This section provides an example of extending the API to broadcast an event when a marker is found, lost, or updated, based on the amount of time that has passed since the marker was last updated.
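As a rough sketch of that pattern, the component below keeps a per-marker timestamp and raises found, updated, and lost events. The class name MarkerEventBroadcaster, the ReportDetection entry point, and the lostTimeout value are all hypothetical and not part of the Magic Leap SDK; your own marker tracking script would call ReportDetection whenever the API reports a marker.

```csharp
using System;
using System.Collections.Generic;
using UnityEngine;

// Hypothetical helper: your marker tracking script calls ReportDetection()
// each time the tracker returns data for a marker. This component then raises
// Found / Updated / Lost events based on how much time has passed since the
// marker was last reported.
public class MarkerEventBroadcaster : MonoBehaviour
{
    // Seconds without an update before a marker is considered lost (assumed value).
    [SerializeField] private float lostTimeout = 0.5f;

    public event Action<string, Pose> OnMarkerFound;
    public event Action<string, Pose> OnMarkerUpdated;
    public event Action<string> OnMarkerLost;

    private readonly Dictionary<string, float> _lastSeen = new Dictionary<string, float>();

    // Called by your marker tracking script whenever the API reports a marker.
    public void ReportDetection(string markerId, Pose pose)
    {
        if (!_lastSeen.ContainsKey(markerId))
            OnMarkerFound?.Invoke(markerId, pose);
        else
            OnMarkerUpdated?.Invoke(markerId, pose);

        _lastSeen[markerId] = Time.time;
    }

    private void Update()
    {
        // Any marker not reported within the timeout is treated as lost.
        var lost = new List<string>();
        foreach (var entry in _lastSeen)
        {
            if (Time.time - entry.Value > lostTimeout)
                lost.Add(entry.Key);
        }

        foreach (var markerId in lost)
        {
            _lastSeen.Remove(markerId);
            OnMarkerLost?.Invoke(markerId);
        }
    }
}
```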
Marker Tracker Example
This section includes an example of detecting Fiducial Markers on the Magic Leap 2 headset.
Marker Understanding API Overview
This section provides an overview and API references for the Magic Leap 2 Marker Understanding OpenXR Feature.
Marker Understanding Example
This section includes an example of detecting Fiducial Markers on the Magic Leap 2 headset.
Meshing
Meshing is the creation of triangle-based meshes from the World Reconstruction model created by Magic Leap devices. The mesh is used for real-time occlusion rendering and for collision detection with digital content.
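As a small illustration of the collision-detection use case, the sketch below checks a candidate spawn point against the reconstructed mesh before placing digital content. It assumes your meshing setup generates MeshColliders on a dedicated layer; the component and field names are illustrative, not part of the SDK.

```csharp
using UnityEngine;

// Illustrative example of using the world mesh for collision detection:
// before spawning a digital object, check that the spawn location does not
// intersect the reconstructed mesh. Assumes your meshing setup generates
// MeshColliders and places them on the layer selected below.
public class SpawnCollisionCheck : MonoBehaviour
{
    [SerializeField] private LayerMask worldMeshLayer;       // layer used by the generated mesh
    [SerializeField] private float clearanceRadius = 0.15f;  // rough object radius (assumed)

    // Returns true when the position is free of real-world geometry.
    public bool IsSpawnPointClear(Vector3 position)
    {
        return !Physics.CheckSphere(position, clearanceRadius, worldMeshLayer);
    }
}
```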
Meshing
Learn how to create meshes from surfaces detected by the Magic Leap.
Meshing Query Location Example
Learn how to adjust the mesh query's location and bounds.
Meshing Subsystem Component Overview
Learn how the Meshing Subsystem component works and what features it provides.
Meshing Visualizer Example
Learn to adjust the appearance of the mesh generated by the Magic Leap 2.
Plane Classification
Learn how the Magic Leap classifies surfaces.
Plane Detection
Plane extraction enables applications to extract rectangular planar regions from the World Reconstruction model. Plane candidates are returned as simple geometric rectangles.
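As an illustration, a Unity project using AR Foundation's plane subsystem could listen for detected planes roughly as in the sketch below. It assumes an ARPlaneManager is present in the scene (for example on the XR rig) and uses AR Foundation 5.x event names; the component name is illustrative.

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Illustrative example: logs the rectangular plane candidates that the
// plane subsystem has detected so far.
public class PlaneLogger : MonoBehaviour
{
    [SerializeField] private ARPlaneManager planeManager;

    private void OnEnable()
    {
        // planesChanged reports planes added, updated, or removed since the last frame.
        planeManager.planesChanged += OnPlanesChanged;
    }

    private void OnDisable()
    {
        planeManager.planesChanged -= OnPlanesChanged;
    }

    private void OnPlanesChanged(ARPlanesChangedEventArgs args)
    {
        foreach (ARPlane plane in args.added)
        {
            // Each plane is a simple rectangle described by a center pose,
            // a 2D size, and an optional classification (wall, floor, table, ...).
            Debug.Log($"New plane {plane.trackableId}: " +
                      $"size {plane.size}, classification {plane.classification}");
        }
    }
}
```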
Plane Detection
Learn how to detect surfaces using the Magic Leap SDK.
Query Planes
Learn how to query planes that were detected by the Magic Leap.
Real-time World Sensing
Magic Leap 2's Meshing and Plane Finding APIs provide facilities for applications to understand the real-world surfaces around the user in real time using the device's time-of-flight depth sensor. Apps can use this representation to occlude rendering, place digital objects, and drive physics-based interactions or ray-casting. The depth sensor has a resolution of 544 x 480 and a field of view of 75° (h) x 70° (v).
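The sketch below shows one way such a representation might be used for placement: a ray cast from the head pose against the world-mesh colliders, with content instantiated at the hit point. It assumes the generated mesh has MeshColliders on a dedicated layer; the component, prefab, and layer names are illustrative, and the button check stands in for your project's input handling.

```csharp
using UnityEngine;

// Illustrative placement helper: ray cast from the user's head pose against
// the colliders generated for the world mesh, and place a prefab on the
// surface that was hit.
public class MeshPlacement : MonoBehaviour
{
    [SerializeField] private Camera headCamera;         // main camera / head pose
    [SerializeField] private GameObject contentPrefab;  // object to place (assumed)
    [SerializeField] private LayerMask meshLayerMask;   // layer used by the world mesh

    private void Update()
    {
        // Place content where the user is looking when they press a button.
        // Input handling is simplified; use your project's input actions instead.
        if (!Input.GetMouseButtonDown(0))
            return;

        Ray gazeRay = new Ray(headCamera.transform.position, headCamera.transform.forward);

        if (Physics.Raycast(gazeRay, out RaycastHit hit, 10f, meshLayerMask))
        {
            // Align the placed object with the surface normal of the real-world mesh.
            Quaternion rotation = Quaternion.FromToRotation(Vector3.up, hit.normal);
            Instantiate(contentPrefab, hit.point, rotation);
        }
    }
}
```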
Spaces Application
Learn how to use the Spaces app to create spatial maps and localize into them.
Spatial Anchors
Learn about Spatial Anchors and how they can be used inside your application.
Spatial Anchors
Learn how to link objects to Spatial Anchors and use them to create persistent content.
Spatial Anchors API
Learn the core API calls required to implement Magic Leap 2's Spatial Anchors API.
Spatial Anchors Callbacks
Learn how to get notified when spatial anchors are created, added or removed.
Spatial Anchors Examples
Contains code that can be used as a reference or to demo Magic Leap 2's Spatial Anchors functionality.
Spatial Anchors Overview
Learn how to link objects to Spatial Anchors and use them to create persistent content.
Spatial Mapping & Localization
The Magic Leap 2 uses its three world cameras and depth camera to map and localize itself in the physical world. Device sensors scan the Magic Leap 2 user's environment, process that information, and use it to create a digital representation of the physical world. The device uses depth, distinct features within the environment, clear planes like walls, ceilings, and floors, and architectural occlusion to build a map of the real-world environment and then determine the device's position and orientation within that environment (called 'localization'). Apps can use this representation to place digital objects that may persist across sessions and to occlude rendering.
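As a minimal sketch of tying content to that digital representation, the component below attaches an ARAnchor to an object after it has been placed, assuming an ARAnchorManager is present in the scene (for example on the XR rig); the class name is illustrative.

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Illustrative sketch: lock a placed object to the mapped environment by
// attaching an ARAnchor component to it, so the anchor subsystem keeps its
// pose tied to the tracked world as the map is refined.
public class AnchorPlacedContent : MonoBehaviour
{
    // Call this after you have positioned the object on a real-world surface.
    public void LockToWorld()
    {
        if (GetComponent<ARAnchor>() == null)
        {
            gameObject.AddComponent<ARAnchor>();
        }
    }
}
```

Anchoring like this stabilizes content within the current session; persisting it across sessions relies on the Spatial Anchors workflow described above.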