
Spatial Mapping & Localization

The Magic Leap 2 uses its three world cameras and its depth camera to map and localize itself in the physical world. The device's sensors scan the user's environment, process that information, and use it to create a digital representation of the physical world. The Magic Leap 2 uses depth data, distinctive features within the environment, large planes such as walls, ceilings, and floors, and occluding architectural geometry to build a map of the real-world environment, and then determines the device's position and orientation within that environment (called 'localization'). Apps can use this representation to place digital objects that persist across sessions and to occlude rendered content behind real-world surfaces.

Developers can leverage spatial maps to place digital content so that it appears anchored in physical space. Map data can be used to simulate physics-based interactions with the user's physical environment and to drive digital character path planning and navigation. Map data can even be shared across multiple Magic Leap devices to enable co-located multi-user experiences in which all users share the same virtual map of the same physical space.

Basic Head Tracking Using the Session Map

The Magic Leap 2 device and platform are designed for 6-DoF experiences in which virtual content appears to 'stick' to, and behave as though it were part of, the user's world. Even if an application does not require its content to interact directly with surfaces in the user's physical space, it is still expected to render content relative to the user's physical world. To enable this, the Magic Leap head tracker runs continuously, even if the device is not localized into an existing spatial map, called a Space.

6-DoF

The term '6-DoF' refers to having six "degrees of freedom". In a 3D space, an object can translate up/down, left/right, and forward/backward. It can also rotate by 'pitching' up/down, 'yawing' left/right, and 'rolling' around the direction it is facing. Together, these six possible transformations make up the six degrees of freedom of movement.
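
As a concrete illustration, OpenXR (one of the native APIs available on the ML2) represents a 6-DoF pose as an XrPosef, pairing a position vector (the three translational degrees) with an orientation quaternion (the three rotational degrees):

```cpp
#include <openxr/openxr.h>

// A 6-DoF pose pairs a 3-DoF position (up/down, left/right, forward/backward)
// with a 3-DoF orientation (pitch, yaw, roll, encoded as a quaternion).
XrPosef examplePose{
    /*orientation=*/{0.0f, 0.0f, 0.0f, 1.0f},  // quaternion (x, y, z, w): identity rotation
    /*position=*/{0.0f, 1.6f, -2.0f}           // meters: 1.6 m up and 2 m forward (-Z) of the origin
};
```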

The Magic Leap 2 continuously tracks the location of the user's head relative to their environment. The user is free to move around their space, and the tracker produces 6-DoF spatial poses that reflect the current position and rotation of their head. This allows applications to reason about the location of their content relative to the user's physical environment, even though the ML2 display moves continuously because it is rigidly attached to the user's head.
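
A minimal OpenXR sketch of reading the current head pose, assuming viewSpace and localSpace were created earlier with xrCreateReferenceSpace (VIEW and LOCAL reference space types) and that predictedDisplayTime comes from xrWaitFrame for the frame being rendered (error handling omitted):

```cpp
#include <openxr/openxr.h>

// Query the current 6-DoF head pose relative to the LOCAL reference space.
XrPosef GetHeadPose(XrSpace viewSpace, XrSpace localSpace, XrTime predictedDisplayTime)
{
    XrSpaceLocation location{XR_TYPE_SPACE_LOCATION};
    xrLocateSpace(viewSpace, localSpace, predictedDisplayTime, &location);

    const XrSpaceLocationFlags kValid =
        XR_SPACE_LOCATION_POSITION_VALID_BIT | XR_SPACE_LOCATION_ORIENTATION_VALID_BIT;
    if ((location.locationFlags & kValid) == kValid) {
        return location.pose;  // 6-DoF pose of the head, expressed in the LOCAL space
    }
    return XrPosef{{0, 0, 0, 1}, {0, 0, 0}};  // identity fallback when the pose is not valid
}
```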

Each time the ML2 is powered on, a temporary spatial map is automatically created so the ML2 can determine its 6-DoF position within the area. This temporary spatial map, called the session map, exists until the device is powered down. The session map itself is not directly exposed to developers through platform APIs. Rather, it exists to let applications reason about the location of the Magic Leap device relative to a frame of reference that appears visually stable relative to the physical environment.

Note that the origin of the session map is determined by the physical location of the ML2 when it was powered on. Applications that do not make use of Spaces should nevertheless treat this origin as a stable but arbitrary location in the environment. Even if the origin was, at some point in time, visible to the user, the user may have moved or turned away from it before launching an application. Additionally, if head tracking is lost (e.g., the user enters a dark room) and cannot recover, a new session map will eventually be created and the origin will move to a new location. For these reasons, such applications should render content relative either to the current location of the Magic Leap device or to tracked spatial input sources such as the ML2 controller or the user's hands.
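
For example, an application that does not use Spaces might place content at a fixed distance in front of the user's current head pose (e.g., captured at application start) instead of at the session-map origin. The helpers below are illustrative, not platform APIs:

```cpp
#include <openxr/openxr.h>

// Rotate a vector by a unit quaternion: v' = v + w*t + cross(q.xyz, t), t = 2*cross(q.xyz, v).
static XrVector3f Rotate(const XrQuaternionf& q, const XrVector3f& v)
{
    const XrVector3f t{2.0f * (q.y * v.z - q.z * v.y),
                       2.0f * (q.z * v.x - q.x * v.z),
                       2.0f * (q.x * v.y - q.y * v.x)};
    return XrVector3f{v.x + q.w * t.x + (q.y * t.z - q.z * t.y),
                      v.y + q.w * t.y + (q.z * t.x - q.x * t.z),
                      v.z + q.w * t.z + (q.x * t.y - q.y * t.x)};
}

// Place content 1.5 m in front of the user's current head pose, rather than
// relying on the arbitrary session-map origin. -Z is "forward" in OpenXR.
XrPosef PlaceInFrontOfHead(const XrPosef& headPose)
{
    const XrVector3f forward = Rotate(headPose.orientation, XrVector3f{0.0f, 0.0f, -1.0f});
    XrPosef content = headPose;
    content.position.x += 1.5f * forward.x;
    content.position.y += 1.5f * forward.y;
    content.position.z += 1.5f * forward.z;
    return content;
}
```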

Spaces

It is also possible to create a map of an environment that persists across usage sessions. Persisted maps are referred to as 'Spaces'. When a Space is created, a 3D mesh of the user's environment is generated that can be used to occlude virtual content and to simulate physical interactions with it. Spaces include the map data required to recall a consistent origin for the map. Other frames of reference of interest, called 'Spatial Anchors', can be persisted in the Space as well.

A user must create and map a Space using the Spaces application, included in the Magic Leap operating system, before it can be used by applications. Spaces cannot be created programmatically. If an application requires the user to be localized into a Space, it can launch the Spaces application using an Android Intent.
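
A minimal native (JNI) sketch of launching the Spaces application is shown below. The JavaVM and activity object are assumed to come from the NativeActivity glue (android_app->activity->vm / ->clazz), and the intent action string is a placeholder assumption; consult the Spaces application documentation for the exact action and any required extras. Error handling is omitted.

```cpp
#include <jni.h>

// Launch the system Spaces application from a native Android app so the user
// can create or localize into a Space.
void LaunchSpacesApp(JavaVM* vm, jobject activity)
{
    JNIEnv* env = nullptr;
    vm->AttachCurrentThread(&env, nullptr);

    jclass intentClass = env->FindClass("android/content/Intent");
    jmethodID intentCtor = env->GetMethodID(intentClass, "<init>", "(Ljava/lang/String;)V");

    // Placeholder action string -- replace with the documented Spaces intent action.
    jstring action = env->NewStringUTF("com.magicleap.intent.action.SELECT_SPACE");
    jobject intent = env->NewObject(intentClass, intentCtor, action);

    jclass activityClass = env->GetObjectClass(activity);
    jmethodID startActivity =
        env->GetMethodID(activityClass, "startActivity", "(Landroid/content/Intent;)V");
    env->CallVoidMethod(activity, startActivity, intent);

    env->DeleteLocalRef(action);
    env->DeleteLocalRef(intent);
    env->DeleteLocalRef(intentClass);
    env->DeleteLocalRef(activityClass);
}
```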

When a user opens the Spaces application, they are presented with the option to create either a local or a shared Space. Local Spaces are stored on the user's Magic Leap device, are limited to a single scan of around 250 m² in total area, and require no network connection to create or to localize into at runtime. Local Spaces are not automatically shared with other ML2 users, but they can be exported from one device and imported for use on another; see Import and Export Spaces for more details. Shared Spaces require a constant connection to an AR Cloud server to create and localize into, can be much larger (up to 10,000 m²), and are automatically made available to all devices connected to the same server; see AR Cloud for more details.

When a new Space is created, it must first be scanned. The user is asked to walk through their Space, following visual cues provided by the Spaces app, to construct the map. After a new Space has been created, the scanned mesh is stored on the device and can be exported as a static mesh for use in other applications.

Note: the exported mesh does not update dynamically at runtime and does not interact directly with the device's Meshing and Plane Finding subsystems.

As the Magic Leap 2 scans the physical world, it records key views and feature points. Key views are 6-DoF camera poses captured during the scanning/mapping process. Feature points are distinctive features found in the physical environment, such as wall or floor textures or the corners of doorways and tables. Feature points are converted into a 3D point cloud. The more key views that are created, and the more features each key view observes, the higher the quality of the resulting map and the more likely that localization will remain stable as users move through their environment at runtime.
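
The structures below are purely illustrative (they are not the platform's internal map format); they only make the relationship between key views, feature points, and map quality concrete.

```cpp
#include <cstdint>
#include <vector>

// A feature point is a distinctive visual feature lifted into the map's 3D point cloud.
struct FeaturePoint {
    float x, y, z;          // position in the map's coordinate frame (meters)
    uint64_t descriptorId;  // identifies the distinctive visual feature
};

// A key view is a 6-DoF camera pose plus the feature points it observed.
struct KeyView {
    float position[3];              // translation component of the 6-DoF pose
    float orientation[4];           // rotation component (quaternion x, y, z, w)
    std::vector<uint32_t> observed; // indices into the map's feature point cloud
};

struct IllustrativeMap {
    std::vector<FeaturePoint> pointCloud;
    std::vector<KeyView> keyViews;

    // Rough intuition: more key views, each observing more features,
    // generally yields more robust localization.
    double QualityScore() const {
        double sum = 0.0;
        for (const KeyView& kv : keyViews) sum += static_cast<double>(kv.observed.size());
        return sum;  // total feature observations across all key views
    }
};
```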

Localization

Localization is the process by which the Magic Leap 2 determines where it is within the bounds of a Space. The device can only be localized into one Space at a time. When the Magic Leap device is localized into a Space, applications can render content relative to a consistent world origin or to Spatial Anchors. Note that when localized into a Space, the origins of the standard OpenXR reference spaces (LOCAL, STAGE, LOCAL_FLOOR) are still located at arbitrary locations independent of the Space's map. Applications can use the LOCALIZATION_MAP reference space to render content relative to the origin of the Space the device is currently localized into. Once localized, the device attempts to relocalize itself every 10 seconds to correct for drift.
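
For OpenXR applications, reference spaces are created with xrCreateReferenceSpace. The sketch below creates a core LOCAL space; the localization-map reference space mentioned above comes from Magic Leap's XR_ML_localization_map extension, and the exact enum spelling shown in the comment should be verified against the OpenXR headers you build with.

```cpp
#include <openxr/openxr.h>

// Create a LOCAL reference space: a world-locked frame with an arbitrary origin.
XrSpace CreateLocalSpace(XrSession session)
{
    XrReferenceSpaceCreateInfo info{XR_TYPE_REFERENCE_SPACE_CREATE_INFO};
    info.referenceSpaceType = XR_REFERENCE_SPACE_TYPE_LOCAL;
    info.poseInReferenceSpace = {{0, 0, 0, 1}, {0, 0, 0}};  // identity
    XrSpace space = XR_NULL_HANDLE;
    xrCreateReferenceSpace(session, &info, &space);
    return space;
}

// With the XR_ML_localization_map extension enabled, the same call can target
// the origin of the Space the device is localized into, e.g.:
//   info.referenceSpaceType = XR_REFERENCE_SPACE_TYPE_LOCALIZATION_MAP_ML;
```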

ML2 users can choose a Space to localize into using the Spaces application included in the Magic Leap OS. Applications can also use the space management APIs to query for available Spaces and request localization into one of them programmatically. Note that localization is global: once the device has been localized into a particular Space, it remains localized as the user continues to use the device and starts different applications. If the device was localized into a Space when it was shut down, it will attempt to relocalize into the same Space on boot.
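
The exact entry points for programmatic Space queries and localization requests are platform-specific (on the ML2 they are exposed through the space management APIs, e.g. the XR_ML_localization_map OpenXR extension), so the sketch below only shows the application-level flow; SpaceInfo, queryAvailableSpaces, and requestLocalization are hypothetical stand-ins, not real API names.

```cpp
#include <functional>
#include <optional>
#include <string>
#include <vector>

// Hypothetical description of a Space as returned by a space query.
struct SpaceInfo {
    std::string uuid;  // identifies the Space
    std::string name;  // user-visible name given in the Spaces app
    bool isShared;     // true for AR Cloud (shared) Spaces, false for local ones
};

// Localize into a Space by name, if one is available. The query and request
// callables stand in for the platform calls. Localization is global: once it
// succeeds, it stays in effect across applications until changed again.
std::optional<SpaceInfo> LocalizeInto(
    const std::string& wantedName,
    const std::function<std::vector<SpaceInfo>()>& queryAvailableSpaces,
    const std::function<bool(const std::string& uuid)>& requestLocalization)
{
    for (const SpaceInfo& space : queryAvailableSpaces()) {
        if (space.name == wantedName && requestLocalization(space.uuid)) {
            return space;
        }
    }
    return std::nullopt;  // no matching Space, or the request was rejected
}
```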

Localization Troubleshooting

Localization can fail at runtime. In many cases, the user merely needs to turn around and stand up. Several environmental factors can contribute to localization failures:

  • The room may be too dark.
  • There may not be enough unique material or architectural features within view (e.g., the user is looking at a blank, glass, glossy, or solid-colored wall with very few contrasting elements).
  • The user may not be looking around, so the ML device cannot find enough unique features to determine where in the Space it is located.
  • The user may be moving their head too quickly.
  • The user may be looking at a moving crowd, such as those found in convention centers.

Note that a failure to localize into a Space is distinct from head tracking loss. If a Space localization update fails, the pose of the map origin and of any spatial anchors will not be updated to correct for drift on that frame. However, head tracking continues to run, and the device may still be able to localize itself in the session map as described above (e.g., if the user walks outside the bounds of the Space's map). When head tracking loss occurs, the Magic Leap OS interrupts the user's experience and displays a tracking-loss message, since there is no way an application could know how to render content relative to the user's environment. When a localization update fails, it may not be immediately apparent to the user that anything is wrong if localization previously succeeded. Applications may choose how they wish to handle localization update failures.
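
For applications that want to reason about head tracking quality themselves, core OpenXR exposes validity and tracked flags when locating the head (VIEW) space. A minimal sketch, reusing the spaces created in the earlier examples; how an application reacts to a failed Space localization update remains up to the application.

```cpp
#include <openxr/openxr.h>

enum class TrackingState { Lost, ValidButNotTracked, Tracked };

// Classify head tracking using core OpenXR location flags: "no usable pose"
// versus "pose is valid but may be stale or inferred" versus "actively tracked".
TrackingState ClassifyHeadTracking(XrSpace viewSpace, XrSpace baseSpace, XrTime time)
{
    XrSpaceLocation location{XR_TYPE_SPACE_LOCATION};
    xrLocateSpace(viewSpace, baseSpace, time, &location);

    const XrSpaceLocationFlags flags = location.locationFlags;
    const bool valid   = (flags & XR_SPACE_LOCATION_POSITION_VALID_BIT) &&
                         (flags & XR_SPACE_LOCATION_ORIENTATION_VALID_BIT);
    const bool tracked = (flags & XR_SPACE_LOCATION_POSITION_TRACKED_BIT) &&
                         (flags & XR_SPACE_LOCATION_ORIENTATION_TRACKED_BIT);

    if (!valid) return TrackingState::Lost;                  // nothing to render against
    if (!tracked) return TrackingState::ValidButNotTracked;  // pose may be stale or inferred
    return TrackingState::Tracked;
}
```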

Spatial Anchors

Spatial anchors are frames of reference of interest that can be persisted in a Space. All content that is meant to appear physically anchored in the user's environment should be rendered relative to a spatial anchor; rendering such content relative to the map origin is not recommended. Shared maps (created using AR Cloud) can be updated and expanded at runtime. When a shared map changes, the poses of any anchors persisted in the map are updated so that they remain in the same physical location as before the update.

When the device is localized into a Space, an application can create a spatial anchor anywhere within the bounds of its map. When the pose of the map origin is updated to correct for drift, the poses of all spatial anchors are updated accordingly. When localization updates occur at runtime, the origin and all anchors are re-posed such that content close to the ML2 device appears as visually stable as possible. For this reason, it is recommended that applications create anchors close to the ML2. See Spatial Anchors for more details.
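
In practice, "rendering relative to an anchor" usually means storing content as a fixed local offset from the anchor and recombining that offset with the anchor's current pose every frame. The helpers below are illustrative (core OpenXR does not ship pose math) and assume the anchor's pose was obtained by locating the anchor's XrSpace; if the anchor is re-posed during a localization update, the content follows it automatically.

```cpp
#include <openxr/openxr.h>

// Hamilton product of two quaternions (a then b, applied right-to-left).
static XrQuaternionf QuatMul(const XrQuaternionf& a, const XrQuaternionf& b)
{
    return XrQuaternionf{a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y,
                         a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x,
                         a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w,
                         a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z};
}

// Rotate a vector by a unit quaternion.
static XrVector3f QuatRotate(const XrQuaternionf& q, const XrVector3f& v)
{
    const XrVector3f t{2.0f * (q.y * v.z - q.z * v.y),
                       2.0f * (q.z * v.x - q.x * v.z),
                       2.0f * (q.x * v.y - q.y * v.x)};
    return XrVector3f{v.x + q.w * t.x + (q.y * t.z - q.z * t.y),
                      v.y + q.w * t.y + (q.z * t.x - q.x * t.z),
                      v.z + q.w * t.z + (q.x * t.y - q.y * t.x)};
}

// worldPose(content) = anchorPose * localOffset
XrPosef PoseFromAnchor(const XrPosef& anchorPose, const XrPosef& localOffset)
{
    XrPosef world;
    world.orientation = QuatMul(anchorPose.orientation, localOffset.orientation);
    const XrVector3f r = QuatRotate(anchorPose.orientation, localOffset.position);
    world.position = {anchorPose.position.x + r.x,
                      anchorPose.position.y + r.y,
                      anchorPose.position.z + r.z};
    return world;
}
```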