Version: 20 Jan 2025

Content Placement Strategies

The Magic Leap 2 has a variety of sensors and tracking capabilities that enable the creation of experiences where virtual content appears to stick and behave as though it were part of the user’s physical environment. As a developer of XR applications, you have the challenging task of leveraging the myriad features and technologies exposed by the platform to build experiences that work as well as possible on the hardware, given your particular use case. Incorporating the user’s physical environment into your application is both a technical and a design challenge that is unique to developing for XR platforms where the user is able to see their physical surroundings. You’ll need to understand what technologies are available, and the tradeoffs between them, to effectively design experiences that blend virtual objects with physical spaces as seamlessly as possible.

Considerations

Just as you would when designing any system, you’ll need to identify the requirements of your application to make decisions about what features to leverage. As mentioned above, XR presents a new medium for experience design where you’ll need to think about how users and your content will interact with physical environments. This is a non-trivial concern and it's a novel one for developers accustomed to more traditional platforms where all content is rendered on a 2D display. You don’t have control over the physical environment where users will interact with your application. It may also change at any time as users remain free to move about their space. In many cases, ‘how to place virtual objects in space’ ends up being the most challenging problem application designers must consider when building for an optical see-through XR platform like Magic Leap.

Before choosing which features to implement, consider the requirements of your application. For example:

  • What sort of environment are users in? How large is the space? How far can people move around?
  • What kinds of surfaces are in the space? What are the lighting conditions?
  • Does the application need to know anything about sections of the user’s environment that are too far away to observe when the app starts? If so, does your application need to persist and recall information about the user’s environment?
  • How does your virtual content behave?
  • Where is it located relative to the user?
  • Are your virtual objects stationary with respect to the physical world or moving around? Are they in constant motion or do they ever stop? Are they attached or tethered to something (for example, the user’s head, hands, or controller)?
  • What happens to your content if the user transitions into a new environment? (perhaps by walking into a new room, or if tracking is lost and recovered in an unfamiliar part of the same space)
  • How do virtual objects react to physical surfaces in the user’s environment?

While these considerations may seem daunting, Magic Leap 2 provides APIs, libraries, and tools that help XR applications understand and react to a user’s environment, including spatial mapping, marker tracking, and spatial anchoring. By leveraging these capabilities, you can build experiences that adapt to changing surroundings, just as a web developer might design web pages that respond to changes in window size or device form factor.

6-DoF Head-tracking (Headpose)

The Magic Leap 2 was specifically designed to enable ‘6-DoF’ experiences where content appears to be placed in the user’s physical environment. A 6-DoF head-tracking algorithm continuously tracks the location of the user’s head relative to the physical environment around them and enables virtual content to appear grounded in physical spaces as large as a warehouse (~1000m²). Head-tracking is available and is expected to be leveraged by all immersive applications on Magic Leap.

The Magic Leap head-tracking algorithm continuously builds a temporary map as the user navigates their environment. The details of the temporary map are not directly exposed to developers. Rather, as long as the algorithm is able to successfully track the user’s head in their immediate environment, as a Magic Leap developer, you will have access to visually stable, world-relative frames of reference to use to render content. Within OpenXR-based applications, these world-relative frames are indirectly accessible through standard reference spaces such as LOCAL and STAGE.
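
As a rough illustration, here is how a Unity application might request a world-locked tracking origin through Unity’s XR management layer. This is a minimal sketch; the mapping of Unity’s Device and Floor origin modes onto OpenXR’s LOCAL and STAGE reference spaces is an assumption that depends on your XR plug-in configuration.

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR;

public class TrackingOriginSetup : MonoBehaviour
{
    void Start()
    {
        // Collect the active XR input subsystems.
        var subsystems = new List<XRInputSubsystem>();
        SubsystemManager.GetSubsystems(subsystems);

        foreach (var subsystem in subsystems)
        {
            // Device mode yields a world-locked origin near the headset's
            // startup pose (comparable to an OpenXR LOCAL space);
            // Floor mode is comparable to STAGE.
            subsystem.TrySetTrackingOriginMode(TrackingOriginModeFlags.Device);
        }
    }
}
```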

See the headpose feature guide for more information about head tracking.

6-DoF

The term '6-DoF' refers to the idea of having six "degrees of freedom". In 3D space, an object can move up/down, left/right, and forward/backward. It can also rotate by 'pitching' up/down, 'yawing' left/right, and 'rolling' around the direction it is facing. Combined, these six possible transformations make up “six degrees” of freedom of movement.

Frame of Reference

In this context, a frame of reference is a 3D coordinate system representing some meaningful location in physical space. Content should almost never be rendered in view space—that is, relative to the pose of the ML2 and its displays, which are rigidly attached to the user’s head. A user’s head should be assumed to be in constant motion, even when seated. Rendering rigidly “head-locked” content will appear unnatural to users and could easily make them feel nauseous. Instead, applications should render content relative to frames of reference that represent the location of objects and surfaces in the user’s physical environment. These tracked frames of reference are similar to the concept of a common “world space” that would be familiar to 3D game developers for traditional platforms. The key difference on a platform like ML2 is that, unlike an abstract ‘world space’ in a virtual game world, where the developer is free to place the origin wherever they wish, users will perceive content rendered on ML2 as blending and interacting with their real physical surroundings. This means that, as a developer, you will want to select reference frames that make sense in whatever physical environment your users are in. A real XR application will also likely make use of multiple tracked frames of reference and may need to consider how content transitions between them.

Limitations

A key limitation of basic head-tracking is that its origin, while physically world-locked and gravity-aligned, is effectively an arbitrary location in the environment. This origin may change if head-tracking is lost for an extended period of time or if tracking is regained in an unfamiliar location.

  • Tracking Loss and Regain: If head-tracking is lost for an extended period—such as when the headset is removed or enters sleep mode—the Magic Leap 2 attempts to restore the previous origin upon resuming tracking. This restoration relies on the temporary map of the environment. If the system recognizes the surroundings, it can often reorient to the prior origin. However, in unfamiliar locations or novel views (areas not previously tracked), the device may start a new headpose session with a different origin or restore an inaccurate one.

  • Low Tracking Quality: In environments with suboptimal tracking conditions—such as poor lighting or insufficient environmental features—the system may temporarily lose precision in maintaining the origin. Users may not perceive significant changes during brief periods of degraded tracking if tracking conditions improve. However, subtle inaccuracies can accumulate until optimal conditions are restored, at which point the origin generally stabilizes without noticeable jumps. Developers should design applications to account for these temporary variations, particularly when placing persistent content.

Relying only on head tracking may be appropriate for content that does not require persistent spatial information—such as placing content relative to the current location of tracked inputs (e.g., head, hands, or controller). When there is no need to maintain persistent, real-world-relative positions across tracking interruptions or sessions, this fundamental tracking feature can simplify your application’s design.

Example – Placing Content Directly in Front of the User

Often, relying on head-tracking alone is sufficient. Consider an application that positions a floating panel in front of the user. For example, the Magic Leap OS home menu achieves this without requiring persistent knowledge of the environment’s features or surfaces. You can lock the object relative to the stable, world-relative reference frame rather than locking it to the user’s head. By doing so, the user perceives the object as “suspended” in mid-air, maintaining its position as they move around. When creating the object, you can use the user’s current head pose to determine its initial placement, as shown below.
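
A minimal Unity sketch of this placement pattern, assuming a standard camera rig where Camera.main follows the headpose:

```csharp
using UnityEngine;

public class PanelPlacer : MonoBehaviour
{
    [SerializeField] Transform panel;        // the floating panel to place
    [SerializeField] float distance = 1.5f;  // meters in front of the user

    public void PlaceInFrontOfUser()
    {
        // Camera.main follows the headpose on a typical XR camera rig.
        Transform head = Camera.main.transform;

        // Use the head pose only at placement time; afterwards the panel
        // stays fixed in the world-relative frame rather than head-locked.
        Vector3 flatForward = Vector3.ProjectOnPlane(head.forward, Vector3.up).normalized;
        panel.position = head.position + flatForward * distance;

        // Face the user, keeping the panel upright (no pitch or roll).
        panel.rotation = Quaternion.LookRotation(flatForward, Vector3.up);
    }
}
```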

note

An OpenXR LOCAL reference space aligns its origin with the position of the device at the time an XrInstance is created. While this might seem convenient for placing objects at startup, keep in mind that the LOCAL origin can still shift if tracking is lost and regained. Additionally, the user can move freely around the environment. Consequently, using the device’s current head pose at the time of object creation is generally a more robust approach than relying solely on the LOCAL space origin.

Naturally, after the initial placement of the object, the user remains free to move about their space. You’ll want to provide a means to reposition the object so the user can interact with it after they’ve moved. A simple solution is to reposition the object when the user presses a button on the controller. Another option is to implement a spring tether behavior between the user’s current head location and the object, so that it follows the user around like a balloon on a string. Implementing and tuning a spring tether behavior can be difficult, but the Magic Leap Unity examples project and MRTK contain sample behaviors that you can use for this purpose; see the MRTK Solvers documentation.
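
If you do implement a tether yourself, the core of a spring-damper follow behavior might look like the following Unity sketch. This is illustrative only, not the MRTK or Magic Leap examples implementation, and the stiffness and damping values are arbitrary starting points to tune.

```csharp
using UnityEngine;

// A simple spring-damper "tether" that makes content trail the user
// like a balloon on a string. Attach to the tethered object.
public class SpringTether : MonoBehaviour
{
    [SerializeField] float restDistance = 1.2f; // preferred distance from the head
    [SerializeField] float stiffness = 8f;      // spring constant
    [SerializeField] float damping = 4f;        // velocity damping

    Vector3 velocity;

    void LateUpdate()
    {
        Transform head = Camera.main.transform;
        Vector3 flatForward = Vector3.ProjectOnPlane(head.forward, Vector3.up).normalized;
        Vector3 target = head.position + flatForward * restDistance;

        // Spring-damper integration toward the target position.
        Vector3 accel = stiffness * (target - transform.position) - damping * velocity;
        velocity += accel * Time.deltaTime;
        transform.position += velocity * Time.deltaTime;

        // Keep the content facing the user, upright.
        transform.rotation = Quaternion.LookRotation(flatForward, Vector3.up);
    }
}
```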

While a number of scenarios may be satisfied by basic head-tracking, there are clearly limits to the scenarios that can be addressed when the only visually stable frame of reference that you have available to you is placed at an arbitrary location in space. Many applications will need to be able to establish stationary frames of reference in a user’s environment that can be used to anchor content to meaningful locations in their space and persist across usage sessions. The two main Magic Leap features provided to enable developers and users to create stationary reference frames are marker tracking and Spaces/spatial anchors.

Marker Tracking

Magic Leap can continuously track the pose of fiducial markers in the user’s environment. Each marker can serve as both a storage medium for encoded data (e.g., an ID) and as an independently tracked frame of reference. This makes marker tracking a practical method for placing virtual content in physical space in various scenarios. To establish a tracked frame of reference, users can simply print a fiducial marker and place it at the physical location where they want virtual content to appear—once recognized by the ML2, that location becomes a tracked frame of reference for your content.

Although marker tracking has certain limitations, developers can extend it to more sophisticated applications by arranging groups of markers strategically.

Advantages

  • Markers are tracked continuously and can be moved over time.
  • Users can accurately position markers in the desired location with minimal effort.
  • Marker locations do not need to be explicitly stored by the application for recall in subsequent sessions. The marker’s identification data and its position are inherently encoded in the physical image and its placement within the environment.
  • Multiple markers can be tracked simultaneously and detected in the same frame.
  • Markers are tracked independently of one another. Losing track of one marker does not impact the system’s ability to detect and track additional markers.

Tradeoffs

  • Markers must be clearly visible to the Magic Leap’s center RGB or World cameras in order to be detected. Each time the application starts, users must bring the markers into view before tracking can begin.
  • Marker poses are computed relative to the headpose. If head tracking is lost and cannot be restored in the same environment map, previously detected markers become invalid and need to be re-detected.
  • Users must have the ability to place markers wherever virtual content is desired, which may not always be feasible.
  • Limited marker tracking accuracy: Applications that use a single marker for tracking may notice significant offset due to the "lever arm" effect. If content is located far away from the marker, any inaccuracy in the tracked orientation of the marker will be greatly magnified by the distance between the marker and the object.

Lever Arm

Lever arm effects refer to the idea that if an object is rigidly attached to a frame of reference, any inaccuracy in the orientation of that frame of reference will result in an increase in placement error that is proportional to the distance between the object and the origin of the reference frame.

See the marker tracking feature guide for more about the marker tracker and tips on how to improve marker detection.

Example – Placing Content at the Origin of a Single, Stationary Marker

The marker tracker runs continuously. However, marker poses may not be updated every frame and are not perfectly accurate.

To avoid visible jitter and maintain a stable user experience, it is generally advisable not to update content every time the system detects the stationary marker. Instead, once a marker is initially detected, record the pose and continue to render the corresponding content at that fixed location unless a significant event occurs (e.g., head tracking loss or an application restart).

tip

For increased stability, consider capturing multiple pose samples when a marker is first detected. You can then filter and average these samples to compute a more reliable initial pose.
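
For example, a simple filter along these lines could average a handful of samples. This is a minimal sketch: positions are averaged arithmetically, and the incremental Slerp blend is a reasonable approximation for rotations when the samples are tightly clustered.

```csharp
using System.Collections.Generic;
using UnityEngine;

public static class PoseFilter
{
    // Average several pose samples captured shortly after first detection.
    public static Pose Average(IReadOnlyList<Pose> samples)
    {
        Vector3 position = Vector3.zero;
        Quaternion rotation = samples[0].rotation;

        for (int i = 0; i < samples.Count; i++)
        {
            position += samples[i].position;
            // Incrementally blend rotations: sample i gets weight 1/(i+1),
            // yielding an (approximate) running average of the cluster.
            if (i > 0)
                rotation = Quaternion.Slerp(rotation, samples[i].rotation, 1f / (i + 1));
        }
        return new Pose(position / samples.Count, rotation);
    }
}
```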

Since content is rendered relative to the headpose origin, users will perceive it as visually stable, even when the content is no longer being updated to match the marker’s detected pose. You can implement pose smoothing algorithms—or tune the tracker’s FPS hint and analysis interval in your marker tracker profile—to strike a balance between reactivity and visual stability.

If you choose to continuously update content based on new marker poses, be aware that without additional smoothing, the result could appear jittery to users.

Example – Using an Array of Markers to Improve Stability

Because markers need to be clearly visible to the ML2’s RGB or World cameras and inherently provide only an estimated pose, using multiple markers can significantly increase reliability. Magic Leap supports tracking up to 16 markers at once, each located and recognized independently within a single frame.

To leverage multiple markers, you might:

  • Arrange markers in a known pattern or grid where each marker is placed at a measured offset from the others. Then observe the multiple markers simultaneously and combine their detected poses to form a single, more stable frame of reference.

  • Compute an aggregate pose by averaging or otherwise combining the detected poses, mitigating individual marker inaccuracies.

  • Handle occlusions and partial visibility more gracefully, since only a subset of markers needs to be in view for the system to maintain a stable frame of reference.

These multi-marker strategies can help reduce jitter and offset and increase stability in larger spaces.
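
As an illustration of the aggregation strategy above, the following sketch assumes you measured each marker’s fixed offset from a shared array origin when laying out the grid (an assumption about your setup, not an SDK feature). Each detection then independently predicts where the origin is, and the predictions are averaged.

```csharp
using System.Collections.Generic;
using UnityEngine;

public static class MarkerArray
{
    public struct Detection
    {
        public Pose worldPose;        // pose reported by the marker tracker
        public Pose offsetFromOrigin; // marker pose expressed in the array's frame (pre-measured)
    }

    // Each detection independently predicts the array-origin pose;
    // averaging the predictions damps individual marker error.
    public static Pose EstimateOrigin(IReadOnlyList<Detection> detections)
    {
        Vector3 position = Vector3.zero;
        Quaternion rotation = Quaternion.identity;

        for (int i = 0; i < detections.Count; i++)
        {
            var d = detections[i];
            // Invert the known offset to go from the marker's world pose
            // back to the implied origin pose.
            Quaternion originRot = d.worldPose.rotation * Quaternion.Inverse(d.offsetFromOrigin.rotation);
            Vector3 originPos = d.worldPose.position - originRot * d.offsetFromOrigin.position;

            position += originPos;
            rotation = i == 0 ? originRot : Quaternion.Slerp(rotation, originRot, 1f / (i + 1));
        }
        return new Pose(position / detections.Count, rotation);
    }
}
```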

Example – Using Three Markers to Improve Orientation Accuracy

Even when a single marker’s pose is correctly detected, small inaccuracies in marker placement or orientation can result in noticeable alignment errors—particularly for large virtual objects or scenes (the “lever arm” effect). To mitigate these rotational and positional errors, you can use three markers to establish a more precise frame of reference.

Three-Marker Strategies:

  • Ignore individual rotations: Rotational error can largely be mitigated through the use of multiple markers. As an example, you could ignore the rotation of detected markers and, instead, use the positions of multiple markers to disambiguate the orientation of the frame of reference that you are trying to create.

  • Create a three-axis basis: Place the first marker where you want your origin. Next, place a second marker at the same height along an imaginary line from the origin (e.g., along positive Z). Then place a third marker along a second imaginary line perpendicular to the first. When the application runs, instruct the user to look at each of the three markers and store the pose of each one. Once all three markers have been detected, construct two vectors from the origin (the position of the first marker) to the second and third markers. Then compute the third axis needed to form a complete orthogonal basis by taking the cross-product of the two vectors, as sketched below.
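
A minimal sketch of this basis construction in Unity, assuming the three stored marker positions are passed in. Note Unity’s left-handed coordinate convention, where Cross(+Z, +X) yields +Y.

```csharp
using UnityEngine;

public static class ThreeMarkerBasis
{
    // Build a frame of reference from three detected marker positions,
    // ignoring each marker's individually reported rotation.
    // originPos: first marker; forwardPos: marker along the desired +Z axis;
    // rightPos: marker along the desired +X axis.
    public static Pose Compute(Vector3 originPos, Vector3 forwardPos, Vector3 rightPos)
    {
        // Two in-plane axes from the measured marker positions.
        Vector3 forward = (forwardPos - originPos).normalized;
        Vector3 right = (rightPos - originPos).normalized;

        // The cross product supplies the third axis (up), completing the basis.
        Vector3 up = Vector3.Cross(forward, right).normalized;

        // LookRotation orthonormalizes forward/up into a rotation, which
        // absorbs small placement errors (e.g., markers not exactly level).
        return new Pose(originPos, Quaternion.LookRotation(forward, up));
    }
}
```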

Example – Using Multiple Markers to Align a Room-Scale Digital Twin

A compelling use case for Magic Leap XR experiences is the ability to align a large-scale digital twin—such as a vehicle, machine, or architectural model—with its real-world counterpart. This alignment allows designers to visualize changes or new configurations, or to offer training scenarios where virtual elements are overlaid on the physical object.

Approach:

  1. Identify Key Points: Pinpoint a set of corresponding locations on both the physical object and its virtual model.
  2. Place Fiducial Markers: At each physical key point, place a fiducial marker.
  3. Map to the Virtual Model: In your application, associate each marker’s position with the same key point in the virtual model’s coordinate space.
  4. Compute the Best Fit: Once the markers are detected, compute a transformation that minimizes the distance between real-world marker locations and their corresponding virtual points.
  5. Render the Model: Apply this transformation to the virtual model so it precisely overlaps the real object when viewed through the headset.

This technique generalizes the three-marker approach by allowing for many more than three markers, further improving robustness against occlusion and tracking noise. For an implementation reference, see Magic Leap’s ObjectAlignment project on GitHub. Although this example uses manual user input rather than automatic marker detection, the principle remains the same: define corresponding points in physical and virtual spaces, then solve for the alignment transform.
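
The sketch below illustrates step 4 with an exact three-point fit built from the basis technique described earlier. For many noisy correspondences you would instead use a least-squares solver such as the Kabsch algorithm, which is beyond this sketch.

```csharp
using UnityEngine;

public static class TwinAlignment
{
    // Compute a rigid transform mapping virtual-model key points onto their
    // detected physical marker positions, using three non-collinear pairs.
    public static Pose Solve(Vector3[] virtualPts, Vector3[] physicalPts)
    {
        // Build matching frames of reference in each space.
        Quaternion vFrame = FrameFromPoints(virtualPts[0], virtualPts[1], virtualPts[2]);
        Quaternion pFrame = FrameFromPoints(physicalPts[0], physicalPts[1], physicalPts[2]);

        // Rotation taking the virtual frame onto the physical frame.
        Quaternion rotation = pFrame * Quaternion.Inverse(vFrame);
        // Translation that maps the first virtual point onto its marker.
        Vector3 translation = physicalPts[0] - rotation * virtualPts[0];

        return new Pose(translation, rotation);
    }

    static Quaternion FrameFromPoints(Vector3 a, Vector3 b, Vector3 c)
    {
        Vector3 forward = (b - a).normalized;
        Vector3 up = Vector3.Cross(forward, (c - a).normalized);
        return Quaternion.LookRotation(forward, up);
    }
}
```

Applying the returned pose (position and rotation) to the virtual model’s root transform overlays it on the physical object.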

Spaces

Magic Leap's Spaces application, included in the Magic Leap OS, allows users to create and store spatial maps of their environments. These maps, referred to as Spaces, enable the Magic Leap 2 (ML2) device to localize itself within a specific physical location. By localizing into a Space, the ML2 can recall spatial anchors and the map's origin across multiple sessions, ensuring persistent placement of virtual content in the real world.

Localization

Localization is the process by which the ML2 uses its 6-DoF head tracking and onboard sensors to correlate its current position with a previously created Space map. As you move around, the ML2 continuously aligns its pose with the stored map, ensuring virtual content stays anchored in the correct physical locations. The ML2 can only localize into one Space at a time.

Once localized, the Space’s map origin becomes available to applications as a stable, world-relative reference frame. This origin forms the baseline for placing virtual objects and interacting with the environment.

Automatically Localizing into a Space

After you create or select a Space in the Spaces application, the ML2 will attempt to localize into that same Space when it reboots or powers on. It will keep trying until you explicitly select a different Space via the Spaces application or the Localization Map API.

Localization and Spatial Anchors

When the ML2 is “localized” into a Space, applications can create and manage spatial anchors. A spatial anchor functions like an invisible fiducial marker—no physical marker is required. Instead, anchors are placed programmatically at any location of interest (e.g., on a table or next to a wall) and stored in the Space’s map.

  1. Create an Anchor: An application calls an API to place an anchor at a specific coordinate in the current Space, receiving a globally unique identifier (GUID).
  2. Persist the Anchor: The anchor is saved within the Space’s map and remains there across sessions.
  3. Recall Anchors: When the user returns to the same environment and the ML2 localizes to the Space, any application with the Spatial Anchors permission can query the map for existing anchors and reposition virtual objects at their correct physical locations.

By leveraging localization and spatial anchors, developers can deliver persistent and reliable XR experiences where virtual items remain in place over multiple sessions.
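
In outline, the lifecycle might look like the following. The SDK entry points vary by version, so the `ISpatialAnchorApi` interface below is a hypothetical placeholder rather than actual Magic Leap SDK calls; consult the spatial anchors API reference for the real entry points.

```csharp
using System;
using UnityEngine;

// Hypothetical wrapper: substitute your SDK version's actual spatial anchor calls.
public interface ISpatialAnchorApi
{
    Guid CreateAnchor(Pose pose);                 // 1. create at a coordinate in the current Space
    void PersistAnchor(Guid id);                  // 2. save into the Space's map
    (Guid id, Pose pose)[] QueryAnchors(Vector3 center, float radius); // 3. recall after localizing
}

public class AnchorLifecycle : MonoBehaviour
{
    ISpatialAnchorApi api; // assigned elsewhere; hypothetical

    public Guid PlaceContent(Pose where)
    {
        Guid id = api.CreateAnchor(where);
        api.PersistAnchor(id); // survives across sessions in the Space's map
        return id;
    }

    public void RestoreContent(Vector3 nearUser)
    {
        foreach (var (id, pose) in api.QueryAnchors(nearUser, 10f))
        {
            // Re-render the content previously associated with this anchor id.
            Debug.Log($"Found anchor {id} at {pose.position}");
        }
    }
}
```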

See the spatial mapping and localization feature guide for more information about Spaces.

Creating and Sharing Spaces

After a Space has been created (e.g., by scanning a room with the Spaces app), the corresponding map is persistently stored on the ML2 device. Spaces can be shared with other devices or users by:

  1. Manual Import/Export: Transfer the spatial map data using ADB, then use the Localization Map API to load the Space file.
  2. Localization Map API: Manage Spaces and their data programmatically within your application, enabling features like map upload/download or synchronization across multiple devices.

caution

After a Space is created, it cannot be updated; if the physical environment changes significantly, the device may fail to localize. In such cases, the user should create a new Space that reflects the updated environment.

Example – Persisting and Recalling a Virtual Object

When the Magic Leap 2 is localized into a Space, applications can create, retrieve, and delete spatial anchors within the bounds of that Space’s map. This functionality allows developers to create persistent virtual content tied to physical locations.

  1. Map Creation: Before using spatial anchors, the user must create a Space by scanning the room with the Spaces app. This generates a map of the environment, which can be reloaded during future sessions.
  2. Anchor Placement: Within your application, the user moves the Magic Leap controller to a physical spot (e.g., a table) where they want to place a virtual decoration. By pressing the trigger, the application creates a spatial anchor at the controller’s current position.
  3. Anchor Storage: The application assigns a globally unique identifier (GUID) to the anchor and stores a mapping between the GUID and the decoration type (e.g., “Red Vase”). The anchor’s data is saved in the Space’s map.
  4. Recall Across Sessions: When the application is restarted, the Magic Leap 2 localizes into the same Space. The app queries for anchors within a predefined radius around the user’s current location. When it finds a matching GUID, the app retrieves the anchor and renders the corresponding decoration in the same physical location as before.

This ensures that when the user returns to the same environment, any anchored content reappears exactly where they left it.
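
One simple way to implement steps 3 and 4 is to keep the GUID-to-content mapping in local application storage. The sketch below uses Unity’s PlayerPrefs for brevity and assumes decoration prefabs are loadable by name from a Resources folder, which is a convention chosen for this example, not a platform requirement.

```csharp
using System;
using UnityEngine;

public class DecorationStore : MonoBehaviour
{
    // Persist the anchor-id -> decoration-type association locally.
    // (Anchor poses live in the Space's map; only this mapping is ours to store.)
    public void SaveDecoration(Guid anchorId, string decorationType)
    {
        PlayerPrefs.SetString(anchorId.ToString(), decorationType);
        PlayerPrefs.Save();
    }

    // Call for each anchor returned by the radius query after localization.
    public void RestoreDecoration(Guid anchorId, Pose anchorPose)
    {
        string decorationType = PlayerPrefs.GetString(anchorId.ToString(), null);
        if (string.IsNullOrEmpty(decorationType)) return; // not one of ours

        // Load the prefab by its stored type name, e.g., "RedVase".
        GameObject prefab = Resources.Load<GameObject>(decorationType);
        if (prefab != null)
            Instantiate(prefab, anchorPose.position, anchorPose.rotation);
    }
}
```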

Example – Ensuring that a User Has Localized into a Space

If your application depends on localization to render virtual content accurately, it’s essential to confirm that the user has successfully localized into a Space before spatial anchors are created or retrieved.

  1. Localization Check at Startup: When the application launches, verify whether the ML2 is localized. If it’s not, inform the user with an appropriate message (e.g., “Please localize into a Space to begin.”).
  2. Querying Available Spaces: Use the Space management APIs to retrieve a list of Spaces stored on the device. If your application requires a specific Space, attempt to localize into it programmatically.
  3. User Selection: If the correct Space isn’t known ahead of time, provide the user with a list of available Spaces and allow them to select one.
  4. Launch Spaces App as Fallback: If the user needs to create a new Space, or you want them to select one manually, you can launch the Spaces app from your application using an intent.

This flow ensures that localization is handled before attempting to interact with spatial anchors, preventing errors and maintaining a seamless user experience.
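
A sketch of this startup flow. The localization-status query is a hypothetical placeholder for the Localization Map API, and the Spaces app package name used here is an unverified assumption; verify it on-device.

```csharp
using UnityEngine;

public class LocalizationGate : MonoBehaviour
{
    bool IsLocalized()
    {
        // Hypothetical: replace with the Localization Map API's
        // localization-status query for your SDK version.
        return false;
    }

    void Start()
    {
        if (IsLocalized())
            return; // Safe to create or query spatial anchors.

        // Inform the user, then fall back to launching the Spaces app.
        Debug.Log("Please localize into a Space to begin.");
        LaunchSpacesApp();
    }

    void LaunchSpacesApp()
    {
        // Launch another Android application by package name via an intent.
        // "com.magicleap.spaces" is an assumed package name.
        using var unityPlayer = new AndroidJavaClass("com.unity3d.player.UnityPlayer");
        using var activity = unityPlayer.GetStatic<AndroidJavaObject>("currentActivity");
        using var pm = activity.Call<AndroidJavaObject>("getPackageManager");
        using var intent = pm.Call<AndroidJavaObject>("getLaunchIntentForPackage", "com.magicleap.spaces");
        if (intent != null) activity.Call("startActivity", intent);
    }
}
```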

Static Space Mesh

When you create a local Space, Magic Leap 2 also generates a static triangle mesh of the scanned environment and saves it as a GLB file in the Spaces app’s data directory. By default, this file is not directly accessible to applications.

  • Export/Access the Mesh
    Users can export the mesh through the Spaces app, then share or store it wherever they choose. Your application could also request the file via a file picker, or use Android storage APIs to access the Space file and extract the GLB.

  • Preloading the Mesh
    If you know your application will always run in the same environment (e.g., a museum installation), you can create the Space in advance, export the mesh, and bundle it within your application’s data. This approach saves time by eliminating the need to generate a mesh at runtime.

  • Updating the Mesh
    Any changes to the physical environment will require rescanning and recreating the Space. If you bake the mesh into your application, you must recompile and redeploy the app whenever the environment changes.

Considerations for Developers

When designing applications that use Spaces, keep the following in mind:

Ensuring Localization

  • If localization is required for your application to function properly, ensure the user has localized into a Space at the start of the session.
  • Use the Localization Map API to list available Spaces or prompt the user to select one.
  • The Magic Leap 2 can localize into only one Space at a time. Applications must ensure the correct Space is selected before relying on anchors or the map origin.
  • Consider launching the Spaces application via an intent if manual selection is needed.

Anchoring Best Practices

  • Anchors are ideal for stationary frames of reference. While anchors define fixed reference points in a mapped environment, virtual content rendered relative to an anchor does not need to be stationary.
  • If a virtual object moves within the environment but is not attached to the user, rendering it relative to a nearby anchor can ensure its movement remains consistent with the user’s surroundings.
  • For content that follows the user (e.g., HUD elements or floating UI panels), prefer head- or controller-relative positioning instead of spatial anchors.
  • Ensure anchors are placed in stable locations with good feature visibility to improve localization accuracy.
  • Always handle cases where localization fails or the Space’s map is unavailable.

Static vs. Dynamic Environments

  • Spaces work best for static, unchanging environments (e.g., exhibitions); pre-scan the space and distribute the map with the application.
  • Once a Space is created, it cannot be updated. The device may fail to localize if the environment has changed significantly since it was mapped.

Surface Understanding

Magic Leap’s tracking and localization systems—such as head tracking, marker tracking, and Spaces—are excellent for creating visually stable reference frames. However, these systems don’t inherently detect or interpret the physical surfaces in a user’s environment. For example, a spatial anchor might be placed behind a real-world object or inside a wall, creating unrealistic or inaccessible content placement. To ensure accurate and practical placement of virtual objects, developers can leverage plane detection and world meshing to identify and interact with real-world surfaces.

Plane Detection

Plane detection is a feature that identifies flat surfaces in the environment and categorizes them as floors, ceilings, or walls. These labels allow applications to make decisions about where and how to place virtual content based on the context of the detected surface. For example, developers can:

  • Snap objects to flat surfaces (e.g., place a chair on a detected floor).
  • Determine the purpose of a surface using its semantic label (e.g., differentiate a wall from a ceiling).
  • Implement interactions based on specific surfaces (e.g., attach a light fixture to a ceiling).

See the plane detection feature guide for more information about plane finding.

Example – Placing a Virtual Painting on a Wall

Consider an app that allows users to decorate their environment with virtual paintings. Instead of requiring users to physically interact with walls, the app can streamline the process using plane detection:

  1. Detect Planes: Activate plane detection to identify walls within the user’s current environment.
  2. Raycast from the Controller: When the user points the ML2 controller and selects a position, cast a ray to find the intersection point with the detected wall plane.
  3. Position the Painting: Place the virtual painting at the intersection, aligned to the wall’s surface normal.
  4. Anchor the Painting: Create a spatial anchor at the painting’s location, ensuring it stays fixed in the correct position even if the user exits and reopens the app.

This method allows users to position paintings at a distance while ensuring that the content appears naturally integrated into the real-world environment.
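
A Unity sketch of steps 2 and 3, assuming your plane-detection results provide a wall plane’s center and surface normal. Anchor creation (step 4) is omitted; see the Spaces section above.

```csharp
using UnityEngine;

public class PaintingPlacer : MonoBehaviour
{
    [SerializeField] Transform controller;      // tracked controller transform
    [SerializeField] GameObject paintingPrefab;

    // Call with the pose of a detected wall plane (center + normal),
    // e.g., taken from your plane-detection results.
    public void PlaceOnWall(Vector3 planeCenter, Vector3 planeNormal)
    {
        var wall = new Plane(planeNormal, planeCenter);
        var ray = new Ray(controller.position, controller.forward);

        // 2. Raycast from the controller against the detected wall plane.
        if (wall.Raycast(ray, out float distance))
        {
            Vector3 hit = ray.GetPoint(distance);

            // 3. Align the painting with the wall's surface normal. Depending
            //    on how the prefab is authored, you may need to flip the sign.
            Quaternion rotation = Quaternion.LookRotation(-planeNormal, Vector3.up);
            Instantiate(paintingPrefab, hit, rotation);
        }
    }
}
```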

World Meshing

Magic Leap provides APIs for world meshing, which generates a dynamic 3D triangle mesh of the user’s environment in real time. This mesh updates continuously within a 10-meter radius, providing a detailed representation of surfaces, edges, and objects. Unlike plane detection, which focuses on flat surfaces, world meshing captures the shape and structure of the entire environment, including irregular surfaces and complex objects.

See the meshing feature guide for more information about world meshing.

Example – Collision Detection

Applications can use the dynamic mesh for collision detection in physical simulations, allowing virtual objects to collide realistically with real-world surfaces. For example, a virtual ball can bounce off walls or roll on the floor detected by the mesh. However, it’s important to note:

  • Mesh generation is not instantaneous. Fast-moving physical objects might not be captured in time for accurate collisions.
  • Applications often render the mesh visually to show users which surfaces have been detected, providing feedback about the system’s understanding of the space.
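
For example, in Unity you might attach a MeshCollider to each mesh block your meshing integration produces. How mesh GameObjects are delivered depends on your integration, so the callback wiring that invokes this helper is assumed.

```csharp
using UnityEngine;

// Attach physics colliders to mesh blocks produced by the meshing feature.
public static class MeshPhysics
{
    // Call for each generated or updated mesh block GameObject.
    public static void EnableCollisions(GameObject meshBlock)
    {
        var meshFilter = meshBlock.GetComponent<MeshFilter>();
        if (meshFilter == null) return;

        var collider = meshBlock.GetComponent<MeshCollider>();
        if (collider == null)
            collider = meshBlock.AddComponent<MeshCollider>();

        // Re-assign after every mesh update so the collider tracks the
        // latest world geometry; virtual rigidbodies will then bounce
        // off real walls and roll along real floors.
        collider.sharedMesh = meshFilter.sharedMesh;
    }
}
```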

Example – Live Surface Occlusion

The mesh generated by the world meshing algorithm can be used for live surface occlusion. An application could elect to cull virtual objects that are behind the mesh, relative to the current location of the ML2.

Recent versions of the Magic Leap OS include APIs that allow applications to implement environment occlusion with surface meshes using a single API call. Applications must submit depth information for these physical-world occlusion APIs to work. While this approach is much simpler for application developers to implement (assuming depth information is submitted), there may still be scenarios where you want to leverage the world mesh APIs for occlusion directly. Because the occlusion APIs work at the platform level, their implementation can only act on the final frames submitted by applications. If you implement occlusion yourself using the world mesh APIs, you could, for example, line-of-sight cull content that is occluded by the mesh.

Static mesh produced when a Space is created

As mentioned above, creating a Space does result in the creation of a static triangle mesh of the mapped environment, although that mesh is not normally available to application developers. The potential use cases for the static mesh are similar to those for the dynamic world meshing APIs. If you are designing an application for a controlled environment where you can access the mesh, consider using it: because the entire mesh was generated ahead of time, you would not need to wait for a mesh to be generated at runtime with the world mesh APIs.

Cameras and Third-Party Middleware

Magic Leap 2 provides applications and middleware with access to the raw video streams from all of its pixel sensors, including the three fisheye IR tracking cameras, the central RGB camera, and the ToF depth sensor. Several third-party middleware solutions leverage these camera streams with their own computer vision algorithms to handle scenarios that may not be addressed by the trackers described in this article.

Learn about additional middleware solutions built on top of Magic Leap 2's features: See Third-Party Resources.