Version: 20 Mar 2024

AR Cloud Virtual Machine Image Deployment (OVA, UTM)

The provided image contains all the infrastructure and services pre-configured for managing and working with Magic Leap devices. This allows you to set up a Virtual Machine (VM) quickly and access the services without a complex deployment process. For this to work, certain compromises had to be made:

  • The AR Cloud bundle needs to be installed when running the virtual machine for the first time
  • High-availability for the services is disabled to limit the required resources
  • The observability stack is not installed

The above limitations can be overcome by reconfiguring the infrastructure and services, but this requires additional steps.


Taking all the above into consideration, the provided image is not suitable for scalable and fault-tolerant deployments in production environments! Instead, it is a means of quickly testing the services and devices.


The images are available on the Magic Leap 2 Developer Portal.

Authentication and SLA

You must be logged in to the Developer Portal for these links to appear. You can log in by clicking the "person" icon in the upper-right corner of the window at the link above.

To download an image, you must first accept the Software License Agreement.

Download the latest version of the image for the runtime environment of your choice. The OVA image supports most environments; for MacBooks with Apple Silicon chipsets, use the UTM image instead.

Verifying signatures


To verify a signed artifact, first install Cosign.

The general verification command of the downloaded image using its signature is as follows:

cosign verify-blob --key </path/to/public-key> --signature <signature> </path/to/downloaded/image>

The signature of the artifact can be found on Magic Leap 2 Developer Portal in the row corresponding to the downloaded file. The Cosign public key is available for download from Magic Leap AR Cloud.
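As a concrete sketch, the verification call could look like the following. The file names are hypothetical; substitute the image you downloaded, its signature from the Developer Portal, and the Cosign public key from Magic Leap. The command is echoed as a dry run here; drop the `echo` wrapper to run the actual verification:

```shell
# Hypothetical file names -- substitute your downloaded image, its signature
# and the Cosign public key.
verify_cmd="cosign verify-blob --key arcloud-pubkey.pem --signature arcloud-1.2.3.ova.sig arcloud-1.2.3.ova"
echo "$verify_cmd"   # dry run; remove the echo to perform the verification
```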


Virtualization Support

Make sure hardware-assisted virtualization is enabled for the host machine's CPU:

grep -cw vmx /proc/cpuinfo
Expected Result

The output is the number of CPU threads that advertise the vmx (Intel VT-x) flag, so it should be greater than 0. On AMD CPUs, grep for the svm flag instead.
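A slightly more robust check (an illustrative sketch, not part of the original guide) covers both Intel and AMD CPUs by looking for either flag:

```shell
# vmx = Intel VT-x, svm = AMD-V. grep -c prints the number of matching lines,
# i.e. the number of CPU threads advertising the flag (grep exits non-zero on
# zero matches, hence the || true).
count=$(grep -cwE 'vmx|svm' /proc/cpuinfo || true)
count=${count:-0}
if [ "$count" -gt 0 ]; then
  echo "hardware virtualization available on $count thread(s)"
else
  echo "hardware virtualization not advertised by the CPU"
fi
```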

If virtualization is not enabled, follow these steps to enable it:

  1. Restart your computer
  2. Enter BIOS while the computer is booting up
  3. Find the Virtualization Technology (VTx) setting; its location varies between BIOS versions, e.g.:
    • "Security -> System Security"
    • "System Configuration -> Device Configuration"
  4. Enable the setting
  5. Save changes and boot your OS

After enabling Virtualization Technology (VTx), verify that it is now detected by your OS by re-running the command above.


If the host machine does not support hardware-assisted virtualization, the virtual machine will not be able to run.


To use the virtual machine comfortably, the guest system needs the following resources:

  • CPUs: 8
  • Memory: 16GB
  • Disk: 100GB

In case of performance issues, the resources can be increased after stopping the VM.


The resources mentioned above only include the virtual machine itself. If you are running it on a local machine, you need more resources for the operating system and other running software.
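For example, with VirtualBox the CPU count and memory can be raised from the host once the VM is stopped (a sketch; the VM name arcloud-ova matches the one used later in this guide, and the values are illustrative). The command is echoed as a dry run; remove the `echo` to apply it:

```shell
# Power the VM off first; --memory is given in MB (24576 MB = 24 GB).
resize_cmd="vboxmanage modifyvm arcloud-ova --cpus 12 --memory 24576"
echo "$resize_cmd"   # dry run; remove the echo to apply the change
```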


The following ports need to be exposed to use the provided services:

  • 80 - HTTP
  • 443 - HTTPS
  • 1883 - MQTT
  • 8883 - MQTTS

Additionally, an SSH server is configured on the virtual machine and runs on port 22. When using VirtualBox or UTM, port 2222 on the host is forwarded to port 22 on the virtual machine.

Depending on the runtime environment, the firewall configuration might differ:

  • Configure your local firewall (if you have one) when running on a local machine
  • Configure a cloud firewall based on documentation from your cloud provider otherwise
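To confirm that the required ports are actually reachable through the firewall, a small probe using bash's /dev/tcp can help (an illustrative sketch; the host 127.0.0.1 and the port list are assumptions to adapt to your setup):

```shell
# Returns success if a TCP connection to host $1, port $2 opens within 2 seconds.
port_open() {
  timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# Probe the service ports plus the forwarded SSH port (VirtualBox/UTM).
for port in 80 443 1883 8883 2222; do
  if port_open 127.0.0.1 "$port"; then
    echo "port $port reachable"
  else
    echo "port $port not reachable"
  fi
done
```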

Runtime Environments

The virtual machine image can be run on a variety of local or cloud environments. Choose from the following supported platforms:

Environment  Platform                            Image type
-----------  ----------------------------------  ----------
Local        Linux - VirtualBox                  OVA
Local        Windows - VirtualBox                OVA
Local        macOS Intel (x86_64) - VirtualBox   OVA
Local        macOS Apple Silicon (arm64) - UTM   UTM
Cloud        GCP - Compute Engine                OVA

Local Machine

The virtual machine image can be run on a laptop/desktop computer or a server that is either not virtualized or that supports nested virtualization. This approach might suit developers who need to run AR Cloud locally, or deployments where the services must run inside a private network.

Download VirtualBox for Linux. On Debian-based Linux distributions with amd64 CPUs, you can install VirtualBox with the following commands:

curl -sSL https://www.virtualbox.org/download/oracle_vbox_2016.asc | sudo gpg --dearmor --yes --output /usr/share/keyrings/oracle-virtualbox-2016.gpg
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/oracle-virtualbox-2016.gpg] https://download.virtualbox.org/virtualbox/debian $(lsb_release -cs) contrib" | sudo tee /etc/apt/sources.list.d/virtualbox.list
sudo apt update
sudo apt install virtualbox-7.0

VirtualBox Limitations


When running VirtualBox on Linux/macOS from a regular user account, ports below 1024 will not be forwarded by default due to lack of permissions.

The VirtualBox binary has the setuid bit set, so setting capabilities on it will not work. Instead, it provides an override which allows forwarding the privileged ports.

When running VirtualBox in headless mode, set the following environment variable before starting the virtual machine:


When using the graphical interface, add the above line to one of your user profile files (~/.profile, ~/.bash_profile or ~/.zprofile depending on your configuration), so it is set on login. Make sure to log out and log in again for the changes to take effect.

Alternatively, install socat and run it from the root account to forward traffic on these ports to the virtual machine:

  • When using an IP address and/or HTTP only:

    sudo socat TCP-LISTEN:80,fork TCP:localhost:8080
  • When using a domain with a TLS certificate:

    sudo socat TCP-LISTEN:443,fork TCP:localhost:8443

Make sure the socat process is running while you are using the image.

Importing the Appliance

  1. Start VirtualBox
  2. Select File > Import Appliance... from the menu
  3. Select the downloaded OVA file from your disk and click Next
  4. Uncheck the Import hard drives as VDI option
  5. Click Finish
  6. When the appliance has finished loading, select it from the left panel and click on Start

When the virtual machine starts, log in using the credentials provided below, select a deployment option and continue from there.

You can also run the imported virtual machine in headless mode:

vboxmanage startvm arcloud-ova --type=headless

Cloud Providers

In case local machines are not available or the services need to be available publicly, it is also possible to deploy the virtual machine image to the supported cloud providers described below.

Make sure you have the Google Cloud CLI installed.

Check the GCP documentation or follow the steps below:

  1. Prepare details about your GCP project and user account:

    export GCP_PROJECT_ID=my-project-id
    export GCP_PROJECT_NUMBER=1234567890
  2. Enable the Cloud Build API (this creates the Cloud Build service account automatically):

    gcloud services enable cloudbuild.googleapis.com --project $GCP_PROJECT_ID
  3. If your user account already has the roles/owner role on the project:

    gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \
    --member "user:$GCP_USER_EMAIL" \
    --role 'roles/storage.objectAdmin'
  4. Otherwise, grant the required IAM roles:

    • to your user account:

      gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \
      --member "user:$GCP_USER_EMAIL" \
      --role 'roles/storage.admin'
      gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \
      --member "user:$GCP_USER_EMAIL" \
      --role 'roles/viewer'
      gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \
      --member "user:$GCP_USER_EMAIL" \
      --role 'roles/resourcemanager.projectIamAdmin'
      gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \
      --member "user:$GCP_USER_EMAIL" \
      --role 'roles/cloudbuild.builds.editor'
    • to the Cloud Build service account:

      gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \
      --member "serviceAccount:$GCP_PROJECT_NUMBER@cloudbuild.gserviceaccount.com" \
      --role 'roles/compute.admin'
      gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \
      --member "serviceAccount:$GCP_PROJECT_NUMBER@cloudbuild.gserviceaccount.com" \
      --role 'roles/iam.serviceAccountUser'
      gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \
      --member "serviceAccount:$GCP_PROJECT_NUMBER@cloudbuild.gserviceaccount.com" \
      --role 'roles/iam.serviceAccountTokenCreator'
    • to the Compute Engine service account:

      gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \
      --member "serviceAccount:$GCP_PROJECT_NUMBER-compute@developer.gserviceaccount.com" \
      --role 'roles/compute.storageAdmin'
      gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \
      --member "serviceAccount:$GCP_PROJECT_NUMBER-compute@developer.gserviceaccount.com" \
      --role 'roles/storage.objectAdmin'
  5. Create a GCS bucket:

    export GCP_BUCKET_NAME=my-bucket
    export GCP_BUCKET_LOCATION=us-central1
    gcloud storage buckets create gs://$GCP_BUCKET_NAME --project $GCP_PROJECT_ID --location $GCP_BUCKET_LOCATION
  6. Upload the OVA image to the GCS bucket:

    export GCP_OVA_IMAGE='arcloud-1.2.3.ova'
    gsutil -o GSUtil:parallel_composite_upload_threshold=250M \
    -o GSUtil:parallel_composite_upload_component_size=50M \
    cp $GCP_OVA_IMAGE gs://$GCP_BUCKET_NAME/
  7. Create a new Compute Engine instance based on the imported OVA image:

    export GCP_INSTANCE_NAME=my-instance
    export GCP_ZONE=us-central1-c
    gcloud compute instances import $GCP_INSTANCE_NAME \
    --source-uri gs://$GCP_BUCKET_NAME/$GCP_OVA_IMAGE \
    --project $GCP_PROJECT_ID \
    --zone $GCP_ZONE
  8. Make sure the necessary firewall rules are configured.


When importing a new image in the same project, you can start from step 6, but make sure to export all the variables.
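The repeated add-iam-policy-binding calls in step 4 can be condensed into a loop (a sketch with placeholder project and e-mail values; the commands are only collected and echoed here as a dry run — pipe each line to a shell, or drop the echo pattern, to actually apply them):

```shell
GCP_PROJECT_ID=my-project-id        # placeholder
GCP_USER_EMAIL=user@example.com     # placeholder

# Collect one gcloud command per role instead of repeating the boilerplate.
iam_cmds=$(for role in roles/storage.admin roles/viewer \
                       roles/resourcemanager.projectIamAdmin \
                       roles/cloudbuild.builds.editor; do
  echo "gcloud projects add-iam-policy-binding $GCP_PROJECT_ID" \
       "--member user:$GCP_USER_EMAIL --role $role"
done)
echo "$iam_cmds"
```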


The virtual machine includes a dedicated arcloud user with a password set to changeme. The password is set to expire and needs to be changed during the first login.

Key-based Authentication

Password access should be disabled entirely for all publicly accessible deployments (e.g. on GCP or AWS). Key-based authentication should be used instead.

To do this, create keys for your user accounts and modify /etc/ssh/sshd_config to include:

PasswordAuthentication no
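A sketch of making that change non-interactively, shown here on a temporary copy so it is safe to experiment with; on the VM, target /etc/ssh/sshd_config, reload the SSH service afterwards, and make sure key-based login already works before disabling passwords:

```shell
# Work on a temporary copy for illustration; use /etc/ssh/sshd_config on the VM.
cfg=$(mktemp)
printf '%s\n' 'PermitRootLogin no' 'PasswordAuthentication yes' > "$cfg"

# Flip any existing PasswordAuthentication setting to "no".
sed -i 's/^PasswordAuthentication.*/PasswordAuthentication no/' "$cfg"
grep '^PasswordAuthentication' "$cfg"
# On the VM, apply with: sudo systemctl reload sshd
```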

Accessing the Running Virtual Machine

To access the virtual machine, the IP address of your machine is needed.

The IP address might differ depending on the target platform:

  • for local machines - the loopback interface address (127.0.0.1) or the address of another network interface on the machine (e.g. a 192.168.x.x LAN address)
  • for cloud providers - the configured/assigned public IP of the instance

To list the available IPv4 addresses on your machine/instance, try the following command:

ip -br a | awk '/UP / { print $1, $3 }'
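If it is unclear what that command prints, here is the same awk filter applied to captured sample output of `ip -br a` (the interface names and addresses are illustrative):

```shell
# Three sample interfaces: loopback, an active Ethernet link, an inactive bridge.
sample='lo               UNKNOWN        127.0.0.1/8 ::1/128
eth0             UP             192.168.1.101/24 fe80::1/64
docker0          DOWN           172.17.0.1/16'

# Same filter as above: keep only interfaces in the UP state.
echo "$sample" | awk '/UP / { print $1, $3 }'
# -> eth0 192.168.1.101/24
```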

Verify that your Magic Leap device has an IP address in the same subnet as your machine, or that the device can reach one of the IP addresses listed above (i.e. your router allows connectivity between subnets).

Apart from using the graphical interface directly, you can also access the machine using SSH (this makes it easier to copy the generated credentials):

  • to a local virtual machine:

    ssh arcloud@<ip-address> -p 2222

    e.g.:

    ssh arcloud@127.0.0.1 -p 2222
  • to a cloud instance:

    ssh arcloud@<ip-address>

Deployment Options


The remaining commands in this guide will be executed from the instantiated virtual machine, all from the home (~/) directory.

The image can be configured to use an IP address or the default arcloud-ova.local domain with HTTP only.

Alternatively, a custom domain can be used which will either trigger the creation of a TLS certificate or will use one that is already available.

Simple Deployment - HTTP

With this approach we limit the configuration needed to access the services, at the cost of lowered security.

Option 1. Use an IP Address Only

If you want to be able to connect to the machine from other devices, the services need to be configured to use an IP address directly.

Run the script from inside the virtual machine and provide your IP address as an argument:

./ --accept-sla <ip-address>

e.g.:

./ --accept-sla

Option 2. Use the Default Domain and Configure Local DNS Overrides

In case you want to use the default domain, run the script from inside the virtual machine:

./ --accept-sla

To access the services, set the IP address of the machine where the image is deployed as the target of the pre-configured domain.

The only requirement is that this IP address is reachable in a browser from the devices that will use the services.

Add the following to the bottom of your /etc/hosts file (may require sudo):

# arcloud-ova
<IP-address> arcloud-ova.local
<IP-address> smtp.arcloud-ova.local

This will only make the services available on the devices that have the override configured.
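A sketch that generates those override lines for a given IP address (the IP is hypothetical; the final append into /etc/hosts is left commented out since it needs root):

```shell
IP=192.168.1.101   # replace with the IP address of the machine running the VM

overrides=$(printf '%s\n' \
  '# arcloud-ova' \
  "$IP arcloud-ova.local" \
  "$IP smtp.arcloud-ova.local")
echo "$overrides"
# Append to the hosts file (requires root):
# echo "$overrides" | sudo tee -a /etc/hosts
```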

Advanced Deployment - HTTPS

This approach requires a custom domain and additional configuration in the DNS zone and on the firewall, but it is considerably more secure than the previous options.


This is the recommended approach for all publicly accessible deployments (e.g. on GCP or AWS).

Option 1. Use a Custom Domain and Automatically Generate the Certificate

This allows the services to use a custom domain and issue a TLS certificate automatically by using cert-manager with an HTTP challenge.

  1. Point your custom domain to the IP address where the virtual machine is available and make sure that all the ports mentioned above are accessible

  2. Run the script from inside the virtual machine and provide your domain as an argument:

    ./ --accept-sla <domain>

    e.g.:

    ./ --accept-sla
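Before running the installer, it is worth confirming that the domain already resolves to the VM's address (a sketch; the domain and IP below are placeholders, and getent is used so no extra DNS tools are needed):

```shell
# First IPv4 address a name resolves to (empty if it does not resolve).
resolve4() { getent ahostsv4 "$1" | awk 'NR==1 { print $1 }'; }

DOMAIN=arcloud.example.com   # placeholder: your custom domain
EXPECTED_IP=203.0.113.10     # placeholder: the VM's public IP

if [ "$(resolve4 "$DOMAIN")" = "$EXPECTED_IP" ]; then
  echo "DNS OK: $DOMAIN -> $EXPECTED_IP"
else
  echo "DNS not pointing at $EXPECTED_IP yet (current: $(resolve4 "$DOMAIN"))"
fi
```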

Option 2. Use a Custom Domain and an External Load Balancer with a Certificate

The custom domain is configured on an external Load Balancer that already has a certificate attached for that domain. Traffic from the Internet to the load balancer is encrypted using TLS with a certificate issued by the cloud provider and the load balancer forwards the traffic to AR Cloud using TLS with a self-signed certificate issued by cert-manager.

  1. Follow the instructions on how to configure an external load balancer

  2. Set the variables needed for the next commands inside the virtual machine:

    . ./
  3. Modify the Istio configuration to include headers from the external load balancer:

    cat $BUNDLE_DIR/setup/istio.yaml | envsubst | istioctl install -y --set meshConfig.defaultConfig.gatewayTopology.numTrustedProxies=2 -f -
    kubectl rollout restart -n istio-system deployment/istio-ingressgateway
  4. Modify the issuer to issue a self-signed certificate:

    yq -i 'del(.spec.acme) | .spec.selfSigned={}' $BUNDLE_DIR/setup/issuer.yaml
  5. Run the script and provide your domain as argument:

    ./ --accept-sla <domain>

    e.g.:

    ./ --accept-sla

Verify Installation

Once the AR Cloud deployment completes, the deployment script will print out the cluster information similar to:

Cluster Installation (arcloud)

Enterprise Web:


Username: aradmin
Password: <base64-encoded string>



Username: admin
Password: <base64-encoded string>


kubectl -n arcloud port-forward svc/minio 8082:81

Username: <base64-encoded string>
Password: <base64-encoded string>


kubectl -n arcloud port-forward svc/postgresql 5432:5432
psql -h 127.0.0.1 -p 5432 -U postgres -W

Username: postgres
Password: <base64-encoded string>

istio-system istio-ingressgateway LoadBalancer <IPv4> <IPv4> 80:31456/TCP,443:32737/TCP,15021:31254/TCP,1883:30231/TCP,8883:32740/TCP 1d

Log in to the Enterprise Console

  1. Open the Enterprise Console URL (http://<DOMAIN>/) in a browser
  2. Enter the credentials for Enterprise Web provided by the deployment script
  3. Verify the successful login

Register an ML2 device

Web console

Perform the following steps using the web-based console:

  1. Log in to the Enterprise Console
  2. Select Devices from the top menu
  3. Click Configure to display a QR code unique for your AR Cloud instance

ML2 device

Perform the following steps from within your ML2 device:

  1. Open the Settings app
  2. Select Perception
  3. Select the QR code icon next to AR Cloud
  4. Scan the QR code displayed in the web console
  5. Wait for the process to finish and click on the Login button
  6. Enter the user account credentials in the ML2 device web browser

The Enterprise Console should show the registered device on the list.

Display Cluster Information

If you ever need to display the cluster information again, run the following script:

./ --accept-sla

Preserving the Virtual Machine State

The virtual machine is configured to preserve all the data created, changes made and configuration set during its usage (e.g. registered devices, generated maps).

For this to work, it needs to be powered off safely, just like a physical machine. To do so, connect to the virtual machine using SSH and run the following command in the terminal:

sudo poweroff

It might take around 2 minutes to stop all the services and turn off the virtual machine completely.


If you shut the virtual machine off from VirtualBox, UTM or the cloud vendor interface, or do not wait for it to close, your data might be lost.