Version: 20 Mar 2024

AR Cloud Customization and Security

Secure Deployment Best Practices

Magic Leap recommends reviewing the installed infrastructure to ensure it aligns with the security best practices listed below.

What To Avoid

  • Avoid permissive IAM policies in your environment
  • Avoid hosting AR Cloud on public IPs
  • Avoid public IPs for nodes
  • Avoid using a domain without a TLS certificate (one can be automatically issued using cert-manager)
  • Avoid allowing all traffic to the cluster on the firewall or disabling the firewall completely

General Pointers

  • Deploy the system on its own namespace
  • Isolate the deployment’s namespace from other deployed assets on the network level
  • Limit access to relevant container registries only
  • Run host nodes on a minimal OS, such as Container-Optimized OS, with AppArmor enabled
  • Keep all components up-to-date
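The first two pointers above can be sketched with kubectl. This is a minimal illustration, not part of the official installation: the namespace name arcloud is an assumption, and network-level isolation only takes effect if the cluster's CNI plugin enforces NetworkPolicy (e.g., Calico or Cilium).

```shell
# Dedicated namespace for the deployment (name is an assumption)
kubectl create namespace arcloud

# Default-deny ingress policy: pods in the namespace accept no traffic
# unless a more specific NetworkPolicy allows it
kubectl apply -n arcloud -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
    - Ingress
EOF
```

Additional allow-rules would then be added for the traffic the deployment actually needs (e.g., from the ingress gateway).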

Advanced Setup

The instructions on other pages are meant to get AR Cloud running quickly and in its simplest manner. However, AR Cloud is built to be flexible and supports many configurations. For example, external object storage solutions can be used instead of MinIO, or managed PostgreSQL instances with high availability and integrated backups can replace the bundled database.

Managed Database

The following steps outline how to connect AR Cloud to a managed database instance.


These steps only apply to a new installation of AR Cloud.

PostgreSQL Minimum Requirements

  • PostgreSQL Version: 14+
  • PostGIS Version: 3.3+

The PostGIS extension must be enabled on the arcloud database.
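Assuming psql access to the managed instance, the extension can be enabled with something along these lines; the connection parameters are placeholders for your environment, and the database name arcloud follows the requirement above:

```shell
# Enable PostGIS on the arcloud database and confirm the version
psql "host=${POSTGRESQL_HOST} port=${POSTGRESQL_PORT} dbname=arcloud user=postgres" \
  -c 'CREATE EXTENSION IF NOT EXISTS postgis;' \
  -c 'SELECT PostGIS_Version();'
```

The reported version should satisfy the 3.3+ requirement listed above.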

Database Configuration

  • Review and configure all settings within the ./scripts/ script.
  • Execute the ./scripts/ script against the managed database instance.
  • Create Kubernetes database secrets for each application within your AR Cloud namespace. The expected secret names are referenced in the values.yaml file under each application's postgresql.existingSecret key.
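As an illustration, a database secret for one application might be created as below. The secret name, namespace, and key names here are hypothetical; the actual names must match the postgresql.existingSecret references in your values.yaml:

```shell
# Hypothetical example: credentials for one AR Cloud application,
# stored as a generic secret in the deployment namespace
kubectl create secret generic streaming-pgsql \
  --namespace arcloud \
  --from-literal=username=streaming \
  --from-literal=password="$(openssl rand -base64 24)"
```

Repeat for each application that has its own database role.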

AR Cloud Setup

When running the ./ script, supply the following additional settings to disable the default PostgreSQL installation and point application connections to the managed database:

./ ... --set postgresql.enabled=false,global.postgresql.host=${POSTGRESQL_HOST},global.postgresql.port=${POSTGRESQL_PORT}

External Load Balancer

The configuration differs depending on the cloud provider, but in every case a custom domain, a static IP address, and a DNS record are needed.

The setup uses zonal Network Endpoint Groups (NEGs) with an Application Load Balancer for HTTPS traffic and a proxy Network Load Balancer for MQTTS traffic. Both load balancers terminate TLS using a certificate issued by the cloud provider and also connect to AR Cloud over TLS. The AR Cloud services use a self-signed certificate for communication with the external load balancers so that the HTTP/2 protocol can be used.

Appropriate permissions are needed to create the following resources:

  • External static IP addresses
  • DNS zone records (if handled in GCP)
  • Firewall rules
  • Network endpoint groups
  • Health checks
  • Backend services
  • Load balancers

Follow the steps below to create all the necessary resources from a machine where gcloud is configured:

  1. Set environment variables:

    export DOMAIN=""

    # existing resources
    export GCP_PROJECT_ID="your-project"
    export GCP_REGION="your-region"
    export GCP_ZONE="your-region-zone"
    export GCP_DNS_ZONE="your-dns-zone"
    export GCP_INSTANCE_NAME="your-instance-name"

    # resources that will be created (all the values can be adjusted below)
    export GCP_IP_ADDRESS="arcloud-static-ip"
    export GCP_CERTIFICATE="arcloud-certificate"
    export GCP_FIREWALL_RULE="arcloud-rule"
    export GCP_NEG_HTTPS="arcloud-neg-https"
    export GCP_NEG_MQTTS="arcloud-neg-mqtts"
    export GCP_HEALTH_CHECK="arcloud-health-check"
    export GCP_BACKEND_HTTPS="arcloud-backend-https"
    export GCP_BACKEND_MQTTS="arcloud-backend-mqtts"
    export GCP_URL_MAP="arcloud-url-map"
    export GCP_TARGET_PROXY_HTTPS="arcloud-target-proxy-https"
    export GCP_TARGET_PROXY_SSL="arcloud-target-proxy-ssl"
    export GCP_FORWARDING_RULE_HTTPS="arcloud-forwarding-rule-https"
    export GCP_FORWARDING_RULE_MQTTS="arcloud-forwarding-rule-mqtts"
  2. Reserve an external static IP address:

    gcloud compute addresses create "${GCP_IP_ADDRESS}" \
    --project "${GCP_PROJECT_ID}" \
    --global
  3. Retrieve the reserved static IP address:

    export IP_ADDRESS=$(gcloud compute addresses describe "${GCP_IP_ADDRESS}" --project "${GCP_PROJECT_ID}" --global --format 'get(address)')
    echo ${IP_ADDRESS}
  4. Assign the static IP address to a DNS record if your DNS zone is managed in GCP:

    gcloud dns record-sets create "${DOMAIN}" \
    --project "${GCP_PROJECT_ID}" \
    --zone "${GCP_DNS_ZONE}" \
    --type "A" \
    --rrdatas "${IP_ADDRESS}" \
    --ttl "30"

    Alternatively, create this record manually in your DNS provider configuration.
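    Once the record has propagated, it can be checked from any machine with dig (or nslookup):

```shell
# Confirm the A record resolves to the reserved static IP;
# the output should match ${IP_ADDRESS} from the previous step
dig +short A "${DOMAIN}"
```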

  5. Create a Google-managed TLS certificate:

    gcloud compute ssl-certificates create "${GCP_CERTIFICATE}" \
    --project "${GCP_PROJECT_ID}" \
    --domains "${DOMAIN}"
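    Provisioning of a Google-managed certificate takes time and only completes once the DNS record points at the load balancer's IP address. The status can be polled like this:

```shell
# ACTIVE means the certificate is ready; PROVISIONING means keep waiting
gcloud compute ssl-certificates describe "${GCP_CERTIFICATE}" \
  --project "${GCP_PROJECT_ID}" \
  --global \
  --format 'get(managed.status)'
```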
  6. Create a firewall rule to allow health checks and Google Front Ends to access the services:

    gcloud compute firewall-rules create "${GCP_FIREWALL_RULE}" \
    --project "${GCP_PROJECT_ID}" \
    --allow tcp:443,tcp:8883,tcp:15021 \
    --source-ranges "130.211.0.0/22,35.191.0.0/16" \
    --description "Allow TCP ingress traffic to AR Cloud from health checks and GFEs"
  7. Create 2 zonal Network Endpoint Groups (NEGs) for the zone where your VM instance is running and ports 443 and 8883 (adjust the network and subnet if needed):

    gcloud compute network-endpoint-groups create "${GCP_NEG_HTTPS}" \
    --project "${GCP_PROJECT_ID}" \
    --zone "${GCP_ZONE}" \
    --network default \
    --subnet default \
    --network-endpoint-type gce-vm-ip-port \
    --default-port 443
    gcloud compute network-endpoint-groups create "${GCP_NEG_MQTTS}" \
    --project "${GCP_PROJECT_ID}" \
    --zone "${GCP_ZONE}" \
    --network default \
    --subnet default \
    --network-endpoint-type gce-vm-ip-port \
    --default-port 8883
  8. Add a network endpoint to each Network Endpoint Group (NEG), pointing to the VM instance where AR Cloud is running; use the instance's internal IP address and the default port:

    gcloud compute network-endpoint-groups update "${GCP_NEG_HTTPS}" \
    --project "${GCP_PROJECT_ID}" \
    --zone "${GCP_ZONE}" \
    --add-endpoint instance="${GCP_INSTANCE_NAME}"
    gcloud compute network-endpoint-groups update "${GCP_NEG_MQTTS}" \
    --project "${GCP_PROJECT_ID}" \
    --zone "${GCP_ZONE}" \
    --add-endpoint instance="${GCP_INSTANCE_NAME}"
  9. Create a health check to use with the load balancers:

    gcloud compute health-checks create http "${GCP_HEALTH_CHECK}" \
    --project "${GCP_PROJECT_ID}" \
    --request-path "/healthz/ready" \
    --port 15021
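    The request path and port correspond to the Istio ingress gateway readiness endpoint. Before wiring up the backends, it can be sanity-checked directly on the VM where AR Cloud runs:

```shell
# Run on the AR Cloud instance itself; prints 200 when the
# ingress gateway reports ready
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:15021/healthz/ready
```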
  10. Create a classic external Application Load Balancer for the HTTPS traffic:

    • Create a backend service:

      gcloud compute backend-services create "${GCP_BACKEND_HTTPS}" \
      --project "${GCP_PROJECT_ID}" \
      --global \
      --protocol HTTP2 \
      --health-checks "${GCP_HEALTH_CHECK}" \
      --timeout 3600s \
      --connection-draining-timeout 300s
    • Add the zonal Network Endpoint Group (NEG) to the backend service:

      gcloud compute backend-services add-backend "${GCP_BACKEND_HTTPS}" \
      --project "${GCP_PROJECT_ID}" \
      --global \
      --balancing-mode RATE \
      --max-rate-per-endpoint 100 \
      --network-endpoint-group "${GCP_NEG_HTTPS}" \
      --network-endpoint-group-zone "${GCP_ZONE}"
    • Create a URL map for the backend service:

      gcloud compute url-maps create "${GCP_URL_MAP}" \
      --project "${GCP_PROJECT_ID}" \
      --default-service "${GCP_BACKEND_HTTPS}"
    • Create a target HTTPS proxy:

      gcloud compute target-https-proxies create "${GCP_TARGET_PROXY_HTTPS}" \
      --project "${GCP_PROJECT_ID}" \
      --ssl-certificates "${GCP_CERTIFICATE}" \
      --url-map "${GCP_URL_MAP}"
    • Create a forwarding rule for the HTTPS proxy:

      gcloud compute forwarding-rules create "${GCP_FORWARDING_RULE_HTTPS}" \
      --project "${GCP_PROJECT_ID}" \
      --address "${GCP_IP_ADDRESS}" \
      --target-https-proxy "${GCP_TARGET_PROXY_HTTPS}" \
      --global \
      --ports 443
  11. Create a classic proxy Network Load Balancer for the MQTTS traffic:

    • Create a backend service:

      gcloud compute backend-services create "${GCP_BACKEND_MQTTS}" \
      --project "${GCP_PROJECT_ID}" \
      --global \
      --protocol SSL \
      --port-name mqtts \
      --health-checks "${GCP_HEALTH_CHECK}" \
      --timeout 5m \
      --connection-draining-timeout 300s
    • Add the zonal Network Endpoint Group (NEG) to the backend service:

      gcloud compute backend-services add-backend "${GCP_BACKEND_MQTTS}" \
      --project "${GCP_PROJECT_ID}" \
      --global \
      --balancing-mode CONNECTION \
      --max-connections-per-endpoint 50 \
      --network-endpoint-group "${GCP_NEG_MQTTS}" \
      --network-endpoint-group-zone "${GCP_ZONE}"
    • Create a target SSL proxy:

      gcloud compute target-ssl-proxies create "${GCP_TARGET_PROXY_SSL}" \
      --project "${GCP_PROJECT_ID}" \
      --backend-service "${GCP_BACKEND_MQTTS}" \
      --ssl-certificates "${GCP_CERTIFICATE}" \
      --proxy-header NONE
    • Create a forwarding rule for the SSL proxy:

      gcloud compute forwarding-rules create "${GCP_FORWARDING_RULE_MQTTS}" \
      --project "${GCP_PROJECT_ID}" \
      --address "${GCP_IP_ADDRESS}" \
      --target-ssl-proxy "${GCP_TARGET_PROXY_SSL}" \
      --global \
      --ports 8883
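After both forwarding rules are in place, the two paths can be smoke-tested from outside the network. This is a rough verification sketch assuming the domain configured earlier; exact responses depend on your AR Cloud configuration:

```shell
# HTTPS path through the Application Load Balancer:
# print the HTTP status code returned for the root path
curl -sS -o /dev/null -w '%{http_code}\n' "https://${DOMAIN}/"

# MQTTS path through the proxy Network Load Balancer:
# confirm the TLS handshake on port 8883 presents the
# Google-managed certificate for the domain
openssl s_client -connect "${DOMAIN}:8883" -servername "${DOMAIN}" </dev/null 2>/dev/null \
  | openssl x509 -noout -subject
```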