Version: 12 Dec 2024

AR Cloud Customization and Security

Secure Deployment Best Practices

Magic Leap recommends reviewing the installed infrastructure to align with the security best practices listed below.

What To Avoid?

  • Avoid permissive IAM policies in your environment
  • Avoid hosting AR Cloud on public IPs
  • Avoid public IPs for nodes
  • Avoid using a domain without a TLS certificate (one can be automatically issued using cert-manager)
  • Avoid allowing all traffic to the cluster on the firewall or disabling the firewall completely

General Pointers

  • Deploy the system in its own namespace
  • Isolate the deployment’s namespace from other deployed assets at the network level (see the sketch after this list)
  • Limit access to relevant container registries only
  • Run the host nodes on Container-Optimized OS (or another minimal OS) with AppArmor enabled
  • Keep all components up-to-date
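
For the network-level isolation mentioned above, a minimal sketch is shown below, assuming the deployment namespace is called arcloud (adjust to your environment). It denies ingress from other namespaces while still allowing traffic between pods inside the namespace; traffic from an external load balancer or an ingress controller running in a different namespace would need an additional allow rule.

# Minimal sketch: block ingress from other namespaces into the AR Cloud
# namespace (assumed to be "arcloud"); intra-namespace traffic stays allowed.
kubectl apply --namespace arcloud -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-cross-namespace-ingress
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}
EOF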

Advanced Setup

The instructions on other pages are meant to get AR Cloud running quickly and in its simplest form. However, AR Cloud is built to be flexible and can support many configurations. For example, external object storage solutions can be used instead of MinIO, or managed PostgreSQL instances with high availability and integrated backups can be used instead of the bundled database.

Managed Database

The following steps describe how to connect AR Cloud to a managed database instance.

note

These steps only apply to a new installation of AR Cloud.

PostgreSQL Minimum Requirements

  • PostgreSQL Version: 14+
  • PostGIS Version: 3.3+

note

The PostGIS extension must be enabled on the arcloud database.
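
If your provider or the setup script does not enable the extension for you, it can usually be created with a single statement; a minimal sketch, assuming the same POSTGRESQL_HOST and POSTGRESQL_PORT variables used in the AR Cloud Setup section below and an administrative role with the required privileges:

# Enable PostGIS on the arcloud database and print the installed version.
# The user below is an assumption; use the administrative role provided by
# your managed database service.
psql "host=${POSTGRESQL_HOST} port=${POSTGRESQL_PORT} dbname=arcloud user=postgres" <<'EOF'
CREATE EXTENSION IF NOT EXISTS postgis;
SELECT PostGIS_Version();
EOF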

Database Configuration

  • Review and configure all settings within the ./scripts/setup-database.sh script.
  • Execute the ./scripts/setup-database.sh script against the managed database instance.
  • Create Kubernetes database secrets for each application within your AR Cloud namespace. The secret names referenced by each AR Cloud application are defined by the postgresql.existingSecret keys in the values.yaml file (see the example after this list).
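
A minimal sketch of creating one such secret is shown below; the secret name, key names and application user are hypothetical and must match what the corresponding postgresql.existingSecret entry in values.yaml expects:

# Hypothetical example for a single application; repeat for each application.
# Adjust the secret name, keys and credentials to match your values.yaml
# configuration and the users created by setup-database.sh.
kubectl create secret generic streaming-pgsql \
  --namespace arcloud \
  --from-literal=username=streaming \
  --from-literal=password='<database-password>'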

AR Cloud Setup

When running the ./setup.sh script, supply the following additional settings to disable the default PostgreSQL installation and point the application connections to the managed database:

./setup.sh ... --set postgresql.enabled=false,global.postgresql.host=${POSTGRESQL_HOST},global.postgresql.port=${POSTGRESQL_PORT}
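
For example, assuming the managed instance is reachable from the cluster at a private address, the variables could be set as follows before running the script (the values are placeholders):

# Placeholder connection details for the managed PostgreSQL instance;
# replace with the host and port of your own instance.
export POSTGRESQL_HOST="10.20.0.3"
export POSTGRESQL_PORT="5432"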

External Load Balancer

The configuration differs depending on the cloud provider, but in all cases a custom domain, a static IP address and a DNS record are needed.

The setup uses zonal Network Endpoint Groups (NEGs) with an Application Load Balancer and a proxy Network Load Balancer. Both load balancers are configured to terminate TLS using a certificate issued by the cloud provider and to connect to AR Cloud over TLS as well. The AR Cloud services use a self-signed certificate for the communication with the external load balancers so that the HTTP/2 protocol can be used.

Appropriate permissions are needed to create the following resources:

  • External static IP addresses
  • DNS zone records (if handled in GCP)
  • Firewall rules
  • Network endpoint groups
  • Health checks
  • Backend services
  • Load balancers

Follow the steps below to create all the necessary resources from a machine where gcloud is configured:

  1. Set environment variables:

    export DOMAIN="my.domain.com"

    # existing resources
    export GCP_PROJECT_ID="your-project"
    export GCP_REGION="your-region"
    export GCP_ZONE="your-region-zone"
    export GCP_DNS_ZONE="your-dns-zone"
    export GCP_INSTANCE_NAME="your-instance-name"

    # resources that will be created (all the values can be adjusted below)
    export GCP_IP_ADDRESS="arcloud-static-ip"
    export GCP_CERTIFICATE="arcloud-certificate"
    export GCP_FIREWALL_RULE="arcloud-rule"
    export GCP_NEG_HTTPS="arcloud-neg-https"
    export GCP_NEG_MQTTS="arcloud-neg-mqtts"
    export GCP_HEALTH_CHECK="arcloud-health-check"
    export GCP_BACKEND_HTTPS="arcloud-backend-https"
    export GCP_BACKEND_MQTTS="arcloud-backend-mqtts"
    export GCP_URL_MAP="arcloud-url-map"
    export GCP_TARGET_PROXY_HTTPS="arcloud-target-proxy-https"
    export GCP_TARGET_PROXY_SSL="arcloud-target-proxy-ssl"
    export GCP_FORWARDING_RULE_HTTPS="arcloud-forwarding-rule-https"
    export GCP_FORWARDING_RULE_MQTTS="arcloud-forwarding-rule-mqtts"
  2. Reserve an external static IP address:

    gcloud compute addresses create "${GCP_IP_ADDRESS}" \
    --project "${GCP_PROJECT_ID}" \
    --global
  3. Retrieve the reserved static IP address:

    export IP_ADDRESS=$(gcloud compute addresses describe "${GCP_IP_ADDRESS}" --project "${GCP_PROJECT_ID}" --global --format 'get(address)')
    echo ${IP_ADDRESS}
  4. Assign the static IP address to a DNS record if your DNS zone is managed in GCP:

    gcloud dns record-sets create "${DOMAIN}" \
    --project "${GCP_PROJECT_ID}" \
    --zone "${GCP_DNS_ZONE}" \
    --type "A" \
    --rrdatas "${IP_ADDRESS}" \
    --ttl "30"

    Alternatively, create this record manually in your DNS provider configuration.

  5. Create a Google-managed TLS certificate:

    gcloud compute ssl-certificates create "${GCP_CERTIFICATE}" \
    --project "${GCP_PROJECT_ID}" \
    --domains "${DOMAIN}"
  6. Create a firewall rule to allow health checks and Google Front Ends to access the services:

    gcloud compute firewall-rules create "${GCP_FIREWALL_RULE}" \
    --project "${GCP_PROJECT_ID}" \
    --allow tcp:443,tcp:8883,tcp:15021 \
    --source-ranges "35.191.0.0/16,130.211.0.0/22" \
    --description "Allow TCP ingress traffic to AR Cloud from health checks and GFEs"
  7. Create 2 zonal Network Endpoint Groups (NEGs) for ports 443 and 8883 in the zone where your VM instance is running (adjust the network and subnet if needed):

    gcloud compute network-endpoint-groups create "${GCP_NEG_HTTPS}" \
    --project "${GCP_PROJECT_ID}" \
    --zone "${GCP_ZONE}" \
    --network default \
    --subnet default \
    --network-endpoint-type gce-vm-ip-port \
    --default-port 443
    gcloud compute network-endpoint-groups create "${GCP_NEG_MQTTS}" \
    --project "${GCP_PROJECT_ID}" \
    --zone "${GCP_ZONE}" \
    --network default \
    --subnet default \
    --network-endpoint-type gce-vm-ip-port \
    --default-port 8883
  8. Add a network endpoint to each of the Network Endpoint Groups (NEGs) pointing to the VM instance where AR Cloud is running, using the instance's internal IP address and the default port:

    gcloud compute network-endpoint-groups update "${GCP_NEG_HTTPS}" \
    --project "${GCP_PROJECT_ID}" \
    --zone "${GCP_ZONE}" \
    --add-endpoint instance="${GCP_INSTANCE_NAME}"
    gcloud compute network-endpoint-groups update "${GCP_NEG_MQTTS}" \
    --project "${GCP_PROJECT_ID}" \
    --zone "${GCP_ZONE}" \
    --add-endpoint instance="${GCP_INSTANCE_NAME}"
  9. Create a health check to use with the load balancers:

    gcloud compute health-checks create http "${GCP_HEALTH_CHECK}" \
    --project "${GCP_PROJECT_ID}" \
    --request-path "/healthz/ready" \
    --port 15021
  10. Create a classic external Application Load Balancer for the HTTPS traffic:

    • Create a backend service:

      gcloud compute backend-services create "${GCP_BACKEND_HTTPS}" \
      --project "${GCP_PROJECT_ID}" \
      --global \
      --protocol HTTP2 \
      --health-checks "${GCP_HEALTH_CHECK}" \
      --timeout 3600s \
      --connection-draining-timeout 300s
    • Add the zonal Network Endpoint Group (NEG) to the backend service:

      gcloud compute backend-services add-backend "${GCP_BACKEND_HTTPS}" \
      --project "${GCP_PROJECT_ID}" \
      --global \
      --balancing-mode RATE \
      --max-rate-per-endpoint 100 \
      --network-endpoint-group "${GCP_NEG_HTTPS}" \
      --network-endpoint-group-zone "${GCP_ZONE}"
    • Create a URL map for the backend service:

      gcloud compute url-maps create "${GCP_URL_MAP}" \
      --project "${GCP_PROJECT_ID}" \
      --default-service "${GCP_BACKEND_HTTPS}"
    • Create a target HTTPS proxy:

      gcloud compute target-https-proxies create "${GCP_TARGET_PROXY_HTTPS}" \
      --project "${GCP_PROJECT_ID}" \
      --ssl-certificates "${GCP_CERTIFICATE}" \
      --url-map "${GCP_URL_MAP}"
    • Create a forwarding rule for the HTTPS proxy:

      gcloud compute forwarding-rules create "${GCP_FORWARDING_RULE_HTTPS}" \
      --project "${GCP_PROJECT_ID}" \
      --address "${GCP_IP_ADDRESS}" \
      --target-https-proxy "${GCP_TARGET_PROXY_HTTPS}" \
      --global \
      --ports 443
  11. Create a classic proxy Network Load Balancer for the MQTTS traffic:

    • Create a backend service:

      gcloud compute backend-services create "${GCP_BACKEND_MQTTS}" \
      --project "${GCP_PROJECT_ID}" \
      --global \
      --protocol SSL \
      --port-name mqtts \
      --health-checks "${GCP_HEALTH_CHECK}" \
      --timeout 5m \
      --connection-draining-timeout 300s
    • Add the zonal Network Endpoint Group (NEG) to the backend service:

      gcloud compute backend-services add-backend "${GCP_BACKEND_MQTTS}" \
      --project "${GCP_PROJECT_ID}" \
      --global \
      --balancing-mode CONNECTION \
      --max-connections-per-endpoint 50 \
      --network-endpoint-group "${GCP_NEG_MQTTS}" \
      --network-endpoint-group-zone "${GCP_ZONE}"
    • Create a target SSL proxy:

      gcloud compute target-ssl-proxies create "${GCP_TARGET_PROXY_SSL}" \
      --project "${GCP_PROJECT_ID}" \
      --backend-service "${GCP_BACKEND_MQTTS}" \
      --ssl-certificates "${GCP_CERTIFICATE}" \
      --proxy-header NONE
    • Create a forwarding rule for the SSL proxy:

      gcloud compute forwarding-rules create "${GCP_FORWARDING_RULE_MQTTS}" \
      --project "${GCP_PROJECT_ID}" \
      --address "${GCP_IP_ADDRESS}" \
      --target-ssl-proxy "${GCP_TARGET_PROXY_SSL}" \
      --global \
      --ports 8883
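
Once the forwarding rules are created and the DNS record has propagated, it can be useful to confirm that both backend services report healthy endpoints (the status may take a few minutes to become HEALTHY after the firewall rule is in place), for example:

gcloud compute backend-services get-health "${GCP_BACKEND_HTTPS}" \
--project "${GCP_PROJECT_ID}" \
--global
gcloud compute backend-services get-health "${GCP_BACKEND_MQTTS}" \
--project "${GCP_PROJECT_ID}" \
--global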

Usage

Common steps

  1. Log in to your Keycloak instance using the URL and credentials provided during the installation.
  2. Select the magicleap realm in the top left corner of the page.
[Screenshot: Selecting the magicleap realm in Keycloak]

Modify Password Policy

The default password policy requires passwords with 8 to 64 characters and at least 1 lowercase letter, 1 uppercase letter and 1 digit.

To modify the default password policy:

  1. Follow the common steps to log in to Keycloak and select the magicleap realm.
  2. Click on Authentication in the left menu.
  3. Click on the Policies tab.
  4. Set the password requirements, remove selected policies or add new ones and click on the Save button.
[Screenshot: Password policy configuration in Keycloak]
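
The same policy can also be set from the command line using Keycloak's admin CLI, if it is available in your environment; a minimal sketch, assuming kcadm.sh can be run with admin credentials and that the policy providers below match your Keycloak version:

# Log in with the admin account (the server URL is a placeholder) and update
# the password policy of the magicleap realm to mirror the default above.
kcadm.sh config credentials --server https://<your-keycloak-url> \
  --realm master --user admin
kcadm.sh update realms/magicleap \
  -s 'passwordPolicy=length(8) and maxLength(64) and lowerCase(1) and upperCase(1) and digits(1)'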

Change Session Expiration Times

The session idle timeout and the session maximum lifespan for connecting ML2 devices are both set to 30 days by default. This means that a device is logged out of AR Cloud after 30 days of inactivity and that a single session cannot be used for longer than 30 days.
The values can be modified, but depending on the new length, the change might need to be made both in the dedicated ml2 OAuth client and in the magicleap realm, because the session times for a client cannot be longer than the values set for the realm.

To set the session expiration times to more than 30 days:

  1. Follow the common steps to log in to Keycloak and select the magicleap realm.
  2. Click on Realm settings in the left menu.
  3. Click on the Sessions tab.
  4. Set the new values for SSO Session Idle and SSO Session Max (the time a session might be idle before it expires and the maximum time before a session expires).
  5. Click on the Save button at the bottom of the page.
  6. Click on Clients in the left menu.
  7. Click on the ml2 client on the list.
  8. Click on the Advanced tab.
  9. Click on Advanced Settings in the right menu.
  10. Set the new idle timeout in the Client Token Idle field and maximum lifespan in the Client Token Max field.
  11. Click on the Save button below the Advanced Settings section.
[Screenshot: Session expiration times for a realm]
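
The realm-level values can also be changed with the admin CLI; a sketch assuming kcadm.sh is already configured with admin credentials and that the timeouts are given in seconds (45 days in this example). The client-level values for the ml2 client still need to be adjusted as described in the steps above.

# Realm-level SSO session timeouts in seconds (45 days = 3888000 seconds).
kcadm.sh update realms/magicleap \
  -s ssoSessionIdleTimeout=3888000 \
  -s ssoSessionMaxLifespan=3888000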

To decrease the session expiration times:

  1. Follow the common steps to log in to Keycloak and select the magicleap realm.
  2. Click on Clients in the left menu.
  3. Click on the ml2 client on the list.
  4. Click on the Advanced tab.
  5. Click on Advanced Settings in the right menu.
  6. Set the new idle timeout in the Client Token Idle field and maximum lifespan in the Client Token Max field.
  7. Click on the Save button below the Advanced Settings section.
[Screenshot: Token expiration times for a client]

Create New Users

External Identity Provider

If users should be created automatically based on an external Identity Provider, follow the SSO integration guide.

To create a new user:

  1. Follow the common steps to log in to Keycloak and select the magicleap realm.
  2. Click on Users in the left menu.
  3. Click on the Add user button.
  4. Fill in the email, set it as verified to skip the verification email and click on the Create button.
  5. Click on the Credentials tab.
  6. Click on the Set password button.
  7. Fill in the password twice, unset the Temporary switch to make it permanent and click on the Save button.
[Screenshot: User without a password in Keycloak]
[Screenshot: Set password for a user in Keycloak]
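
Users can also be created non-interactively with the admin CLI; a minimal sketch, assuming kcadm.sh is configured with admin credentials and using placeholder values (the password must satisfy the realm password policy):

# Create a user with a verified email address and set a permanent password.
# The email address and password below are placeholders.
kcadm.sh create users -r magicleap \
  -s username=jane.doe@example.com \
  -s email=jane.doe@example.com \
  -s emailVerified=true \
  -s enabled=true
kcadm.sh set-password -r magicleap \
  --username jane.doe@example.com \
  --new-password '<user-password>'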

Change User Passwords

To reset the password for a user:

  1. Follow the common steps to log in to Keycloak and select the magicleap realm.
  2. Click on Users in the left menu.
  3. Click on the user you wish to modify.
  4. Click on the Credentials tab.
  5. Click on the Reset password button.
  6. Fill in the password twice, unset the Temporary switch to make it permanent and click on the Save button.
[Screenshot: User with a password in Keycloak]
[Screenshot: Reset password for a user in Keycloak]