Version: 14 May 2024

AR Cloud Google Cloud Deployment

This deployment strategy will provide a production-ready system using Google Cloud.

Linux Notice

Unless otherwise specified, these instructions are assumed to be running inside a Debian/Ubuntu Linux environment.

Setup

Install Linux Dependencies

sudo apt update
sudo apt install -y curl gpg sed gettext

Google Cloud CLI

To get started as quickly as possible, refer to these simple setup steps for Google Cloud CLI.

Latest Versions

Make sure to always use the latest versions of the installed tools. As the underlying services are upgraded, APIs might change and access policies might be updated, so it might not be possible to complete the process without up-to-date CLI tools.

In case a problem occurs during the deployment of the infrastructure components or services, verify that the latest version of each CLI tool was used and, if an upgrade is available, upgrade and try again.
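
For example, the Google Cloud CLI version can be checked and its components updated before retrying. Note that if the CLI was installed through a system package manager (e.g. apt), updates are delivered through that package manager instead:

gcloud version
gcloud components update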

Tools

Helm

The minimum version requirement is 3.9.x.

Helm 3.13.0

The 3.13.0 version of Helm introduced a bug in the way values are merged. The deployment will not work with this version, so please use version 3.13.1 or newer where the issue is fixed.

Install Helm using apt:

curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
sudo apt-get install apt-transport-https --yes
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
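
After the installation, confirm that the installed Helm version satisfies the requirements above (3.9.x or newer, excluding 3.13.0):

helm version --short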

Kubectl

gcloud components install gke-gcloud-auth-plugin kubectl
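
To confirm that both tools are available (the --version flag of the auth plugin is assumed to be supported by the installed release):

kubectl version --client
gke-gcloud-auth-plugin --version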

AR Cloud

Download the latest AR Cloud public release from GitHub:

LATEST_RELEASE=$(curl -sSLH 'Accept: application/json' https://github.com/magicleap/arcloud/releases/latest)
LATEST_VERSION=$(echo $LATEST_RELEASE | sed -e 's/.*"tag_name":"\([^"]*\)".*/\1/')
ARTIFACT_URL="https://github.com/magicleap/arcloud/archive/refs/tags/$LATEST_VERSION.tar.gz"
curl -sSLC - $ARTIFACT_URL | tar -xz
cd arcloud-$LATEST_VERSION

Configure Environment

note

If you do not have a key assigned for Quay.io, please contact Customer Care:

care@magicleap.com

Configure the container registry details:

export REGISTRY_SERVER="quay.io"
export REGISTRY_USERNAME="<username>"
export REGISTRY_PASSWORD="<password>"

Set the cluster namespace where the AR Cloud components will be installed:

export NAMESPACE="arcloud"

Set the domain where AR Cloud will be available:

export DOMAIN="<your domain>"

Alternatively, make a copy of the setup/env.example file, update the values and source it in your terminal:

cp setup/env.example setup/env.my-cluster
# use your favourite editor to update the setup/env.my-cluster file
. setup/env.my-cluster
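
For reference, an env file based on the variables used throughout this guide might look similar to the sketch below. The exact contents of setup/env.example may differ, so treat this purely as an illustration:

# setup/env.my-cluster - illustrative values only
export REGISTRY_SERVER="quay.io"
export REGISTRY_USERNAME="<username>"
export REGISTRY_PASSWORD="<password>"
export NAMESPACE="arcloud"
export DOMAIN="arcloud.example.com"
export GC_PROJECT_ID="your-project"
export GC_REGION="us-central1"
export GC_ZONE="us-central1-a"
export GC_DNS_ZONE="your-dns-zone"
export GC_ADDRESS_NAME="your-cluster-ip"
export GC_CLUSTER_NAME="your-cluster-name"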

Infrastructure Setup

Kubernetes System Recommendations

  • Version 1.25.x, 1.26.x, 1.27.x

Cluster Size Requirements

|                            | Minimum                                      | Recommended                                                 |
|----------------------------|----------------------------------------------|-------------------------------------------------------------|
| Application                | development purposes and/or smaller maps     | handling large maps and hundreds of devices simultaneously  |
| Node range                 | 2 - 6                                        | 4 - 12                                                      |
| Desired nodes              | 4                                            | 8                                                           |
| vCPUs per node             | 2                                            | 8                                                           |
| Memory per node (GiB)      | 8                                            | 32                                                          |
| Example GCP machine types  | e2-standard-2, n2-standard-2, n2d-standard-2 | e2-standard-8, n2-standard-8, n2d-standard-8                |
note

Different instance types can be selected, but proper functioning of the cluster is not guaranteed with instances smaller than those listed in the Minimum column above.

To manage costs, consider scaling the minimum cluster size to zero.
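
For example, the node pool can be resized to zero when idle using the same command described in the Manage Cluster Scaling section below (this assumes the environment variables from the Environment Settings section have been set):

gcloud container clusters resize "${GC_CLUSTER_NAME}" --project "${GC_PROJECT_ID}" --zone "${GC_ZONE}" --num-nodes 0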

Environment Settings

In your terminal configure the following variables per your environment:

export GC_PROJECT_ID="your-project"
export GC_REGION="your-region"
export GC_ZONE="your-region-zone"
export GC_DNS_ZONE="your-dns-zone"
export GC_ADDRESS_NAME="your-cluster-ip"
export GC_CLUSTER_NAME="your-cluster-name"
note

These variables are already included in the env file described above.

Firewall

The following ports need to be exposed to use the provided services:

  1. For accessing the AR Cloud API and Enterprise Console - access can be limited to allowlisted IPs, e.g. for a VPN gateway:
    • 80 - HTTP, when deploying using an IP address only (not recommended)
    • 443 - HTTPS, when deploying using a domain with a TLS certificate
  2. For connecting to AR Cloud with the ML2 device - access can be limited to allowlisted IPs, e.g. for a VPN gateway:
    • 1883 - MQTT, when deploying using an IP address only (not recommended)
    • 8883 - MQTTS, when deploying using a domain with a TLS certificate
  3. For issuing TLS certificates automatically with cert-manager - unrestricted access is needed to complete the HTTP challenge whenever a new certificate is issued:
    • 80 - HTTP

Use the following table to select the ports that need to be opened on the firewall:

| Deployment type / TLS configuration                                            | Ports with IP allowlist | Ports with public access |
|--------------------------------------------------------------------------------|-------------------------|--------------------------|
| IP address                                                                     | 80, 1883                |                          |
| domain without certificate                                                     | 80, 1883                |                          |
| domain with certificate issued by cert-manager using HTTP challenge (default)  | 443, 8883               | 80                       |
| domain with certificate issued by cert-manager using DNS challenge             | 443, 8883               |                          |
| domain with externally issued certificate, e.g. using external load balancer   | 443, 8883               |                          |
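
In this setup, access restrictions for the load balancer are applied through the loadBalancerSourceRanges setting shown below, and GKE normally manages the related VPC firewall rules automatically. If your organization manages firewall rules explicitly, a rule along these lines could serve as a starting point (the rule name, network name and source ranges are placeholders, not values from this guide):

gcloud compute firewall-rules create arcloud-ingress \
--project "${GC_PROJECT_ID}" \
--network "your-vpc-network" \
--allow tcp:443,tcp:8883 \
--source-ranges "1.2.3.4/32,10.0.0.0/22"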

To limit access to specific IP ranges on all ports configured on the load balancer, modify the setup/istio.yaml file to include the following configuration:

spec:
  components:
    ingressGateways:
      - name: istio-ingressgateway
        k8s:
          service:
            loadBalancerSourceRanges:
              - 1.2.3.4/32 # e.g. VPN gateway IP
              - 10.0.0.0/22 # e.g. some IP range

Reserve a Static IP

gcloud compute addresses create "${GC_ADDRESS_NAME}" --project "${GC_PROJECT_ID}" --region "${GC_REGION}"

Retrieve the Reserved Static IP Address

export IP_ADDRESS=$(gcloud compute addresses describe "${GC_ADDRESS_NAME}" --project "${GC_PROJECT_ID}" --region "${GC_REGION}" --format 'get(address)')
echo ${IP_ADDRESS}

Assign the Static IP to a DNS Record

gcloud dns --project "${GC_PROJECT_ID}" record-sets create "${DOMAIN}" --type "A" --zone "${GC_DNS_ZONE}" --rrdatas "${IP_ADDRESS}" --ttl "30"
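
To verify that the record was created:

gcloud dns record-sets list --project "${GC_PROJECT_ID}" --zone "${GC_DNS_ZONE}" --name "${DOMAIN}"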

Create a Cluster

note

Be sure to create a VPC prior to running the following command and supply it as the subnetwork. Refer to Google Cloud documentation for best practices:

VPC, Subnets, and Regions / Zones

gcloud container clusters create "${GC_CLUSTER_NAME}" \
--project "${GC_PROJECT_ID}" \
--zone "${GC_ZONE}" \
--release-channel "regular" \
--machine-type "e2-standard-4" \
--num-nodes "3" \
--enable-shielded-nodes

Log in to kubectl in the Remote Cluster

gcloud container clusters get-credentials "${GC_CLUSTER_NAME}" --project "${GC_PROJECT_ID}" --zone "${GC_ZONE}"

Confirm kubectl is Directed at the Correct Context

kubectl config current-context
Expected response

gke_{your-project}_{your-zone}_{your-cluster}
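
To additionally confirm that the credentials work and the cluster nodes are reachable:

kubectl get nodes -o wide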

Costs

The services mentioned above are subject to billing. Please verify the associated pricing for your configuration before use.

Install Istio

Minimum Requirements:

  • Istio version 1.18.x
  • DNS pre-configured with corresponding certificate for TLS
  • Istio Gateway configured
  • MQTT Port (8883) open
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.18.5 sh -
cd istio-1.18.5
cat ../setup/istio.yaml | envsubst | ./bin/istioctl install -y -f -
note

If you received an error in the last step referring to port 8080, the most likely cause is that kubectl is not connected to a running Kubernetes cluster from your host machine.
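
Before retrying, you can confirm that kubectl is connected to the intended cluster:

kubectl config current-context
kubectl cluster-info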

Install Istio Socket Options

kubectl -n istio-system apply -f ../setup/ingress-gateway-socket-options.yaml

Install Istio Gateway

kubectl -n istio-system apply -f ../setup/gateway.yaml
cd ../
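
To verify that the Istio components are running and that the ingress gateway was assigned the reserved static IP:

kubectl get pods -n istio-system
kubectl get svc -n istio-system istio-ingressgateway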

Install ARCloud

Install cert-manager

note

This part is only required if you plan on using a custom domain with a TLS certificate issued automatically.

Make sure that you allow ingress traffic on port 80 on the firewall. By default, the challenge used to issue a certificate temporarily exposes a web service that the issuer connects to in order to verify ownership of the domain. As there is no fixed list of IPs that the request will come from, access has to be unrestricted. Alternatively, a DNS challenge can be configured by modifying the setup/issuer.yaml file used below.

For local deployments or when using an IP address only, it can be skipped.

Set the version to be installed:

export CERT_MANAGER_VERSION=1.9.1

Install the helm chart, create the namespace and CRDs:

helm upgrade --install --wait --repo https://charts.jetstack.io cert-manager cert-manager \
--version ${CERT_MANAGER_VERSION} \
--create-namespace \
--namespace cert-manager \
--set installCRDs=true

Deploy the issuer with a HTTP challenge:

kubectl -n istio-system apply -f ./setup/issuer.yaml

Deploy the certificate:

cat ./setup/certificate.yaml | envsubst | kubectl -n istio-system apply -f -
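
The progress of the HTTP challenge and certificate issuance can be inspected through the cert-manager resources (the exact resource names depend on the definitions in setup/issuer.yaml and setup/certificate.yaml):

kubectl get certificate -n istio-system
kubectl get challenges -n istio-system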

Create K8s Namespace

kubectl create namespace ${NAMESPACE}
kubectl label namespace ${NAMESPACE} istio-injection=enabled
kubectl label namespace ${NAMESPACE} pod-security.kubernetes.io/audit=baseline pod-security.kubernetes.io/audit-version=v1.25 pod-security.kubernetes.io/warn=baseline pod-security.kubernetes.io/warn-version=v1.25
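
To confirm that the namespace was created and labelled as expected:

kubectl get namespace ${NAMESPACE} --show-labels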

Create Container Registry Secret

kubectl --namespace ${NAMESPACE} delete secret container-registry --ignore-not-found
kubectl --namespace ${NAMESPACE} create secret docker-registry container-registry \
--docker-server=${REGISTRY_SERVER} \
--docker-username=${REGISTRY_USERNAME} \
--docker-password=${REGISTRY_PASSWORD}

Setup AR Cloud

IP-based deployment

If you do not have a custom domain and would like to use an IP address instead, add the --no-secure flag to the command below and make sure that the domain environment variable is set correctly:

export DOMAIN="<IP address from the cloud provider>"

This is heavily discouraged for publicly accessible deployments.

./setup.sh \
--set global.domain=${DOMAIN} \
--no-observability \
--accept-sla
Software License Agreement

Passing the --accept-sla flag assumes the acceptance of the Magic Leap 2 Software License Agreement.

Verify Installation

Once the AR Cloud deployment completes, the deployment script prints out cluster information similar to the following:

------------------------------
Cluster Installation (arcloud)
------------------------------

Enterprise Web:
--------------

https://<DOMAIN>/

Username: aradmin
Password: <base64-encoded string>

Keycloak:
---------

https://<DOMAIN>/auth/

Username: admin
Password: <base64-encoded string>

MinIO:
------

kubectl -n arcloud port-forward svc/minio 8082:81
https://127.0.0.1:8082/

Username: <base64-encoded string>
Password: <base64-encoded string>

PostgreSQL:
------

kubectl -n arcloud port-forward svc/postgresql 5432:5432
psql -h 127.0.0.1 -p 5432 -U postgres -W

Username: postgres
Password: <base64-encoded string>

Network:
--------
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-system istio-ingressgateway LoadBalancer <IPv4> <IPv4> 80:31456/TCP,443:32737/TCP,15021:31254/TCP,1883:30231/TCP,8883:32740/TCP 1d

Log in to the Enterprise Console

  1. Open the Enterprise Console URL (https://<DOMAIN>/) in a browser
  2. Enter the credentials for Enterprise Web provided by the deployment script
  3. Verify the successful login

Register an ML2 device

Web console

Perform the following steps using the web-based console:

  1. Log in to the Enterprise Console
  2. Select Devices from the top menu
  3. Click Configure to display a QR code unique for your AR Cloud instance

ML2 device

Perform the following steps from within your ML2 device:

  1. Open the Settings app
  2. Select Perception
  3. Select the QR code icon next to AR Cloud
  4. Scan the QR code displayed in the web console
  5. Wait for the process to finish and click on the Login button
  6. Enter the user account credentials in the ML2 device web browser

The Enterprise Console should show the registered device on the list.

Manage Cluster Scaling

When the cluster is not needed, the cluster nodes can be scaled down to 0 and later scaled up again. This decreases infrastructure costs by only having the cluster nodes running when the cluster is actually being used.

Scale the nodes down to 0:

gcloud container clusters resize "${GC_CLUSTER_NAME}" --project "${GC_PROJECT_ID}" --zone "${GC_ZONE}" --num-nodes 0

Scale the nodes up again:

gcloud container clusters resize "${GC_CLUSTER_NAME}" --project "${GC_PROJECT_ID}" --zone "${GC_ZONE}" --num-nodes 4

Troubleshooting

Status Page

Once deployed, you can use the Enterprise Console to check the status of each AR Cloud service. This page can be accessed via the "AR Cloud Status" link in the navigation menu or through the following URL path: <domain or IP address>/ar-cloud-status

e.g.: http://192.168.1.101/ar-cloud-status

An external health check can be configured to monitor AR Cloud services with the following endpoints:

| Service                | URL                           | Response                               |
|------------------------|-------------------------------|----------------------------------------|
| Health Check (General) | /api/identity/v1/healthcheck  | {"status":"ok"}                        |
| Mapping                | /api/mapping/v1/healthz       | {"status":"up","version":"<version>"}  |
| Session Manager        | /session-manager/v1/healthz   | {"status":"up","version":"<version>"}  |
| Streaming              | /streaming/v1/healthz         | {"status":"up","version":"<version>"}  |
| Spatial Anchors        | /spatial-anchors/v1/healthz   | {"status":"up","version":"<version>"}  |
| User Identity          | /identity/v1/healthz          | {"status":"up","version":"<version>"}  |
| Device Gateway         | /device-gateway/v1/healthz    | {"status":"up","version":"<version>"}  |
| Events                 | /events/v1/healthz            | {"status":"up","version":"<version>"}  |
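
For example, the general health check from the table above can be queried with curl once the deployment is reachable (adjust the domain or IP address accordingly):

curl -s https://${DOMAIN}/api/identity/v1/healthcheck
# expected response: {"status":"ok"}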

Run the setup script in debug mode

In some cases additional information might help find the cause of installation issues. The setup script can be run in debug mode by adding the --debug flag, e.g.:

./setup.sh \
--debug \
--set global.domain=${DOMAIN} \
--no-observability \
--accept-sla

Show the installation information again

In case access credentials for the Enterprise Console or one of the bundled services are needed, the information shown at the end of the installation process can be printed again at any time:

./setup.sh \
--installation-info \
--no-observability \
--accept-sla

Unable to install Istio

In case of problems while trying to install Istio:

  1. Make sure there is enough disk space on the cluster nodes

  2. Restart the DNS service, because it might cause issues if Istio was recently uninstalled:

    • when using kube-dns:

      kubectl -n kube-system rollout restart deploy/kube-dns
    • when using coredns:

      kubectl -n kube-system rollout restart deploy/coredns

Unable to complete the installation of the cluster services

Depending on the service that is failing to install, the cause of the issue might be different:

  1. postgresql, minio and nats are the first services to be installed; they are all Stateful Sets and require persistent volumes:

    • the container registry credentials are missing or are invalid - make sure the REGISTRY_USERNAME and REGISTRY_PASSWORD environment variables are correctly set and are visible by the setup script:

      env | grep -i registry
    • a problem with the storage provisioner (the persistent volumes are not created) - verify if the provisioner is deployed properly and has permissions to create volumes; this differs for each cloud provider, so check the official documentation

    • there is insufficient space available in the volumes - resize the volumes

  2. keycloak is the first service that requires database access - reinitialize the database

  3. mapping and streaming both use big container images and require significant resources:

    • unable to extract the images within the default timeout of 2 minutes - as the timeout cannot be increased, use a faster disk supporting at least 2k IOPS
    • insufficient resources to start the containers - increase the node size or pool size (if set to a very low value)

Services are unable to start, because one of the volumes is full

If one of the Stateful Sets using persistent volumes (nats, minio, postgresql) is unable to run correctly, it might mean the volume is full and needs to be resized.

Using minio as an example, follow the steps to resize the data-minio-0 persistent volume claim:

  1. Allow volume resizing for the default storage class:

    kubectl patch sc standard-rwo -p '{"allowVolumeExpansion": true}'
  2. Resize the minio volume:

    kubectl patch pvc data-minio-0 -n arcloud -p '{"spec":{"resources":{"requests":{"storage":"100Gi"}}}}'
  3. Track the progress of the resize operation (it will not succeed if there are no nodes available in the cluster):

    kubectl get events -n arcloud --field-selector involvedObject.name=data-minio-0 -w
  4. Verify that the new size is visible:

    kubectl get pvc -n arcloud data-minio-0
  5. Make sure the pod is running:

    kubectl get pod -n arcloud minio-0
  6. Check the disk usage of the volume on a running pod:

    kubectl exec -n arcloud minio-0 -c minio -- df -h /data

The installation of keycloak fails

This might happen when the database deployment was reinstalled, but the database itself has not been updated. Usually this can be detected by the installation failing when installing keycloak. It is caused by the passwords in the secrets not matching the ones for the users in the database. The database needs to be reinitialized to resolve the problem.

Data Loss

This will remove all the data in the database!

In case the problem occurred during the initial installation, it is okay to proceed. Otherwise, please contact Magic Leap support to make sure none of your data is lost.

  1. Uninstall postgresql:

     helm uninstall -n arcloud postgresql
  2. Delete the persistent volume for the database:

    kubectl delete pvc -n arcloud data-postgresql-0
  3. Run the installation again using the process described above.

Problems accessing the Enterprise Console

Some content might have been cached in your web browser.

Open the developer console and disable the cache, so that everything gets refreshed.

Alternatively, use a guest or separate browser profile.

Helpful commands

K9s

K9s provides a terminal UI to interact with your Kubernetes clusters.

In case you want to easily manage the cluster resources, install K9s:

k9s_version=$(curl -sSLH 'Accept: application/json' https://github.com/derailed/k9s/releases/latest | jq -r .tag_name)
k9s_archive=k9s_Linux_amd64.tar.gz
curl -sSLO https://github.com/derailed/k9s/releases/download/$k9s_version/$k9s_archive
sudo tar Cxzf /usr/local/bin $k9s_archive k9s
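
Once installed, K9s can be started directly in the AR Cloud namespace:

k9s -n arcloud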

Details about using K9s are available in the official docs.

Status of the cluster and services

List of pods including their status, restart count, IP address and assigned node:

kubectl get pods -n arcloud -o wide

List of pods that are failing:

kubectl get pods -n arcloud --no-headers | grep -Ei 'error|crashloopbackoff'

List of pods including the ready state, type of owner resources and container termination reasons:

kubectl get pods -n arcloud -o 'custom-columns=NAME:.metadata.name,READY:.status.conditions[?(@.type=="Ready")].status,OWNERS:.metadata.ownerReferences[*].kind,TERMINATION REASONS:.status.containerStatuses[*].state.terminated.reason'

Show details about a pod:

kubectl describe pod -n arcloud name-of-the-pod

e.g. for the first instance of the streaming service:

kubectl describe pod -n arcloud streaming-0

List of all events for the arcloud namespace:

kubectl get events -n arcloud

List of events of the specified type (only warnings or regular events):

kubectl get events -n arcloud --field-selector type=Warning
kubectl get events -n arcloud --field-selector type=Normal

List of events for the specified resource kind:

kubectl get events -n arcloud --field-selector involvedObject.kind=Pod
kubectl get events -n arcloud --field-selector involvedObject.kind=Job

List of events for the specified resource name (e.g. for a pod that is failing):

kubectl get events -n arcloud --field-selector involvedObject.name=some-resource-name

e.g. for the first instance of the streaming service:

kubectl get events -n arcloud --field-selector involvedObject.name=streaming-0

Logs from the specified container of one of the AR Cloud services:

kubectl logs -n arcloud -l app\.kubernetes\.io/name=device-gateway -c device-gateway
kubectl logs -n arcloud -l app\.kubernetes\.io/name=enterprise-console-web -c enterprise-console-web
kubectl logs -n arcloud -l app\.kubernetes\.io/name=events -c events
kubectl logs -n arcloud -l app\.kubernetes\.io/name=identity-backend -c identity-backend
kubectl logs -n arcloud -l app\.kubernetes\.io/name=keycloak -c keycloak
kubectl logs -n arcloud -l app\.kubernetes\.io/name=minio -c minio
kubectl logs -n arcloud -l app\.kubernetes\.io/name=nats -c nats
kubectl logs -n arcloud -l app\.kubernetes\.io/name=session-manager -c session-manager
kubectl logs -n arcloud -l app\.kubernetes\.io/name=mapping -c mapping
kubectl logs -n arcloud -l app\.kubernetes\.io/name=mapping -l app\.kubernetes\.io/component=worker -c mapping-worker
kubectl logs -n arcloud -l app\.kubernetes\.io/name=streaming -c streaming
kubectl logs -n arcloud -l app\.kubernetes\.io/name=space-proxy -c space-proxy
kubectl logs -n arcloud -l app\.kubernetes\.io/name=spatial-anchors -c spatial-anchors

Logs from the Istio ingress gateway (last 100 for each instance or follow the logs):

kubectl logs -n istio-system -l app=istio-ingressgateway --tail 100
kubectl logs -n istio-system -l app=istio-ingressgateway -f

Resource usage of the cluster nodes:

kubectl top nodes
caution

If the usage of the CPU or memory is reaching 100%, the cluster has to be resized by either using bigger nodes or increasing their number.

Disk usage of persistent volumes:

kubectl exec -n arcloud minio-0 -c minio -- df -h /data
kubectl exec -n arcloud nats-0 -c nats -- df -h /data
kubectl exec -n arcloud postgresql-0 -c postgresql -- df -h /data
caution

If the usage of one of the volumes is reaching 100%, resize it.

Finding out what is wrong

Please follow the steps below to find the cause of issues with the cluster or AR Cloud services:

  1. Create a new directory for the output of the subsequent commands:

    mkdir output
  2. Check events for all namespaces of type Warning:

    kubectl get events -A --field-selector type=Warning | tee output/events.log
  3. Describe each pod that is listed above, e.g.:

    kubectl describe pod -n arcloud streaming-0 | tee output/streaming-pod-details.log
    kubectl describe pod -n istio-system istio-ingressgateway-b8cc646d4-rjdkk | tee output/istio-pod-details.log
  4. Check the logs for each failing pod using the service name (check the command examples above), e.g.:

    kubectl logs -n arcloud -l app\.kubernetes\.io/name=mapping -c mapping | tee output/mapping.log
    kubectl logs -n istio-system -l app=istio-ingressgateway --tail 1000 | tee output/istio.log
  5. Create an archive with all the results:

    tar czf results.tgz output/
  6. Check the suggestions above for solving the most common issues.

  7. Otherwise, share the details with Customer Care using one of the methods listed below.

Support

In case you need help, please contact Customer Care: care@magicleap.com