AR Cloud Google Cloud Deployment
This deployment strategy will provide a production-ready system using Google Cloud.
Unless otherwise specified, these instructions are assumed to be running inside a Debian/Ubuntu Linux environment.
Setup
- Debian/Ubuntu
- Windows
- MacOS
Install Linux Dependencies
sudo apt update
sudo apt install -y curl gpg sed gettext
Install the Windows Subsystem for Linux
All of the following installation instructions assume they are run inside an activated Windows Subsystem for Linux 2 environment (Debian or Ubuntu). To install WSL 2:
wsl --install -d Ubuntu
Launch the shell of the default WSL distribution:
wsl
Disk IO on mounted paths such as /mnt/c is known to be very slow. For this reason, it is recommended to run commands from the user's home directory.
Install Linux Dependencies
sudo apt update
sudo apt install -y curl gpg sed gettext
Install Homebrew if needed, then install the dependencies:
brew install curl gnupg gnu-sed gettext
Google Cloud CLI
To get started as quickly as possible, refer to these simple setup steps for Google Cloud CLI.
Always use the latest version of the installed tools. As the underlying services are upgraded, APIs may change and access policies may be updated, and it might not be possible to complete the process with outdated CLI tools.
If a problem occurs while deploying the infrastructure components or services, verify that the latest version of the CLI tool was used, and try again if an upgrade is available.
Tools
- Debian/Ubuntu (the same instructions apply on Windows inside WSL 2)
- MacOS
Helm
The minimum version requirement is 3.9.x. Helm 3.13.0 introduced a bug in the way values are merged; the deployment will not work with that version, so please use version 3.13.1 or newer, where the issue is fixed.
Install Helm using apt:
curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
sudo apt-get install apt-transport-https --yes
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
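The version constraints above (at least 3.9.x, but never exactly 3.13.0) can be checked before proceeding. The sketch below is an illustration only: the version string is passed in as an argument so the logic works standalone, but in practice it would come from `helm version --template '{{.Version}}'`:

```shell
# helm_version_ok: check a Helm version string against the constraints above
# (>= 3.9.0 and not exactly 3.13.0). Relies on GNU "sort -V" for version order.
helm_version_ok() {
  v="$1"
  # 3.13.0 has the known values-merging bug
  [ "$v" = "3.13.0" ] && return 1
  # require 3.9.0 <= v: the smaller of the two versions must be 3.9.0
  [ "$(printf '%s\n' "3.9.0" "$v" | sort -V | head -n1)" = "3.9.0" ]
}

helm_version_ok "3.13.1" && echo "helm version OK"
```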
Kubectl
gcloud components install gke-gcloud-auth-plugin kubectl
Helm
The minimum version requirement is 3.9.x. Helm 3.13.0 introduced a bug in the way values are merged; the deployment will not work with that version, so please use version 3.13.1 or newer, where the issue is fixed.
brew install helm
Kubectl
gcloud components install gke-gcloud-auth-plugin kubectl
AR Cloud
Download the latest AR Cloud public release from GitHub:
LATEST_RELEASE=$(curl -sSLH 'Accept: application/json' https://github.com/magicleap/arcloud/releases/latest)
LATEST_VERSION=$(echo $LATEST_RELEASE | sed -e 's/.*"tag_name":"\([^"]*\)".*/\1/')
ARTIFACT_URL="https://github.com/magicleap/arcloud/archive/refs/tags/$LATEST_VERSION.tar.gz"
curl -sSLC - $ARTIFACT_URL | tar -xz
cd arcloud-$LATEST_VERSION
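The sed extraction above fails silently if GitHub returns an unexpected payload (in that case sed leaves the input unchanged). A minimal guard, shown here with a hard-coded example payload instead of the live curl response, might look like:

```shell
# Example payload; in the commands above LATEST_RELEASE comes from curl.
LATEST_RELEASE='{"tag_name":"v1.0.0","name":"Example"}'
LATEST_VERSION=$(echo "$LATEST_RELEASE" | sed -e 's/.*"tag_name":"\([^"]*\)".*/\1/')

# If the response did not contain tag_name, sed returns the input unchanged.
if [ -z "$LATEST_VERSION" ] || [ "$LATEST_VERSION" = "$LATEST_RELEASE" ]; then
  echo "Could not determine the latest release tag" >&2
else
  echo "Latest version: $LATEST_VERSION"
fi
```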
Configure Environment
If you do not have a key assigned for Quay.io, please contact Customer Care:
Configure the container registry details:
export REGISTRY_SERVER="quay.io"
export REGISTRY_USERNAME="<username>"
export REGISTRY_PASSWORD="<password>"
Set the cluster namespace where the AR Cloud components will be installed:
export NAMESPACE="arcloud"
Set the domain where AR Cloud will be available:
export DOMAIN="<your domain>"
Alternatively, make a copy of the setup/env.example file, update the values and source it in your terminal:
cp setup/env.example setup/env.my-cluster
# use your favourite editor to update the setup/env.my-cluster file
. setup/env.my-cluster
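As an illustration, the copied env file might contain entries like the following, combining the variables described in this section (all values are placeholders to replace with your own):

```shell
# setup/env.my-cluster -- example values only; replace with your own

# Container registry credentials
export REGISTRY_SERVER="quay.io"
export REGISTRY_USERNAME="<username>"
export REGISTRY_PASSWORD="<password>"

# Cluster namespace and public domain for AR Cloud
export NAMESPACE="arcloud"
export DOMAIN="<your domain>"

# Google Cloud settings (see Environment Settings below)
export GC_PROJECT_ID="your-project"
export GC_REGION="your-region"
export GC_ZONE="your-region-zone"
export GC_DNS_ZONE="your-dns-zone"
export GC_ADDRESS_NAME="your-cluster-ip"
export GC_CLUSTER_NAME="your-cluster-name"
```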
Infrastructure Setup
Kubernetes System Recommendations
- Version: 1.25.x, 1.26.x, 1.27.x
Cluster Size Requirements
| | Minimum | Recommended |
|---|---|---|
| Application | development purposes and/or smaller maps | handling large maps and hundreds of devices simultaneously |
| Node range | 2 - 6 | 4 - 12 |
| Desired nodes | 4 | 8 |
| vCPUs per node | 2 | 8 |
| Memory per node (GiB) | 8 | 32 |
| Example GCP machine types | e2-standard-2, n2-standard-2, n2d-standard-2 | e2-standard-8, n2-standard-8, n2d-standard-8 |
Different instance types can be selected, but proper functioning of the cluster is not guaranteed with instances smaller than those in the Minimum column above.
To manage costs, consider scaling the minimum cluster size to zero.
Environment Settings
In your terminal configure the following variables per your environment:
export GC_PROJECT_ID="your-project"
export GC_REGION="your-region"
export GC_ZONE="your-region-zone"
export GC_DNS_ZONE="your-dns-zone"
export GC_ADDRESS_NAME="your-cluster-ip"
export GC_CLUSTER_NAME="your-cluster-name"
These variables are already included in the env file described above.
Reserve a Static IP
gcloud compute addresses create "${GC_ADDRESS_NAME}" --project "${GC_PROJECT_ID}" --region "${GC_REGION}"
Retrieve the Reserved Static IP Address
export IP_ADDRESS=$(gcloud compute addresses describe "${GC_ADDRESS_NAME}" --project "${GC_PROJECT_ID}" --region "${GC_REGION}" --format 'get(address)')
echo ${IP_ADDRESS}
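Before creating the DNS record, it can be worth a quick sanity check that the retrieved value actually looks like an IPv4 address. This is an illustrative sketch: the address below is a documentation example (RFC 5737), and in practice you would pass "$IP_ADDRESS" as retrieved by the gcloud command above:

```shell
# valid_ipv4: rough sanity check that a string looks like an IPv4 address
# (four dot-separated groups of 1-3 digits; does not validate the 0-255 range).
valid_ipv4() {
  echo "$1" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}$'
}

# Example with a documentation address; normally: valid_ipv4 "$IP_ADDRESS"
if valid_ipv4 "203.0.113.10"; then
  echo "address looks valid"
fi
```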
Assign the Static IP to a DNS Record
gcloud dns --project "${GC_PROJECT_ID}" record-sets create "${DOMAIN}" --type "A" --zone "${GC_DNS_ZONE}" --rrdatas "${IP_ADDRESS}" --ttl "30"
Create a Cluster
Be sure to create a VPC prior to running the following command and supply it as the subnetwork. Refer to Google Cloud documentation for best practices:
VPC, Subnets, and Regions / Zones
gcloud container clusters create "${GC_CLUSTER_NAME}" \
--project "${GC_PROJECT_ID}" \
--zone "${GC_ZONE}" \
--release-channel "regular" \
--machine-type "e2-standard-4" \
--num-nodes "3" \
--enable-shielded-nodes
Log in to kubectl in the Remote Cluster
gcloud container clusters get-credentials "${GC_CLUSTER_NAME}" --project "${GC_PROJECT_ID}" --zone "${GC_ZONE}"
Confirm kubectl is Directed at the Correct Context
kubectl config current-context
gke_{your-project}-{your-region}-{your-cluster}
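This check can be scripted so later commands fail fast when kubectl points at the wrong cluster. The sketch below assumes GKE context names follow the common gke_&lt;project&gt;_&lt;zone&gt;_&lt;cluster&gt; pattern (separator details may vary by gcloud version); the current context is passed in as an argument so the logic is shown standalone, but in practice it would be "$(kubectl config current-context)":

```shell
# check_context: compare the active kubectl context against the expected GKE
# name, assumed here to follow the pattern gke_<project>_<zone>_<cluster>.
check_context() {
  expected="gke_${1}_${2}_${3}"
  actual="$4"
  if [ "$actual" = "$expected" ]; then
    echo "context OK: $actual"
  else
    echo "unexpected context: $actual (expected $expected)" >&2
    return 1
  fi
}

# In practice the last argument would be "$(kubectl config current-context)".
check_context "your-project" "your-zone" "your-cluster" \
  "gke_your-project_your-zone_your-cluster"
```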
The services mentioned above are subject to billing. Please verify the associated pricing for your configuration before use.
Install Istio
Minimum Requirements:
- Istio version 1.18.x
- DNS pre-configured with corresponding certificate for TLS
- Istio Gateway configured
- MQTT port (8883) open
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.18.5 sh -
cd istio-1.18.5
cat ../setup/istio.yaml | envsubst | ./bin/istioctl install -y -f -
If you received an error in the last step referring to port 8080, the most likely cause is not having your Kubernetes services running on your host machine.
Install Istio Socket Options
kubectl -n istio-system apply -f ../setup/ingress-gateway-socket-options.yaml
Install Istio Gateway
kubectl -n istio-system apply -f ../setup/gateway.yaml
cd ../
Install AR Cloud
Install Certificate Manager
This part is only required if you plan on using a custom domain with a TLS certificate.
For local deployments or when using an IP address only, it can be skipped.
export CERT_MANAGER_VERSION=1.9.1
helm upgrade --install --wait --repo https://charts.jetstack.io cert-manager cert-manager \
--version ${CERT_MANAGER_VERSION} \
--create-namespace \
--namespace cert-manager \
--set installCRDs=true
kubectl -n istio-system apply -f ./setup/issuer.yaml
cat ./setup/certificate.yaml | envsubst | kubectl -n istio-system apply -f -
Create K8s Namespace
kubectl create namespace ${NAMESPACE}
kubectl label namespace ${NAMESPACE} istio-injection=enabled
kubectl label namespace ${NAMESPACE} pod-security.kubernetes.io/audit=baseline pod-security.kubernetes.io/audit-version=v1.25 pod-security.kubernetes.io/warn=baseline pod-security.kubernetes.io/warn-version=v1.25
Create Container Registry Secret
kubectl --namespace ${NAMESPACE} delete secret container-registry --ignore-not-found
kubectl --namespace ${NAMESPACE} create secret docker-registry container-registry \
--docker-server=${REGISTRY_SERVER} \
--docker-username=${REGISTRY_USERNAME} \
--docker-password=${REGISTRY_PASSWORD}
Setup AR Cloud
If you do not have a custom domain and would like to use an IP address instead, add the --no-secure flag to the command below and make sure that the domain environment variable is set correctly:
export DOMAIN="<IP address from the cloud provider>"
This is heavily discouraged for publicly accessible deployments.
./setup.sh \
--set global.domain=${DOMAIN} \
--no-observability \
--accept-sla
Passing the --accept-sla flag assumes the acceptance of the Magic Leap 2 Software License Agreement.
Verify Installation
Once the AR Cloud deployment completes, the deployment script will print out the cluster information similar to:
------------------------------
Cluster Installation (arcloud)
------------------------------
Enterprise Web:
--------------
https://<DOMAIN>/
Username: aradmin
Password: <base64-encoded string>
Keycloak:
---------
https://<DOMAIN>/auth/
Username: admin
Password: <base64-encoded string>
MinIO:
------
kubectl -n arcloud port-forward svc/minio 8082:81
https://127.0.0.1:8082/
Username: <base64-encoded string>
Password: <base64-encoded string>
PostgreSQL:
------
kubectl -n arcloud port-forward svc/postgresql 5432:5432
psql -h 127.0.0.1 -p 5432 -U postgres -W
Username: postgres
Password: <base64-encoded string>
Network:
--------
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-system istio-ingressgateway LoadBalancer <IPv4> <IPv4> 80:31456/TCP,443:32737/TCP,15021:31254/TCP,1883:30231/TCP,8883:32740/TCP 1d
Log in to the Enterprise Console
- Open the Enterprise Console URL (https://<DOMAIN>/) in a browser
- Enter the credentials for Enterprise Web provided by the deployment script
- Verify the successful login
Register an ML2 device
Web console
Perform the following steps using the web-based console:
- Log in to the Enterprise Console
- Select Devices from the top menu
- Click Configure to display a QR code unique for your AR Cloud instance
ML2 device
Perform the following steps from within your ML2 device:
- Open the Settings app
- Select Perception
- Select the QR code icon next to AR Cloud
- Scan the QR code displayed in the web console
- Wait for the process to finish and click on the Login button
- Enter the user account credentials in the ML2 device web browser
The Enterprise Console should show the registered device on the list.
Manage Cluster Scaling
If the cluster is not needed, its nodes can be scaled down to 0 and later scaled up again. This decreases infrastructure costs by having the cluster nodes run only when the cluster is actually in use.
Scale the nodes down to 0:
gcloud container clusters resize "${GC_CLUSTER_NAME}" --project "${GC_PROJECT_ID}" --zone "${GC_ZONE}" --num-nodes 0
Scale the nodes up again:
gcloud container clusters resize "${GC_CLUSTER_NAME}" --project "${GC_PROJECT_ID}" --zone "${GC_ZONE}" --num-nodes 4
Troubleshooting
Status Page
Once deployed, you can use the Enterprise Console to check the status of each AR Cloud service. This page can be accessed in the navigation menu link "AR Cloud Status" or through the following URL path:
"<your domain / IP Address>"/ar-cloud-status
e.g.: http://192.198.0.0/ar-cloud-status
An external health check can be configured to monitor AR Cloud services with the following endpoints:
Service | URL | Response |
---|---|---|
Health Check (General) | /api/identity/v1/healthcheck | {"status":"ok"} |
Mapping | /api/mapping/v1/healthz | {"status":"up","version":"<version>"} |
Session Manager | /session-manager/v1/healthz | {"status":"up","version":"<version>"} |
Streaming | /streaming/v1/healthz | {"status":"up","version":"<version>"} |
Spatial Anchors | /spatial-anchors/v1/healthz | {"status":"up","version":"<version>"} |
User Identity | /identity/v1/healthz | {"status":"up","version":"<version>"} |
Device Gateway | /device-gateway/v1/healthz | {"status":"up","version":"<version>"} |
Events | /events/v1/healthz | {"status":"up","version":"<version>"} |
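A monitoring script only needs to confirm that each body matches one of the two response shapes in the table above. The sketch below shows that check as a standalone function; the example body is hard-coded, whereas in practice it would come from something like `curl -fsS "https://${DOMAIN}/api/mapping/v1/healthz"`:

```shell
# healthy: succeed if a health endpoint's JSON body reports status "ok" or "up",
# matching the two response shapes listed in the table above.
healthy() {
  case "$1" in
    *'"status":"ok"'*|*'"status":"up"'*) return 0 ;;
    *) return 1 ;;
  esac
}

# Example body; in practice it would come from curl against an endpoint above.
if healthy '{"status":"up","version":"1.0.0"}'; then
  echo "service up"
fi
```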
Run the setup script in debug mode
In some cases additional information might help find the cause of installation issues. The setup script can be run in debug mode by adding the --debug flag, e.g.:
./setup.sh \
--set global.domain=${DOMAIN} \
--no-observability \
--accept-sla \
--debug
Show the installation information again
In case access credentials to the Enterprise Console or one of the bundled services is needed, the information shown at the end of the installation process can be printed again whenever needed:
./setup.sh --accept-sla --installation-info
Services are unable to start because one of the volumes is full
If one of the stateful sets using persistent volumes (nats, minio, postgresql) is unable to run correctly, it might mean the volume is full and needs to be resized.
Using minio as an example, follow these steps to resize the data-minio-0 persistent volume claim:
Allow volume resizing for the default storage class:
kubectl patch sc gp2 -p '{"allowVolumeExpansion": true}'
Resize the minio volume:
kubectl patch pvc data-minio-0 -n arcloud -p '{"spec":{"resources":{"requests":{"storage":"100Gi"}}}}'
Track the progress of the resize operation (it will not succeed if there are no nodes available in the cluster):
kubectl get events -n arcloud --field-selector involvedObject.name=data-minio-0 -w
Verify that the new size is visible:
kubectl get pvc -n arcloud data-minio-0
Make sure the pod is running:
kubectl get pod -n arcloud minio-0
Check the disk usage of the volume on a running pod:
kubectl exec -n arcloud minio-0 -c minio -- df -h /data
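The `df -h` output can also be parsed to alert before a volume fills up completely. This is an illustrative sketch with a hard-coded example line; in practice the line would come from the kubectl exec command above, and the 90% threshold is an arbitrary choice:

```shell
# usage_pct: extract the Use% column from a "df -h" output line for /data.
usage_pct() {
  echo "$1" | awk '$NF == "/data" { gsub("%", "", $(NF-1)); print $(NF-1) }'
}

# Example line; in practice it would come from the kubectl exec command above.
LINE="/dev/sdb        100G   95G  5.0G  95% /data"
PCT=$(usage_pct "$LINE")
if [ "$PCT" -ge 90 ]; then
  echo "volume nearly full: ${PCT}%"
fi
```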
Unable to complete the installation of the cluster services
Reinitialize the database in case of earlier errors installing postgresql.
This might happen when the database deployment was reinstalled, but the database itself has not been updated. Usually this can be detected by the installation failing when installing keycloak. It is caused by the passwords in the secrets not matching the ones for the users in the database.
This will remove all the data in the database!
In case the problem occurred during the initial installation, it is okay to proceed. Otherwise, please contact Magic Leap support to make sure none of your data is lost.
Uninstall postgresql:
helm uninstall -n arcloud postgresql
Delete the persistent volume for the database:
kubectl delete pvc -n arcloud data-postgresql-0
Run the installation again using the process described above.
Problems accessing the Enterprise Console
Some content might have been cached in your web browser.
Open the developer console and disable cache (that way everything gets refreshed):
- Chrome (Disable cache):
- Firefox (Disable HTTP Cache): https://firefox-source-docs.mozilla.org/devtools-user/settings/index.html
Alternatively, use a guest/separate user profile:
- Chrome: https://support.google.com/chrome/answer/6130773
- Firefox: https://support.mozilla.org/en-US/kb/profile-manager-create-remove-switch-firefox-profiles
Support
In case you need help, please: