AR Cloud Custom Deployment
This type of deployment is appropriate for edge computing, on-premises installations, or any other deployment strategy that does not involve Google Cloud, AWS, or Azure.
Unless otherwise specified, these instructions assume a Debian/Ubuntu Linux environment.
Setup
- Debian/Ubuntu
- Windows
Install Linux Dependencies
sudo apt update
sudo apt install -y curl gpg sed gettext
Install the Windows Subsystem for Linux
All of the following installation instructions assume they are run inside an activated Windows Subsystem for Linux 2 environment (Debian or Ubuntu). See the following information about installing WSL 2:
wsl --install -d Ubuntu
Launch the shell of the default WSL distribution:
wsl
Disk I/O from mounted paths such as /mnt/c is known to be very slow. For this reason, it is recommended to execute commands from the user's home directory.
Configure WSL
Download the custom kernel for WSL and save it to drive C:\.
Create or edit the global WSL configuration file for your current user:
using Command Prompt:
notepad %UserProfile%/.wslconfig
using PowerShell:
notepad $env:USERPROFILE/.wslconfig
Use the following configuration for WSL (adjust the kernel path if needed):
[wsl2]
memory=16GB
processors=5
kernel=C:\\wsl2-kernel-with-istio-dns-support
localhostForwarding=false
Restart WSL for the changes to take effect:
wsl --shutdown
wsl
Verify that the new kernel is used:
uname -r
The output should be 5.15.90.1-k8s-optimized-WSL2+.
Install Linux Dependencies
sudo apt update
sudo apt install -y curl gpg sed gettext
Docker
- Debian/Ubuntu
- Windows
curl https://releases.rancher.com/install-docker/20.10.sh | sh
Post-installation step:
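For example (following the official Docker post-installation steps for Linux), you can allow running Docker without sudo:
# add the current user to the docker group created by the install script
sudo usermod -aG docker $USER
# apply the new group membership in the current shell (or log out and back in)
newgrp docker
# verify that Docker works without sudo
docker run --rm hello-world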
Install Docker Desktop with the WSL 2 backend.
Integration with Docker for non-default WSL distributions needs to be explicitly enabled in the Docker Desktop settings:
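Once the integration is enabled, you can confirm from inside the WSL distribution that the Docker CLI can reach Docker Desktop:
docker version
docker context ls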
Tools
- Debian/Ubuntu
- Windows
Helm
The minimum version requirement is 3.9.x.
The 3.13.0 version of Helm introduced a bug in the way values are merged. The deployment will not work with this version, so please use version 3.13.1 or newer, where the issue is fixed.
Install Helm using apt:
curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
sudo apt-get install apt-transport-https --yes
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
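After the installation, you can verify that the Helm version meets the requirement (at least 3.9.x and not 3.13.0):
helm version --short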
AR Cloud
Download the latest AR Cloud public release from GitHub:
LATEST_RELEASE=$(curl -sSLH 'Accept: application/json' https://github.com/magicleap/arcloud/releases/latest)
LATEST_VERSION=$(echo $LATEST_RELEASE | sed -e 's/.*"tag_name":"\([^"]*\)".*/\1/')
ARTIFACT_URL="https://github.com/magicleap/arcloud/archive/refs/tags/$LATEST_VERSION.tar.gz"
curl -sSLC - $ARTIFACT_URL | tar -xz
cd arcloud-$LATEST_VERSION
Configure Environment
If you do not have a key assigned for Quay.io, please contact Customer Care:
Configure the container registry details:
export REGISTRY_SERVER="quay.io"
export REGISTRY_USERNAME="<username>"
export REGISTRY_PASSWORD="<password>"
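As an optional sanity check (not part of the setup script itself), you can confirm that the credentials are accepted by the registry:
echo "${REGISTRY_PASSWORD}" | docker login ${REGISTRY_SERVER} -u "${REGISTRY_USERNAME}" --password-stdin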
Set the cluster namespace where the AR Cloud components will be installed:
export NAMESPACE="arcloud"
Alternatively, make a copy of the setup/env.example file, update the values, and source it in your terminal:
cp setup/env.example setup/env.my-cluster
# use your favourite editor to update the setup/env.my-cluster file
. setup/env.my-cluster
Infrastructure Setup
Kubernetes Version Requirements
The minimum supported version is 1.25.5, but it is recommended to use at least version 1.27.3.
Cluster Size Requirements
| Minimum | Recommended |
---|---|---|
Application | development purposes and/or smaller maps | handling large maps and hundreds of devices simultaneously |
Node range | 2 - 6 | 4 - 12 |
Desired nodes | 4 | 8 |
vCPUs per node | 2 | 8 |
Memory per node (GiB) | 8 | 32 |
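If a cluster is already running, you can compare the actual node sizes against these requirements, for example:
kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.capacity.cpu,MEMORY:.status.capacity.memory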
Prepare Your IP Address
The IP address might differ depending on the target platform:
- for local machines - the loopback interface address (127.0.0.1) or the address of another network interface on the machine (e.g. 192.168.1.101)
- for cloud providers - the configured/assigned public IP of the instance
To list the available IPv4 addresses on your machine/instance, try the following command:
- Debian/Ubuntu
- Windows
ip -br a | awk '/UP / { print $1, $3 }'
ipconfig /all | findstr /i "ipv4"
Verify that your Magic Leap device has an IP address assigned from the same subnet as your machine or the device is able to access one of the IP addresses from the list above (your router allows connectivity between different subnets).
Set the IP address where AR Cloud will be available:
export DOMAIN="<IPv4 address of your active network adapter>"
The DOMAIN variable is already included in the env file described above.
Install Kubernetes
Recommended Resources:
- 8 CPUs
- 32 GB memory
If your computer is connected to more than one network interface (example: WiFi and Ethernet), select which network IP you want to receive the Kubernetes-related traffic.
- Debian/Ubuntu
- Windows
Remove previous Rancher K3s Kubernetes installation (skip if you do not have K3s installed):
/usr/local/bin/k3s-uninstall.sh
Set the version of K3s to be installed:
export INSTALL_K3S_VERSION=v1.27.3+k3s1
Run setup script:
curl -sfL https://get.k3s.io | sh -s - \
--docker \
--disable traefik \
--write-kubeconfig-mode 600 \
--node-external-ip ${DOMAIN}
Configure K3s service:
sudo rm -rf $HOME/.kube/config && mkdir -p $HOME/.kube
sudo ln -s /etc/rancher/k3s/k3s.yaml $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
sudo chmod 600 $HOME/.kube/config
Verify that the K3s service is running:
systemctl status k3s
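You can also confirm that kubectl uses the kubeconfig linked above and that the node reports the Ready status:
kubectl get nodes -o wide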
Enable Kubernetes on Docker Desktop.
On future runs of AR Cloud setup processes, it will be important to make sure that Docker and the Kubernetes services are started.
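To confirm from inside WSL that kubectl points at the Docker Desktop cluster (docker-desktop is the context name created by Docker Desktop) and that the node is ready:
kubectl config use-context docker-desktop
kubectl get nodes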
Install Istio
- Debian/Ubuntu
- Windows
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.18.5 sh -
cd istio-1.18.5
cat ../setup/istio.yaml | envsubst | ./bin/istioctl install -y -f -
If you received an error in the last step referring to port 8080, the most likely cause is not having your Kubernetes services running on your host machine.
Update the Istio configuration for it to work with WSL:
sed -ri '/values:/{n;s/(^\s+)(gateways:)/\1global:\n\1 proxy:\n\1 privileged: true\n\1\2/}' ./setup/istio.yaml
Install Istio:
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.18.5 sh -
cd istio-1.18.5
cat ../setup/istio.yaml | envsubst | ./bin/istioctl install -y -f -
If you received an error in the last step referring to port 8080, the most likely cause is not having your Kubernetes services running on your host machine.
Install Istio Socket Options
kubectl -n istio-system apply -f ../setup/ingress-gateway-socket-options.yaml
Install Istio Gateway
kubectl -n istio-system apply -f ../setup/gateway.yaml
cd ../
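Before continuing, you can verify that the Istio control plane and ingress gateway pods are running and that the gateway service has an external IP assigned:
kubectl get pods -n istio-system
kubectl get svc -n istio-system istio-ingressgateway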
Install AR Cloud
Install cert-manager
This part is only required if you plan on using a custom domain with a TLS certificate issued automatically.
Make sure that you allow ingress traffic on port 80 on the firewall. By default, the challenge used to issue a certificate temporarily exposes a web service that the issuer connects to in order to verify ownership of the domain. As there is no list of IPs that the request will come from, access has to be unrestricted. Alternatively, a DNS challenge can be configured by modifying the setup/issuer.yaml file used below.
For local deployments or when using an IP address only, this part can be skipped.
Set the version to be installed:
export CERT_MANAGER_VERSION=1.9.1
Install the helm chart, create the namespace and CRDs:
helm upgrade --install --wait --repo https://charts.jetstack.io cert-manager cert-manager \
--version ${CERT_MANAGER_VERSION} \
--create-namespace \
--namespace cert-manager \
--set installCRDs=true
Deploy the issuer with a HTTP challenge:
kubectl -n istio-system apply -f ./setup/issuer.yaml
Deploy the certificate:
cat ./setup/certificate.yaml | envsubst | kubectl -n istio-system apply -f -
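To check whether the certificate has been issued (the exact resource names depend on the contents of setup/certificate.yaml), you can inspect the cert-manager resources:
kubectl get certificates,certificaterequests -n istio-system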
Create K8s Namespace
kubectl create namespace ${NAMESPACE}
kubectl label namespace ${NAMESPACE} istio-injection=enabled
kubectl label namespace ${NAMESPACE} pod-security.kubernetes.io/audit=baseline pod-security.kubernetes.io/audit-version=v1.25 pod-security.kubernetes.io/warn=baseline pod-security.kubernetes.io/warn-version=v1.25
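You can confirm that the namespace was created with the Istio injection and pod security labels:
kubectl get namespace ${NAMESPACE} --show-labels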
Create Container Registry Secret
kubectl --namespace ${NAMESPACE} delete secret container-registry --ignore-not-found
kubectl --namespace ${NAMESPACE} create secret docker-registry container-registry \
--docker-server=${REGISTRY_SERVER} \
--docker-username=${REGISTRY_USERNAME} \
--docker-password=${REGISTRY_PASSWORD}
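To verify that the image pull secret exists in the namespace:
kubectl --namespace ${NAMESPACE} get secret container-registry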
Setup AR Cloud
./setup.sh \
--set global.domain=${DOMAIN} \
--no-secure \
--no-observability \
--accept-sla
Passing the --accept-sla flag assumes the acceptance of the Magic Leap 2 Software License Agreement.
Verify Installation
Once the AR Cloud deployment completes, the deployment script will print out the cluster information similar to:
------------------------------
Cluster Installation (arcloud)
------------------------------
Enterprise Web:
--------------
http://<DOMAIN>/
Username: aradmin
Password: <base64-encoded string>
Keycloak:
---------
http://<DOMAIN>/auth/
Username: admin
Password: <base64-encoded string>
MinIO:
------
kubectl -n arcloud port-forward svc/minio 8082:81
http://127.0.0.1:8082/
Username: <base64-encoded string>
Password: <base64-encoded string>
PostgreSQL:
------
kubectl -n arcloud port-forward svc/postgresql 5432:5432
psql -h 127.0.0.1 -p 5432 -U postgres -W
Username: postgres
Password: <base64-encoded string>
Network:
--------
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-system istio-ingressgateway LoadBalancer <IPv4> <IPv4> 80:31456/TCP,443:32737/TCP,15021:31254/TCP,1883:30231/TCP,8883:32740/TCP 1d
Log in to the Enterprise Console
- Open the Enterprise Console URL (http://<DOMAIN>/) in a browser
- Enter the credentials for Enterprise Web provided by the deployment script
- Verify the successful login
Register an ML2 device
Web console
Perform the following steps using the web-based console:
- Log in to the Enterprise Console
- Select Devices from the top menu
- Click Configure to display a QR code unique for your AR Cloud instance
ML2 device
Perform the following steps from within your ML2 device:
- Open the Settings app
- Select Perception
- Select the QR code icon next to AR Cloud
- Scan the QR code displayed in the web console
- Wait for the process to finish and click on the Login button
- Enter the user account credentials in the ML2 device web browser
The Enterprise Console should show the registered device on the list.
Troubleshooting
Status Page
Once deployed, you can use the Enterprise Console to check the
status of each AR Cloud service. This page can be accessed in the navigation menu link "AR Cloud Status" or through the
following URL path:
<domain or IP address>/ar-cloud-status
e.g.: http://192.168.1.101/ar-cloud-status
An external health check can be configured to monitor AR Cloud services with the following endpoints:
Service | URL | Response |
---|---|---|
Health Check (General) | /api/identity/v1/healthcheck | {"status":"ok"} |
Mapping | /api/mapping/v1/healthz | {"status":"up","version":"<version>"} |
Session Manager | /session-manager/v1/healthz | {"status":"up","version":"<version>"} |
Streaming | /streaming/v1/healthz | {"status":"up","version":"<version>"} |
Spatial Anchors | /spatial-anchors/v1/healthz | {"status":"up","version":"<version>"} |
User Identity | /identity/v1/healthz | {"status":"up","version":"<version>"} |
Device Gateway | /device-gateway/v1/healthz | {"status":"up","version":"<version>"} |
Events | /events/v1/healthz | {"status":"up","version":"<version>"} |
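For example, the general health check can be queried with curl (using the DOMAIN value from the environment setup); a healthy deployment returns {"status":"ok"}:
curl -fsS http://${DOMAIN}/api/identity/v1/healthcheck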
Run the setup script in debug mode
In some cases, additional information might help find the cause of issues with the installation. The setup script can be run in debug mode by adding the --debug flag, e.g.:
./setup.sh \
--debug \
--set global.domain=${DOMAIN} \
--no-secure \
--no-observability \
--accept-sla
Show the installation information again
In case access credentials to the Enterprise Console or one of the bundled services are needed, the information shown at the end of the installation process can be printed again at any time:
./setup.sh \
--installation-info \
--no-secure \
--no-observability \
--accept-sla
Unable to install Istio
In case of problems while trying to install Istio:
- Make sure there is enough disk space on the cluster nodes
- Restart the DNS service, because it might cause issues if Istio was recently uninstalled:
  - when using kube-dns: kubectl -n kube-system rollout restart deploy/kube-dns
  - when using coredns: kubectl -n kube-system rollout restart deploy/coredns
Unable to complete the installation of the cluster services
Depending on the service that is failing to install, the cause of the issue might be different:
- postgresql, minio and nats are the first services being installed; they are all Stateful Sets and require persistent volumes (a quick status check is shown after this list):
  - the container registry credentials are missing or invalid - make sure the REGISTRY_USERNAME and REGISTRY_PASSWORD environment variables are correctly set and visible to the setup script: env | grep -i registry
  - a problem with the storage provisioner (the persistent volumes are not created) - verify that the provisioner is deployed properly and has permissions to create volumes; this differs for each cloud provider, so check the official documentation
  - there is insufficient space available in the volumes - resize the volumes
- keycloak is the first service that requires database access - reinitialize the database
- mapping and streaming both use big container images and require significant resources:
  - unable to extract the images within the default timeout of 2 minutes - as the timeout cannot be increased, use a faster disk supporting at least 2k IOPS
  - insufficient resources to start the containers - increase the node size or pool size (if set to a very low value)
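A quick way to check whether the persistent volumes for these services were provisioned and bound:
kubectl get pvc -n arcloud
kubectl get statefulsets -n arcloud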
Services are unable to start because one of the volumes is full
If one of the Stateful Sets using persistent volumes (nats, minio, postgresql) is unable to run correctly, it might mean the volume is full and needs to be resized.
Using minio as an example, follow these steps to resize the data-minio-0 persistent volume claim:
Allow volume resizing for the default storage class:
kubectl patch sc local-path -p '{"allowVolumeExpansion": true}'
Resize the minio volume:
kubectl patch pvc data-minio-0 -n arcloud -p '{"spec":{"resources":{"requests":{"storage":"100Gi"}}}}'
Track the progress of the resize operation (it will not succeed if there are no nodes available in the cluster):
kubectl get events -n arcloud --field-selector involvedObject.name=data-minio-0 -w
Verify that the new size is visible:
kubectl get pvc -n arcloud data-minio-0
Make sure the pod is running:
kubectl get pod -n arcloud minio-0
Check the disk usage of the volume on a running pod:
kubectl exec -n arcloud minio-0 -c minio -- df -h /data
The installation of keycloak fails
This might happen when the database deployment was reinstalled, but the database itself has not been updated. Usually
this can be detected by the installation failing when installing keycloak
. It is caused by the passwords in the secrets
not matching the ones for the users in the database. The database needs to be reinitialized to resolve the problem.
This will remove all the data in the database!
In case the problem occurred during the initial installation, it is okay to proceed. Otherwise, please contact Magic Leap support to make sure none of your data is lost.
Uninstall postgresql:
helm uninstall -n arcloud postgresql
Delete the persistent volume for the database:
kubectl delete pvc -n arcloud data-postgresql-0
Run the installation again using the process described above.
Problems accessing the Enterprise Console
Some content might have been cached in your web browser.
Open the developer console and disable cache (that way everything gets refreshed):
- Chrome (Disable cache):
- Firefox (Disable HTTP Cache): https://firefox-source-docs.mozilla.org/devtools-user/settings/index.html
Alternatively, use a guest/separate user profile:
- Chrome: https://support.google.com/chrome/answer/6130773
- Firefox: https://support.mozilla.org/en-US/kb/profile-manager-create-remove-switch-firefox-profiles
Helpful commands
K9s
K9s provides a terminal UI to interact with your Kubernetes clusters.
In case you want to easily manage the cluster resources, install K9s:
- Debian/Ubuntu
- MacOS
k9s_version=$(curl -sSLH 'Accept: application/json' https://github.com/derailed/k9s/releases/latest | jq -r .tag_name)
k9s_archive=k9s_Linux_amd64.tar.gz
curl -sSLO https://github.com/derailed/k9s/releases/download/$k9s_version/$k9s_archive
sudo tar Cxzf /usr/local/bin $k9s_archive k9s
brew install derailed/k9s/k9s
Details about using K9s are available in the official docs.
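For example, to start K9s scoped to the AR Cloud namespace:
k9s -n arcloud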
Status of the cluster and services
List of pods including their status, restart count, IP address and assigned node:
kubectl get pods -n arcloud -o wide
List of pods that are failing:
kubectl get pods -n arcloud --no-headers | grep -Ei 'error|crashloopbackoff'
List of pods including the ready state, type of owner resources and container termination reasons:
kubectl get pods -n arcloud -o 'custom-columns=NAME:.metadata.name,READY:.status.conditions[?(@.type=="Ready")].status,OWNERS:.metadata.ownerReferences[*].kind,TERMINATION REASONS:.status.containerStatuses[*].state.terminated.reason'
Show details about a pod:
kubectl describe pod -n arcloud name-of-the-pod
e.g. for the first instance of the streaming service:
kubectl describe pod -n arcloud streaming-0
List of all events for the arcloud
namespace:
kubectl get events -n arcloud
List of events of the specified type (only warnings or regular events):
kubectl get events -n arcloud --field-selector type=Warning
kubectl get events -n arcloud --field-selector type=Normal
List of events for the specified resource kind:
kubectl get events -n arcloud --field-selector involvedObject.kind=Pod
kubectl get events -n arcloud --field-selector involvedObject.kind=Job
List of events for the specified resource name (e.g. for a pod that is failing):
kubectl get events -n arcloud --field-selector involvedObject.name=some-resource-name
e.g. for the first instance of the streaming service:
kubectl get events -n arcloud --field-selector involvedObject.name=streaming-0
Logs from the specified container of one of the AR Cloud services:
kubectl logs -n arcloud -l app\.kubernetes\.io/name=device-gateway -c device-gateway
kubectl logs -n arcloud -l app\.kubernetes\.io/name=enterprise-console-web -c enterprise-console-web
kubectl logs -n arcloud -l app\.kubernetes\.io/name=events -c events
kubectl logs -n arcloud -l app\.kubernetes\.io/name=identity-backend -c identity-backend
kubectl logs -n arcloud -l app\.kubernetes\.io/name=keycloak -c keycloak
kubectl logs -n arcloud -l app\.kubernetes\.io/name=minio -c minio
kubectl logs -n arcloud -l app\.kubernetes\.io/name=nats -c nats
kubectl logs -n arcloud -l app\.kubernetes\.io/name=session-manager -c session-manager
kubectl logs -n arcloud -l app\.kubernetes\.io/name=mapping -c mapping
kubectl logs -n arcloud -l app\.kubernetes\.io/name=mapping -l app\.kubernetes\.io/component=worker -c mapping-worker
kubectl logs -n arcloud -l app\.kubernetes\.io/name=streaming -c streaming
kubectl logs -n arcloud -l app\.kubernetes\.io/name=space-proxy -c space-proxy
kubectl logs -n arcloud -l app\.kubernetes\.io/name=spatial-anchors -c spatial-anchors
Logs from the Istio ingress gateway (last 100 for each instance or follow the logs):
kubectl logs -n istio-system -l app=istio-ingressgateway --tail 100
kubectl logs -n istio-system -l app=istio-ingressgateway -f
Resource usage of the cluster nodes:
kubectl top nodes
If the usage of the CPU or memory is reaching 100%, the cluster has to be resized by either using bigger nodes or increasing their number.
Disk usage of persistent volumes:
kubectl exec -n arcloud minio-0 -c minio -- df -h /data
kubectl exec -n arcloud nats-0 -c nats -- df -h /data
kubectl exec -n arcloud postgresql-0 -c postgresql -- df -h /data
If the usage of one of the volumes is reaching 100%, resize it.
Finding out what is wrong
Please follow the steps below to find the cause of issues with the cluster or AR Cloud services:
Create a new directory for the output of the subsequent commands:
mkdir output
Check events of type Warning for all namespaces:
kubectl get events -A --field-selector type=Warning | tee output/events.log
Describe each pod that is listed above, e.g.:
kubectl describe pod -n arcloud streaming-0 | tee output/streaming-pod-details.log
kubectl describe pod -n istio-system istio-ingressgateway-b8cc646d4-rjdkk | tee output/istio-pod-details.log
Check the logs for each failing pod using the service name (check the command examples above), e.g.:
kubectl logs -n arcloud -l app\.kubernetes\.io/name=mapping -c mapping | tee output/mapping.log
kubectl logs -n istio-system -l app=istio-ingressgateway --tail 1000 | tee output/istio.log
Create an archive with all the results:
tar czf results.tgz output/
Check the suggestions above for solving the most common issues.
Otherwise, share the details with Customer Care using one of the methods listed below.
Support
In case you need help, please: