AR Cloud Virtual Machine Image Deployment (OVA, UTM)
The provided image contains all the necessary infrastructure and services pre-configured to manage and work with Magic Leap devices. This allows you to set up a Virtual Machine (VM) quickly and access the services without a complex deployment process. For this to work, certain compromises had to be made:
- The AR Cloud bundle needs to be installed when running the virtual machine for the first time
- High-availability for the services is disabled to limit the required resources
- The observability stack is not installed
The above limitations can be overcome by reconfiguring the infrastructure and services, but this requires additional steps.
Taking all the above into consideration, the provided image is not suitable for scalable and fault-tolerant deployments in production environments! Instead, it is a means of quickly testing the services and devices.
Download
The images are available on the Magic Leap 2 Developer Portal.
You must be logged in to the Developer Portal for these links to appear. You can log in by clicking the "person" icon in the upper-right corner of the window at the link above.
To download an image, you must accept the Software License Agreement.
Download the latest version of an image for the runtime environment of your choice. The OVA image supports the majority of the environments, except for MacBooks with Apple Silicon chipsets, in which case the UTM image should be used.
Verifying signatures
To verify a signed artifact, first install Cosign.
The general verification command of the downloaded image using its signature is as follows:
cosign verify-blob --key </path/to/public-key> --signature <signature> </path/to/downloaded/image>
The signature of the artifact can be found on Magic Leap 2 Developer Portal in the row corresponding to the downloaded file. The Cosign public key is available for download from Magic Leap AR Cloud.
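For example, assuming hypothetical file names for the image, its signature and the public key (adjust them to match your actual downloads):

cosign verify-blob --key cosign.pub --signature arcloud-1.2.3.ova.sig arcloud-1.2.3.ova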
Requirements
Virtualization Support
Make sure hardware-assisted virtualization is enabled for the host machine's CPU:
- Debian/Ubuntu
- Windows
- MacOS

Debian/Ubuntu

grep -cw vmx /proc/cpuinfo

The output should be greater than 0.
Windows

Using Command Prompt
Run Command Prompt as administrator and run:
systeminfo.exe
The Hyper-V Requirements should be shown at the end with all the values set to Yes.
If the Hyper-V Requirements display the message "A hypervisor has been detected. Features required for Hyper-V will not be displayed." instead of the actual requirements, it means a hypervisor is already running on the machine and it will prevent VirtualBox from using hardware-assisted virtualization.
Using PowerShell
Run PowerShell as administrator and run:
Get-ComputerInfo -property "HyperVRequirement*"
The output should be True for all of the properties.
If the output is empty, it means a hypervisor is already running on the machine and it will prevent VirtualBox from using hardware-assisted virtualization.
Verify that VirtualBox is able to use hardware virtualization
Run the following command:
vboxmanage list hostinfo
Make sure that the following parameters are set to yes:
Processor supports HW virtualization: yes
Processor supports nested HW virtualization: yes
In case one of the parameters is set to no, disable the hypervisor launch.
The following Windows features will no longer be available:
- Microsoft-Hyper-V-Hypervisor
- HypervisorPlatform
- Microsoft-Windows-Subsystem-Linux
- VirtualMachinePlatform
It is not possible to run multiple hypervisors at the same time, so disabling the hypervisor launch will prevent you from running WSL or Hyper-V Manager as they require the Hyper-V hypervisor.
Disable the Hypervisor Launch
In case the hypervisor is running, it should be disabled in the boot configuration:
bcdedit /set "{current}" hypervisorlaunchtype Off
The machine will need to be restarted for the changes to take effect.
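If you later need WSL or Hyper-V again, the hypervisor launch can be re-enabled the same way (a restart is required here as well):

bcdedit /set "{current}" hypervisorlaunchtype Auto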
MacOS

sysctl machdep.cpu.features | grep -cwi vmx
sysctl kern.hv_support

At least one of the commands should output 1.
If virtualization is not enabled, follow these steps to enable it:
- Generic Steps
- Windows
- HP
- Dell
- Restart your computer
- Enter BIOS while the computer is booting up
- Find the Virtualization Technology (VTx) setting, e.g. from different versions of BIOS:
- "Security -> System Security"
- "System Configuration -> Device Configuration"
- Enable the setting
- Save changes and boot your OS
Windows Instructions for enabling the Virtualization Technology (VTx).
HP Instructions for enabling the Virtualization Technology (VTx)
DELL Instructions for enabling the Virtualization Technology (VTx)
After enabling the Virtualization Technology (VTx) verify that it is now supported by your OS by re-running the corresponding command.
If the host machine does not support hardware-assisted virtualization, the virtual machine will not be able to run.
Resources
To use the virtual machine comfortably, the following resources are needed for the guest system:
- CPUs: 5
- Memory: 16 GiB
- Disk: 100 GiB
In case of performance issues, the resources can be increased after stopping the VM.
The resources mentioned above only include the virtual machine itself. If you are running it on a local machine, you need more resources for the operating system and other running software.
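For example, with VirtualBox the resources of a powered-off VM can be adjusted from the command line (a sketch assuming the VM is named arcloud-ova, as used elsewhere in this guide; memory is given in MiB):

vboxmanage modifyvm arcloud-ova --cpus 6 --memory 24576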
Firewall
The following ports need to be exposed to use the provided services:
- 80 - HTTP
- 443 - HTTPS
- 1883 - MQTT
- 8883 - MQTTS

Additionally, an SSH server is configured on the virtual machine and is running on port 22. The traffic on port 22 is forwarded to port 2222 when using VirtualBox or UTM.
Depending on the runtime environment, the firewall configuration might differ:
- Configure your local firewall (if you have one) when running on a local machine
- Configure a cloud firewall based on documentation from your cloud provider otherwise
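As an illustration, on a Linux host running ufw, the required ports could be opened as follows (a sketch; adapt to your firewall of choice):

# allow HTTP, HTTPS, MQTT and MQTTS traffic
sudo ufw allow 80,443,1883,8883/tcp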
Runtime Environments
The virtual machine image can be run on a variety of local or cloud environments. Choose from the following supported platforms:
| Environment | Platform | Image type |
| --- | --- | --- |
| Local | Linux - VirtualBox | OVA |
| Local | Windows - VirtualBox | OVA |
| Local | macOS Intel (x86_64) - VirtualBox | OVA |
| Local | macOS Apple Silicon (arm64) - UTM | UTM |
| Cloud | GCP - Compute Engine | OVA |
| Cloud | AWS - EC2 | OVA |
Local Machine
The virtual machine image can be run on a laptop/desktop computer or a server that is either not virtualized or that supports nested virtualization. This approach might be suitable for developers who need to run AR Cloud locally or when the services have to run inside a private network.
- Debian/Ubuntu
- Windows
- MacOS
Download VirtualBox for Linux. For Debian-based Linux distributions on amd64 CPUs, you can install VirtualBox with the following commands:
curl -sSL https://www.virtualbox.org/download/oracle_vbox_2016.asc | sudo gpg --dearmor --yes --output /usr/share/keyrings/oracle-virtualbox-2016.gpg
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/oracle-virtualbox-2016.gpg] http://download.virtualbox.org/virtualbox/debian $(lsb_release -cs) contrib" | sudo tee /etc/apt/sources.list.d/virtualbox.list
sudo apt update
sudo apt install virtualbox-7.0
VirtualBox Limitations
When running on Linux/MacOS using VirtualBox from a user account, ports below 1024 will not be forwarded by default due to lack of permissions. The VirtualBox binary has the setuid bit set, so setting capabilities on it will not work. Instead, it provides an override which allows forwarding the privileged ports.
When running VirtualBox in headless mode, set the following environment variable before starting the virtual machine:
export VBOX_HARD_CAP_NET_BIND_SERVICE=1
When using the graphical interface, add the above line to one of your user profile files (~/.profile, ~/.bash_profile or ~/.zprofile depending on your configuration), so it is set on login. Make sure to log out and log in again for the changes to take effect.
Alternatively, install socat and run it from the root account to forward traffic on these ports to the virtual machine:

When using an IP address and/or HTTP only:
sudo socat TCP-LISTEN:80,fork TCP:localhost:8080
When using a domain with a TLS certificate:
sudo socat TCP-LISTEN:443,fork TCP:localhost:8443
Make sure the socat process is running while you are using the image.
Importing the Appliance
- Start VirtualBox
- Select File > Import Appliance... from the menu
- Select the downloaded OVA file from your disk and click Next
- Uncheck the Import hard drives as VDI option
- Click Finish
- When the appliance has finished loading, select it from the left panel and click on Start
When the virtual machine starts, log in using the credentials provided below, select a deployment option and continue from there.
You can also run the imported virtual machine in headless mode:
vboxmanage startvm arcloud-ova --type=headless
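To confirm that a headless VM is up, list the running machines:

vboxmanage list runningvms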
Download VirtualBox for Windows.
Importing the Appliance
- Start VirtualBox
- Select File > Import Appliance... from the menu
- Select the downloaded OVA file from your disk and click Next
- Uncheck the Import hard drives as VDI option
- Click Finish
- When the appliance has finished loading, select it from the left panel and click on Start
When the virtual machine starts, log in using the credentials provided below, select a deployment option and continue from there.
You can also run the imported virtual machine in headless mode:
vboxmanage startvm arcloud-ova --type=headless
Intel Chip - VirtualBox
Newer MacBooks no longer use Intel architectures; they use "Apple Silicon" chipsets (M1 or M2) instead. If your Mac has one of these processors, VirtualBox will not run and you need to follow the instructions for UTM below.
Download VirtualBox for MacOS and Intel CPUs.
VirtualBox Limitations
When running on Linux/MacOS using VirtualBox from a user account, ports below 1024 will not be forwarded by default due to lack of permissions. The VirtualBox binary has the setuid bit set, so setting capabilities on it will not work. Instead, it provides an override which allows forwarding the privileged ports.
When running VirtualBox in headless mode, set the following environment variable before starting the virtual machine:
export VBOX_HARD_CAP_NET_BIND_SERVICE=1
When using the graphical interface, add the above line to one of your user profile files (~/.profile, ~/.bash_profile or ~/.zprofile depending on your configuration), so it is set on login. Make sure to log out and log in again for the changes to take effect.

Alternatively, install socat and run it from the root account to forward traffic on these ports to the virtual machine:

When using an IP address and/or HTTP only:
sudo socat TCP-LISTEN:80,fork TCP:localhost:8080
When using a domain with a TLS certificate:
sudo socat TCP-LISTEN:443,fork TCP:localhost:8443
Make sure the socat process is running while you are using the image.
Importing the Appliance
- Start VirtualBox
- Select File > Import Appliance... from the menu
- Select the downloaded OVA file from your disk and click Next
- Uncheck the Import hard drives as VDI option
- Click Finish
- When the appliance has finished loading, select it from the left panel and click on Start
When the virtual machine starts, log in using the credentials provided below, select a deployment option and continue from there.
You can also run the imported virtual machine in headless mode:
vboxmanage startvm arcloud-ova --type=headless
Apple Chip - UTM
UTM offers native support for running images on Apple Silicon-based hardware.
Download UTM

Open the .dmg image and follow the instructions to install UTM on your system

Extract the downloaded AR Cloud UTM file:
tar xzf arcloud-ova.utm.tgz
Open the extracted AR Cloud UTM file - the UTM app should import it automatically
Click on the Play button to start the virtual machine
When the virtual machine starts, log in using the credentials provided below, select a deployment option and continue from there.
Cloud Providers
In case local machines are not available or the services need to be publicly accessible, the virtual machine image can also be deployed to one of the supported cloud providers described below.
- GCP
- AWS
Make sure you have the Google Cloud CLI installed.
Check the GCP documentation or follow the steps below:
Prepare details about your GCP project and user account:
export GCP_PROJECT_ID=my-project-id
export GCP_PROJECT_NUMBER=1234567890
export GCP_USER_EMAIL=me@my-domain.com

Enable the Cloud Build API (this creates the Cloud Build service account automatically):
gcloud services enable cloudbuild.googleapis.com --project $GCP_PROJECT_ID
If your user account already has the roles/owner role on the project:

gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \
--member "user:$GCP_USER_EMAIL" \
--role 'roles/storage.objectAdmin'

Otherwise, grant the required IAM roles to your user account:
gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \
--member "user:$GCP_USER_EMAIL" \
--role 'roles/storage.admin'
gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \
--member "user:$GCP_USER_EMAIL" \
--role 'roles/viewer'
gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \
--member "user:$GCP_USER_EMAIL" \
--role 'roles/resourcemanager.projectIamAdmin'
gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \
--member "user:$GCP_USER_EMAIL" \
--role 'roles/cloudbuild.builds.editor'

to the Cloud Build service account:
gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \
--member "serviceAccount:$GCP_PROJECT_NUMBER@cloudbuild.gserviceaccount.com" \
--role 'roles/compute.admin'
gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \
--member "serviceAccount:$GCP_PROJECT_NUMBER@cloudbuild.gserviceaccount.com" \
--role 'roles/iam.serviceAccountUser'
gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \
--member "serviceAccount:$GCP_PROJECT_NUMBER@cloudbuild.gserviceaccount.com" \
--role 'roles/iam.serviceAccountTokenCreator'

to the Compute Engine service account:
gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \
--member "serviceAccount:$GCP_PROJECT_NUMBER-compute@developer.gserviceaccount.com" \
--role 'roles/compute.storageAdmin'
gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \
--member "serviceAccount:$GCP_PROJECT_NUMBER-compute@developer.gserviceaccount.com" \
--role 'roles/storage.objectAdmin'
Create a GCS bucket:
export GCP_BUCKET_NAME=my-bucket
export GCP_BUCKET_LOCATION=us
gcloud storage buckets create gs://$GCP_BUCKET_NAME --project $GCP_PROJECT_ID --location $GCP_BUCKET_LOCATION

Upload the OVA image to the GCS bucket:
export GCP_OVA_IMAGE='arcloud-1.2.3.ova'
gsutil -o GSUtil:parallel_composite_upload_threshold=250M \
-o GSUtil:parallel_composite_upload_component_size=50M \
cp $GCP_OVA_IMAGE gs://$GCP_BUCKET_NAME

Create a new Compute Engine instance based on the imported OVA image:
export GCP_INSTANCE_NAME=my-instance
export GCP_ZONE=us-central1-c
gcloud compute instances import $GCP_INSTANCE_NAME \
--source-uri gs://$GCP_BUCKET_NAME/$GCP_OVA_IMAGE \
--project $GCP_PROJECT_ID \
--zone $GCP_ZONE

Make sure the necessary firewall rules are configured.
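For reference, a minimal firewall rule opening the required ports might look like this (a sketch using a hypothetical rule name; restrict source ranges and use target tags for production setups):

gcloud compute firewall-rules create arcloud-allow \
--project $GCP_PROJECT_ID \
--allow tcp:80,tcp:443,tcp:1883,tcp:8883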
When importing a new image in the same project, you can start from step 6, but make sure to export all the variables.
Make sure you have the AWS CLI installed.
Check the AWS documentation or follow the steps below:
Prepare details about your AWS account:
export AWS_ACCOUNT_ID=123456789012
The account ID can be obtained by running:
aws sts get-caller-identity --query Account --output text
Create an S3 bucket:
export AWS_BUCKET_NAME=my-bucket
export AWS_BUCKET_REGION=us-east-1
aws s3api create-bucket --bucket $AWS_BUCKET_NAME --region $AWS_BUCKET_REGION --acl private

Bucket Region

When importing an OVA image, the S3 bucket has to be in the same region as the AWS AMI image that is created. Adjust the bucket region to the one where the EC2 instance should be running.
Grant the required permissions:
Create a service role with a trust relationship document:
cat >vmimport-trust-policy.json <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "vmie.amazonaws.com"
},
"Action": "sts:AssumeRole",
"Condition": {
"StringEquals": {
"sts:Externalid": "vmimport",
"aws:SourceAccount": "$AWS_ACCOUNT_ID"
},
"ArnLike": {
"aws:SourceArn": "arn:aws:vmie:*:$AWS_ACCOUNT_ID:*"
}
}
}
]
}
EOF
aws iam create-role --role-name vmimport \
--assume-role-policy-document "file://vmimport-trust-policy.json"

Attach a policy to the created role:
cat >vmimport-role-policy.json <<EOF
{
"Version":"2012-10-17",
"Statement":[
{
"Effect": "Allow",
"Action": [
"s3:GetBucketLocation",
"s3:GetObject",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::$AWS_BUCKET_NAME",
"arn:aws:s3:::$AWS_BUCKET_NAME/*"
]
},
{
"Effect": "Allow",
"Action": [
"ec2:ModifySnapshotAttribute",
"ec2:CopySnapshot",
"ec2:RegisterImage",
"ec2:Describe*"
],
"Resource": "*"
}
]
}
EOF
aws iam put-role-policy --role-name vmimport \
--policy-name vmimport \
--policy-document "file://vmimport-role-policy.json"

Multiple Buckets

The role policy above will overwrite any previously attached ones. If you would like to import OVA images in multiple regions, create separate buckets for each region and include them in the Resource list above.
Upload the OVA image to the S3 bucket:
export AWS_OVA_IMAGE='arcloud-1.2.3.ova'
aws s3 cp $AWS_OVA_IMAGE s3://$AWS_BUCKET_NAME

Import the OVA image as an AWS AMI image:
cat >ec2-containers.json <<EOF
[
{
"Description": "AR Cloud OVA",
"Format": "ova",
"Url": "s3://$AWS_BUCKET_NAME/$AWS_OVA_IMAGE"
}
]
EOF
aws ec2 import-image --description "arcloud-ova" --disk-containers "file://ec2-containers.json"

Note down the import task ID returned by the command.
Monitor the status of the import task until it reaches the completed status, using the ID from the previous step:

aws ec2 describe-import-image-tasks --import-task-ids import-ami-1234567890abcdef0
Find the ID of the imported image (use the date to find the correct image):
aws ec2 describe-images \
--owners $AWS_ACCOUNT_ID \
--filters 'Name=name,Values=import-ami*' \
--query 'Images[*].[ImageId,CreationDate]' \
--output text
export AWS_IMAGE_ID=ami-0abcdef1234567890

Generate an updated block device mappings file based on the imported image and modify it to use a Provisioned IOPS SSD volume:
aws ec2 describe-images --image-ids $AWS_IMAGE_ID --query 'Images[0].BlockDeviceMappings' > block-device-mappings.json
sed -ri 's/^(\s+)("VolumeType": )"(.+)"/\1\2"io2",\n\1"Iops": 2000/' block-device-mappings.json

GNU sed

The above command requires GNU sed to be installed on your system.
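On macOS, GNU sed can be installed with Homebrew (it is then available as gsed):

brew install gnu-sed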
Run an EC2 instance using the imported image and the custom volume configuration file:
aws ec2 run-instances \
--image-id $AWS_IMAGE_ID \
--instance-type c5d.2xlarge \
--associate-public-ip-address \
--ebs-optimized \
--block-device-mappings file://block-device-mappings.json

Make sure the necessary firewall rules are configured.
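For reference, ingress rules for the required ports could be added to the instance's security group like this (a sketch; sg-0123456789abcdef0 is a placeholder and 0.0.0.0/0 should be narrowed down where possible):

export AWS_SG_ID=sg-0123456789abcdef0
for port in 80 443 1883 8883; do
aws ec2 authorize-security-group-ingress --group-id $AWS_SG_ID --protocol tcp --port $port --cidr 0.0.0.0/0
done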
When importing a new image in the same account, you can start from step 4, but make sure to export all the variables.
Credentials
The virtual machine includes a dedicated arcloud user with a password set to changeme. The password is set to expire and needs to be changed during the first login.
Password access should be disabled entirely for all publicly accessible deployments (e.g. on GCP or AWS). Key-based authentication should be used instead.
To do this, create keys for your user accounts and modify /etc/ssh/sshd_config to include:
PasswordAuthentication no
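For example, a minimal sketch for a local VirtualBox/UTM deployment (SSH forwarded to port 2222) with an existing key pair:

# copy your public key to the arcloud user on the VM
ssh-copy-id -p 2222 arcloud@<ip-address>
# after setting PasswordAuthentication no inside the VM, reload the SSH daemon
# (the service may be named ssh or sshd depending on the distribution)
sudo systemctl reload sshd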
Accessing the Running Virtual Machine
To access the virtual machine, the IP address of your machine is needed.
The IP address might differ depending on the target platform:
- for local machines - the loopback interface address (127.0.0.1) or the address of another network interface on the machine (e.g. 192.168.1.101)
- for cloud providers - the configured/assigned public IP of the instance
To list the available IPv4 addresses on your machine/instance, try the following command:
- Debian/Ubuntu
- Windows
- MacOS
ip -br a | awk '/UP / { print $1, $3 }'
ipconfig /all | findstr /i "ipv4"
scutil --nwi | awk '/flags/ { i=$1; next } /address/ { print i, $3 }'
Verify that your Magic Leap device has an IP address assigned from the same subnet as your machine, or that the device is able to reach one of the IP addresses from the list above (i.e. your router allows connectivity between different subnets).
Apart from using the graphical interface directly, you can also access the machine using SSH (this makes it easier to copy the generated credentials):
to a local virtual machine:
ssh arcloud@<ip-address> -p 2222
e.g.:
ssh arcloud@192.168.1.101 -p 2222
to a cloud instance:
ssh arcloud@<ip-address>
e.g.:
ssh arcloud@1.2.3.4
Deployment Options
The remaining commands in this guide will be executed from the instantiated virtual machine, all from the home (~/) directory.
The image can be configured to use an IP address or the default arcloud-ova.local domain with HTTP only. Alternatively, a custom domain can be used, which will either trigger the creation of a TLS certificate or use one that is already available.
Simple Deployment - HTTP
With this approach we limit the configuration needed to access the services, at the cost of lowered security.
This type of deployment should only be used inside private networks without access from the Internet. All the traffic between clients and the AR Cloud instance will be unencrypted and can be easily accessed by an attacker.
Option 1. Use an IP Address Only
If you want to be able to connect to the machine from other devices, the services need to be configured to use an IP address directly.
Run the set_ip.sh script from inside the virtual machine and provide your IP address as argument:
./set_ip.sh --accept-sla <ip-address>
e.g.:
./set_ip.sh --accept-sla 192.168.1.101
Make sure to use a static IP address - one that is not assigned dynamically to your host machine. Otherwise, the above command has to be run and the device registered again every time the IP address changes.
Option 2. Use the Default Domain and Configure Local DNS Overrides
In case you want to use the default domain, run the set_default.sh script from inside the virtual machine:
./set_default.sh --accept-sla
To be able to access the services, set the IP address of the machine where the image is deployed as the target of the pre-configured domain. The only requirement is that this IP address is reachable from the client machine in a browser.
Add the following to the bottom of your /etc/hosts file (may require sudo):
# arcloud-ova
<IP-address> arcloud-ova.local
<IP-address> smtp.arcloud-ova.local
This will only make the services available on the devices that have the override configured.
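To confirm the override is picked up on a Linux client (macOS uses a different resolver cache), e.g.:

getent hosts arcloud-ova.local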
Advanced Deployment - HTTPS
This approach requires a custom domain and additional configuration in the DNS zone and on the firewall, but is a lot more secure compared to the previous options.
This is the recommended approach for all publicly accessible deployments (e.g. on GCP or AWS).
Option 1. Use a Custom Domain and Automatically Generate the Certificate
This allows the services to use a custom domain and issue a TLS certificate automatically by using cert-manager with an HTTP challenge.
Point your custom domain to the IP address where the virtual machine is available and make sure that all the ports mentioned above are accessible.
Run the set_domain.sh script from inside the virtual machine and provide your domain as argument:

./set_domain.sh --accept-sla <domain>
e.g.:
./set_domain.sh --accept-sla my.domain.com
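Before running the script, it is worth checking that the domain already resolves to the expected IP address, since the HTTP challenge will fail otherwise, e.g.:

dig +short my.domain.com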
Option 2. Use a Custom Domain and an External Load Balancer with a Certificate
The custom domain is configured on an external Load Balancer that already has a certificate attached for that domain. Traffic from the Internet to the load balancer is encrypted using TLS with a certificate issued by the cloud provider and the load balancer forwards the traffic to AR Cloud using TLS with a self-signed certificate issued by cert-manager.
Follow the instructions on how to configure an external load balancer.
Set the variables needed for the next commands inside the virtual machine:
. ./constants.sh
Modify the Istio configuration to include headers from the external load balancer:
cat $BUNDLE_DIR/setup/istio.yaml | envsubst | istioctl install -y --set meshConfig.defaultConfig.gatewayTopology.numTrustedProxies=2 -f -
kubectl rollout restart -n istio-system deployment/istio-ingressgateway

Modify the issuer to issue a self-signed certificate:
yq -i 'del(.spec.acme) | .spec.selfSigned={}' $BUNDLE_DIR/setup/issuer.yaml
Run the set_domain.sh script and provide your domain as argument:

./set_domain.sh --accept-sla <domain>
e.g.:
./set_domain.sh --accept-sla my.domain.com
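Once the script finishes, the state of the issued certificate can be checked through cert-manager's resources (assuming its CRDs are installed by the bundle), e.g.:

kubectl get certificate -A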
Verify Installation
Once the AR Cloud deployment completes, the deployment script will print out the cluster information similar to:
------------------------------
Cluster Installation (arcloud)
------------------------------
Enterprise Web:
--------------
http://<DOMAIN>/
Username: aradmin
Password: <base64-encoded string>
Keycloak:
---------
http://<DOMAIN>/auth/
Username: admin
Password: <base64-encoded string>
MinIO:
------
kubectl -n arcloud port-forward svc/minio 8082:81
http://127.0.0.1:8082/
Username: <base64-encoded string>
Password: <base64-encoded string>
PostgreSQL:
------
kubectl -n arcloud port-forward svc/postgresql 5432:5432
psql -h 127.0.0.1 -p 5432 -U postgres -W
Username: postgres
Password: <base64-encoded string>
Network:
--------
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-system istio-ingressgateway LoadBalancer <IPv4> <IPv4> 80:31456/TCP,443:32737/TCP,15021:31254/TCP,1883:30231/TCP,8883:32740/TCP 1d
Log in to the Enterprise Console
- Open the Enterprise Console URL (http://<DOMAIN>/) in a browser
- Enter the credentials for Enterprise Web provided by the deployment script
- Verify the successful login
Register an ML2 device
Web console
Perform the following steps using the web-based console:
- Log in to the Enterprise Console
- Select Devices from the top menu
- Click Configure to display a QR code unique for your AR Cloud instance
ML2 device
Perform the following steps from within your ML2 device:
- Open the Settings app
- Select Perception
- Select the QR code icon next to AR Cloud
- Scan the QR code displayed in the web console
- Wait for the process to finish and click on the Login button
- Enter the user account credentials in the ML2 device web browser
The Enterprise Console should show the registered device on the list.
Display Cluster Information
If you ever need to display the cluster information again, run the following script:
./show_info.sh --accept-sla
Preserving the Virtual Machine State
The virtual machine is configured to preserve all the data created, changes made and configuration set during its usage (e.g. registered devices, generated maps).
For this to work, it needs to be powered off safely, just as a physical machine would be. To do so, connect to the virtual machine using SSH and run the following command in the terminal:
sudo poweroff
It might take around 2 minutes to stop all the services and turn off the virtual machine completely.
If you shut the virtual machine off from VirtualBox, UTM or the cloud vendor interface, or do not wait until it closes, your data might be lost.
Troubleshooting
Status Page
Once deployed, you can use the Enterprise Console to check the status of each AR Cloud service. This page can be accessed via the "AR Cloud Status" link in the navigation menu or through the following URL path:
e.g.: http://192.168.1.101/ar-cloud-status
An external health check can be configured to monitor AR Cloud services with the following endpoints:
| Service | URL | Response |
| --- | --- | --- |
| Health Check (General) | /api/identity/v1/healthcheck | {"status":"ok"} |
| Mapping | /api/mapping/v1/healthz | {"status":"up","version":"<version>"} |
| Session Manager | /session-manager/v1/healthz | {"status":"up","version":"<version>"} |
| Streaming | /streaming/v1/healthz | {"status":"up","version":"<version>"} |
| Spatial Anchors | /spatial-anchors/v1/healthz | {"status":"up","version":"<version>"} |
| User Identity | /identity/v1/healthz | {"status":"up","version":"<version>"} |
| Device Gateway | /device-gateway/v1/healthz | {"status":"up","version":"<version>"} |
| Events | /events/v1/healthz | {"status":"up","version":"<version>"} |
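For example, an external monitor could poll the general health check like this (arcloud-ova.local assumed as the domain):

curl -fsS http://arcloud-ova.local/api/identity/v1/healthcheck
# expected response: {"status":"ok"}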
Unable to complete the installation of the cluster services
Depending on the service that is failing to install, the cause of the issue might be different:
- postgresql, minio and nats are the first services being installed, are all Stateful Sets and require persistent volumes:
  - a problem with the storage provisioner (the persistent volumes are not created) - a local path provisioner is used by default, so the VM might not have enough space to create the volumes
  - there is insufficient space available in the volumes - resize the volumes
- keycloak is the first service that requires database access - reinitialize the database
- mapping and streaming both use big container images and require significant resources:
  - virtualization is not enabled correctly - enable virtualization
  - unable to extract the images within the default timeout of 5 minutes - use a faster disk supporting at least 2k IOPS
  - insufficient resources to start the containers - increase the number of CPUs or size of the memory
Services are unable to start, because one of the volumes is full
If one of the Stateful Sets using persistent volumes (nats, minio, postgresql) is unable to run correctly, it might mean the volume is full and needs to be resized.
Using minio as an example, follow the steps below to resize the data-minio-0 persistent volume claim:
Allow volume resizing for the default storage class:
kubectl patch sc local-path -p '{"allowVolumeExpansion": true}'
Resize the minio volume:

kubectl patch pvc data-minio-0 -n arcloud -p '{"spec":{"resources":{"requests":{"storage":"100Gi"}}}}'
Track the progress of the resize operation (it will not succeed if there are no nodes available in the cluster):
kubectl get events -n arcloud --field-selector involvedObject.name=data-minio-0 -w
Verify that the new size is visible:
kubectl get pvc -n arcloud data-minio-0
Make sure the pod is running:
kubectl get pod -n arcloud minio-0
Check the disk usage of the volume on a running pod:
kubectl exec -n arcloud minio-0 -c minio -- df -h /data
The installation of keycloak fails

This might happen when the database deployment was reinstalled, but the database itself has not been updated. Usually this can be detected by the installation failing when installing keycloak. It is caused by the passwords in the secrets not matching the ones for the users in the database. The database needs to be reinitialized to resolve the problem.
This will remove all the data in the database!
In case the problem occurred during the initial installation, it is okay to proceed. Otherwise, please contact Magic Leap support to make sure none of your data is lost.
Uninstall postgresql:

helm uninstall -n arcloud postgresql
Delete the persistent volume for the database:
kubectl delete pvc -n arcloud data-postgresql-0
Run the installation again using the process described above.
Problems accessing the Enterprise Console
Some content might have been cached in your web browser.
Open the developer console and disable cache (that way everything gets refreshed):
- Chrome (Disable cache):
- Firefox (Disable HTTP Cache): https://firefox-source-docs.mozilla.org/devtools-user/settings/index.html
Alternatively, use a guest/separate user profile:
- Chrome: https://support.google.com/chrome/answer/6130773
- Firefox: https://support.mozilla.org/en-US/kb/profile-manager-create-remove-switch-firefox-profiles
Helpful commands
K9s
K9s provides a terminal UI to interact with your Kubernetes clusters.
In case you want to easily manage the cluster resources, install K9s:
- Debian/Ubuntu
- MacOS
k9s_version=$(curl -sSLH 'Accept: application/json' https://github.com/derailed/k9s/releases/latest | jq -r .tag_name)
k9s_archive=k9s_Linux_amd64.tar.gz
curl -sSLO https://github.com/derailed/k9s/releases/download/$k9s_version/$k9s_archive
sudo tar Cxzf /usr/local/bin $k9s_archive k9s
brew install derailed/k9s/k9s
Details about using K9s are available in the official docs.
Status of the cluster and services
List of pods including their status, restart count, IP address and assigned node:
kubectl get pods -n arcloud -o wide
List of pods that are failing:
kubectl get pods -n arcloud --no-headers | grep -Ei 'error|crashloopbackoff'
List of pods including the ready state, type of owner resources and container termination reasons:
kubectl get pods -n arcloud -o 'custom-columns=NAME:.metadata.name,READY:.status.conditions[?(@.type=="Ready")].status,OWNERS:.metadata.ownerReferences[*].kind,TERMINATION REASONS:.status.containerStatuses[*].state.terminated.reason'
Show details about a pod:
kubectl describe pod -n arcloud name-of-the-pod
e.g. for the first instance of the streaming service:
kubectl describe pod -n arcloud streaming-0
List of all events for the arcloud namespace:
kubectl get events -n arcloud
List of events of the specified type (only warnings or regular events):
kubectl get events -n arcloud --field-selector type=Warning
kubectl get events -n arcloud --field-selector type=Normal
List of events for the specified resource kind:
kubectl get events -n arcloud --field-selector involvedObject.kind=Pod
kubectl get events -n arcloud --field-selector involvedObject.kind=Job
List of events for the specified resource name (e.g. for a pod that is failing):
kubectl get events -n arcloud --field-selector involvedObject.name=some-resource-name
e.g. for the first instance of the streaming service:
kubectl get events -n arcloud --field-selector involvedObject.name=streaming-0
Logs from the specified container of one of the AR Cloud services:
kubectl logs -n arcloud -l app.kubernetes.io/name=device-gateway -c device-gateway
kubectl logs -n arcloud -l app.kubernetes.io/name=enterprise-console-web -c enterprise-console-web
kubectl logs -n arcloud -l app.kubernetes.io/name=events -c events
kubectl logs -n arcloud -l app.kubernetes.io/name=identity-backend -c identity-backend
kubectl logs -n arcloud -l app.kubernetes.io/name=keycloak -c keycloak
kubectl logs -n arcloud -l app.kubernetes.io/name=minio -c minio
kubectl logs -n arcloud -l app.kubernetes.io/name=nats -c nats
kubectl logs -n arcloud -l app.kubernetes.io/name=session-manager -c session-manager
kubectl logs -n arcloud -l app.kubernetes.io/name=mapping -c mapping
kubectl logs -n arcloud -l app.kubernetes.io/name=mapping -l app.kubernetes.io/component=worker -c mapping-worker
kubectl logs -n arcloud -l app.kubernetes.io/name=streaming -c streaming
kubectl logs -n arcloud -l app.kubernetes.io/name=space-proxy -c space-proxy
kubectl logs -n arcloud -l app.kubernetes.io/name=spatial-anchors -c spatial-anchors
Logs from the Istio ingress gateway (last 100 for each instance or follow the logs):
kubectl logs -n istio-system -l app=istio-ingressgateway --tail 100
kubectl logs -n istio-system -l app=istio-ingressgateway -f
Resource usage of the cluster nodes:
kubectl top nodes
If the usage of the CPU or memory is reaching 100%, the cluster has to be resized by either using bigger nodes or increasing their number.
Disk usage of persistent volumes:
kubectl exec -n arcloud minio-0 -c minio -- df -h /data
kubectl exec -n arcloud nats-0 -c nats -- df -h /data
kubectl exec -n arcloud postgresql-0 -c postgresql -- df -h /data
If the usage of one of the volumes is reaching 100%, resize it.
Disk usage inside the VM
Total disk usage for the whole VM:
df -h /
If the disk usage is reaching 100%, stop the VM and resize its disk.
Disk usage of each root directory:
sudo du -hd1 /
Disk usage of the cluster directories:
sudo du -hd1 /var/lib/rancher/k3s
Finding out what is wrong
Please follow the steps below to find the cause of issues with the cluster or AR Cloud services:
Create a new directory for the output of the subsequent commands:
mkdir output
Check events for all namespaces of type Warning:

kubectl get events -A --field-selector type=Warning | tee output/events.log
Describe each pod that is listed above, e.g.:
kubectl describe pod -n arcloud streaming-0 | tee output/streaming-pod-details.log
kubectl describe pod -n istio-system istio-ingressgateway-b8cc646d4-rjdkk | tee output/istio-pod-details.log

Check the logs for each failing pod using the service name (check the command examples above), e.g.:
kubectl logs -n arcloud -l app.kubernetes.io/name=mapping -c mapping | tee output/mapping.log
kubectl logs -n istio-system -l app=istio-ingressgateway --tail 1000 | tee output/istio.log

Create an archive with all the results:
tar czf results.tgz output/
Check the suggestions above for solving the most common issues.
Otherwise, share the details with Customer Care using one of the methods listed below.
Support
In case you need help, please: