Installation¶
The standard installation uses Helm 3 and the robusta-cli, but alternative methods are described below.
Configuring and installing Robusta takes 97.68 seconds on a 10-node cluster [1]. You can also install on Colima or KIND. Uninstalling takes one command, so go ahead and try!
Have questions?
Ask us on Slack or open a GitHub issue
We will now configure Robusta in your cluster. For this we need to install Robusta, connect at least one destination ("sink"), and enable at least one source ("trigger").
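For orientation, here is a rough sketch of what those two halves look like in Robusta's Helm values file. The field names are modeled on Robusta's documented examples, but robusta gen-config generates the real file for you, so treat this as illustration only:
cat <<'EOF' > values-sketch.yaml   # hypothetical scratch file, not the real config
# destination ("sink"): where notifications are sent
sinksConfig:
- slack_sink:
    name: main_slack_sink
    slack_channel: alerts       # placeholder channel
    api_key: <SLACK_API_KEY>    # placeholder; gen-config fills this in
# source ("trigger"): what fires notifications
customPlaybooks:
- triggers:
  - on_pod_crash_loop: {}      # illustrative trigger name
  actions:
  - report_crash_loop: {}      # illustrative action name
EOF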

Creating the config file¶
To configure Robusta, the Robusta CLI is required. Choose one of the installation methods below.
Installation Methods
pip install -U robusta-cli --no-cache
Common Errors
- Python 3.7 or higher is required.
- If you are using a system such as macOS that includes both Python 2 and Python 3, run pip3 instead of pip, or use the virtual-environment sketch below.
- Errors about tiller mean you are running Helm 2, not Helm 3.
- For Windows, please use WSL.
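If your machine has a tangle of Python versions, a dedicated virtual environment sidesteps the pip/pip3 and PATH issues above. A quick sketch (the directory name is arbitrary):
python3 -m venv robusta-venv
source robusta-venv/bin/activate   # puts the venv's pip and robusta on PATH
pip install -U robusta-cli --no-cache
robusta version                    # verify the CLI works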
Download the robusta script and give it executable permissions:
curl -fsSL -o robusta https://docs.robusta.dev/master/_static/robusta
chmod +x robusta
Use the script, for example:
./robusta version
Common Errors
- A running Docker daemon is required.
Generate a Robusta configuration. This will set up Slack and other integrations. We highly recommend enabling the cloud UI so you can see all features in action.
If you'd like to send Robusta messages to additional destinations (Discord, Telegram, etc.), see Sink configuration.
robusta gen-config
Robusta on Minikube
We don't recommend installing Robusta on Minikube because of a recent issue with minikube. More details here.
Robusta not in PATH
If you get "command not found: robusta", see Common errors.
Save generated_values.yaml somewhere safe. This is your Helm values.yaml file.
Installing on multiple clusters
Use the same generated_values.yaml for all your clusters (dev, prod, etc.). There's no need to run gen-config again.
Standard Installation¶
Add Robusta's chart repository:
helm repo add robusta https://robusta-charts.storage.googleapis.com && helm repo update
Specify your cluster's name and install Robusta using Helm. On some clusters this can take a while [2], so don't panic if it appears stuck:
helm install robusta robusta/robusta -f ./generated_values.yaml \
--set clusterName=<YOUR_CLUSTER_NAME>
Test clusters (KIND, Colima, etc.) tend to have fewer resources; adding --set isSmallCluster=true lowers Robusta's resource requests accordingly:
helm install robusta robusta/robusta -f ./generated_values.yaml \
--set clusterName=<YOUR_CLUSTER_NAME> \
--set isSmallCluster=true
On GKE Autopilot, some kube-prometheus-stack components must be disabled due to Autopilot's restrictions when installing Robusta bundled with kube-prometheus-stack:
helm install robusta robusta/robusta -f ./generated_values.yaml \
--set clusterName=<YOUR_CLUSTER_NAME> \
--set kube-prometheus-stack.coreDns.enabled=false \
--set kube-prometheus-stack.kubeControllerManager.enabled=false \
--set kube-prometheus-stack.kubeDns.enabled=false \
--set kube-prometheus-stack.kubeEtcd.enabled=false \
--set kube-prometheus-stack.kubeProxy.enabled=false \
--set kube-prometheus-stack.kubeScheduler.enabled=false \
--set kube-prometheus-stack.nodeExporter.enabled=false \
--set kube-prometheus-stack.prometheusOperator.kubeletService.enabled=false
Note
Sensitive configuration values can be stored in Kubernetes secrets. See Configuration secrets guide.
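As a minimal sketch of that approach's first step, you might create such a secret like this (the secret and key names here are placeholders, not names the chart expects; the guide covers how to reference them):
kubectl create secret generic robusta-secrets \
  --from-literal=slackApiKey='<SLACK_API_KEY>'   # placeholder value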
Verify that the two Robusta pods are running with no errors in the logs:
kubectl get pods -A | grep robusta
robusta logs
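If everything is healthy, the first command's output should list two Running pods, a runner and a forwarder, roughly like this (names and hashes are illustrative):
robusta-forwarder-89f44d49b-xxxxx   1/1   Running   0   2m
robusta-runner-8f4558f9b-xxxxx      1/1   Running   0   2m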
Seeing Robusta in action¶
By default, Robusta sends notifications when Kubernetes pods crash.
Create a crashing pod:
kubectl apply -f https://gist.githubusercontent.com/robusta-lab/283609047306dc1f05cf59806ade30b6/raw
Verify that the pod is actually crashing:
$ kubectl get pods -A
NAME                      READY   STATUS             RESTARTS   AGE
crashpod-64d8fbfd-s2dvn   0/1     CrashLoopBackOff   1          7s
Once the pod has reached two restarts, check your Slack channel for a message about the crashing pod.
Open the Robusta UI (if you enabled it) and look for the same message there.
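To watch the restart count climb in the meantime, one option (plain kubectl; Ctrl-C to stop):
kubectl get pods -A --watch | grep crashpod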
Clean up the crashing pod:
kubectl delete deployment crashpod
Installing a second cluster¶
When installing a second cluster on the same account, there's no need to run robusta gen-config again.
Using your existing generated_values.yaml and the new clusterName, run:
helm install robusta robusta/robusta -f ./generated_values.yaml --set clusterName=<YOUR_CLUSTER_NAME> # --set isSmallCluster=true
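Because the same generated_values.yaml works everywhere, the rollout can even be scripted. A minimal sketch, assuming your kubectl contexts are named after your clusters (the context names below are hypothetical):
for ctx in dev-cluster prod-cluster; do
  helm --kube-context "$ctx" install robusta robusta/robusta \
    -f ./generated_values.yaml --set clusterName="$ctx"
done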
Where is my generated_values.yaml?
If you have lost your generated_values.yaml file, you can extract it from any cluster running Robusta.
In that case, clusterName and isSmallCluster may already be in generated_values.yaml. Make sure to remove them before installing on the new cluster:
helm get values -o yaml robusta | grep -v clusterName: | grep -v isSmallCluster: > generated_values.yaml
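The grep filtering assumes each of those keys sits on its own line. If you have yq v4 installed, an equivalent but more robust variant (a sketch, not part of the official docs):
helm get values -o yaml robusta | yq 'del(.clusterName) | del(.isSmallCluster)' > generated_values.yaml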
Next Steps¶
Define your first automation
Add your first Prometheus enrichment
Footnotes
[1] See this great video on YouTube where a community member installs Robusta with a stopwatch. If you beat his time by more than 30% and document it, we'll send you a Robusta mug too.
[2] AWS EKS, we're looking at you!
Additional Installation Methods¶
Installing with GitOps
Follow the instructions above to generate generated_values.yaml. Commit it to git and use ArgoCD or your favorite tool to install.
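For example, here is a hedged sketch of an Argo CD Application that installs the chart, with inline values standing in for your committed generated_values.yaml (with Argo CD 2.6+ multi-source apps you can reference the file from your own repo instead):
cat <<'EOF' | kubectl apply -f -
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: robusta
  namespace: argocd               # assumes Argo CD runs in this namespace
spec:
  project: default
  source:
    repoURL: https://robusta-charts.storage.googleapis.com
    chart: robusta
    targetRevision: "*"           # pin a specific chart version in production
    helm:
      values: |
        clusterName: my-cluster   # placeholder; paste generated_values.yaml here
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated: {}
EOF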
Installing without the Robusta CLI
Using the CLI is totally optional. If you prefer, you can skip it and fetch the default Helm values from the Helm chart:
helm repo add robusta https://robusta-charts.storage.googleapis.com && helm repo update
helm show values robusta/robusta
Most values are documented in the Configuration Guide.
Do not use helm/robusta/values.yaml from the GitHub repo. It has some empty placeholders which are replaced during our release process.
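Putting the pieces together, a CLI-free install might look like this (a sketch; you still need to add at least one sink to the values file by hand):
helm repo add robusta https://robusta-charts.storage.googleapis.com && helm repo update
helm show values robusta/robusta > generated_values.yaml
# edit generated_values.yaml: configure your sinks, the cloud UI, etc.
helm install robusta robusta/robusta -f ./generated_values.yaml \
--set clusterName=<YOUR_CLUSTER_NAME>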
Installing in a different namespace
Create a namespace robusta and install Robusta in the new namespace using:
helm install robusta robusta/robusta -f ./generated_values.yaml -n robusta --create-namespace
Verify that Robusta installed two deployments in the robusta namespace:
kubectl get pods -n robusta
Installing on OpenShift
You will need to run one additional command:
oc adm policy add-scc-to-user anyuid -z robusta-runner-service-account
It's possible to reduce the permissions further. Please feel free to open a PR suggesting something more minimal.