Victoria Metrics
This guide walks you through configuring Victoria Metrics with Robusta.
You will need to configure two integrations: one to send alerts to Robusta and another to let Robusta query metrics and create silences.
Send Alerts to Robusta
Add the following to your Victoria Metrics Alertmanager configuration (e.g., Helm values file or VMAlertmanagerConfig CRD):
receivers:
  - name: 'robusta'
    webhook_configs:
      - url: 'http://<ROBUSTA-HELM-RELEASE-NAME>-runner.<NAMESPACE>.svc.cluster.local/api/alerts'
        send_resolved: true # (3)
route: # (1)
  routes:
    - receiver: 'robusta'
      group_by: [ '...' ]
      group_wait: 1s
      group_interval: 1s
      matchers:
        - severity =~ ".*"
      repeat_interval: 4h
      continue: true # (2)
1. Put Robusta's route as the first route, to guarantee it receives alerts. If you can't do so, you must make sure all previous routes have continue: true set.
2. Keep sending alerts to receivers defined after Robusta.
3. Important, so Robusta knows when alerts are resolved.
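If you manage Alertmanager through the VictoriaMetrics operator rather than plain Helm values, the same receiver and route can be declared in a VMAlertmanagerConfig resource. The sketch below is illustrative only: it assumes the operator accepts Alertmanager-style snake_case fields, and the resource name and namespace are placeholders you should adjust.

# Illustrative sketch: the Robusta receiver and route expressed as a
# VMAlertmanagerConfig resource for the VictoriaMetrics operator.
# Field names assume the operator's Alertmanager-style (snake_case) schema;
# verify against your operator version before applying.
apiVersion: operator.victoriametrics.com/v1beta1
kind: VMAlertmanagerConfig
metadata:
  name: robusta-webhook     # placeholder name
  namespace: monitoring     # placeholder namespace
spec:
  route:
    routes:
      - receiver: 'robusta'
        group_by: [ '...' ]
        group_wait: 1s
        group_interval: 1s
        matchers:
          - severity =~ ".*"
        repeat_interval: 4h
        continue: true
  receivers:
    - name: 'robusta'
      webhook_configs:
        - url: 'http://<ROBUSTA-HELM-RELEASE-NAME>-runner.<NAMESPACE>.svc.cluster.local/api/alerts'
          send_resolved: true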
Verify it Works
Send a dummy alert to AlertManager:
If you have the Robusta CLI installed, you can send a test alert using the following command:
robusta demo-alert
Alternatively, in the Robusta UI, go to the "Clusters" tab, choose the right cluster and click "Simulate Alert". Then:

1. Check "Send alert with no resource".
2. Provide a name for the alert in the "Alert name (identifier)" field (e.g., "Testing Prod AlertManager").
3. Select "Alert Manager" under the "Send alert to" section.
4. Click the "Simulate Alert" button to send the test alert.

If everything is set up properly, this alert will reach Robusta. It will show up in the Robusta UI, Slack, and other configured sinks.
Note
It might take a few minutes for the alert to arrive due to AlertManager's group_wait and group_interval settings. More info here.
I configured AlertManager, but I'm not receiving alerts?
Try sending a demo-alert as described above. If nothing arrives, check:
AlertManager UI status page - verify that your config was picked up
kube-prometheus-operator logs (if relevant)
AlertManager logs
Reach out on Slack for assistance.
Robusta isn't mapping alerts to Kubernetes resources
Robusta enriches alerts with Kubernetes and log data using Prometheus labels for mapping. Standard label names are used by default. If your setup differs, you can customize this mapping to fit your environment.
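As a purely hypothetical sketch of such a customization (the alertRelabel key and its fields below are assumptions, not confirmed Robusta settings; check the Robusta documentation for the exact schema in your version), a remapping entry in the Helm values could look like this:

# Hypothetical sketch: rename a non-standard alert label to the one Robusta
# expects when mapping alerts to Kubernetes resources.
# The key names below are assumptions; verify them in the Robusta docs.
alertRelabel:
  - source: "deployment_name"   # label produced by your alerting rules
    target: "deployment"        # standard label Robusta looks for
    operation: "replace"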
Configure Metrics Querying
Robusta can query metrics and create silences using Victoria Metrics. If both are in the same Kubernetes cluster, Robusta can auto-detect the Victoria Metrics service. To verify, go to the "Apps" tab in Robusta, select an application, and check for usage graphs.
If auto-detection fails, you must add the prometheus_url and alertmanager_url parameters shown below and update Robusta:
globalConfig: # this line should already exist
  # add the lines below
  alertmanager_url: "http://<VM_ALERT_MANAGER_SERVICE_NAME>.<NAMESPACE>.svc.cluster.local:9093" # Example: "http://vmalertmanager-victoria-metrics-vm.default.svc.cluster.local:9093/"
  prometheus_url: "http://<VM_METRICS_SERVICE_NAME>.<NAMESPACE>.svc.cluster.local:8429" # Example: "http://vmsingle-vmks-victoria-metrics-k8s-stack.default.svc.cluster.local:8429"

  # Add any labels that are relevant to the specific cluster (optional)
  # prometheus_additional_labels:
  #   cluster: 'CLUSTER_NAME_HERE'

  # Additional query string parameters to be appended to the Prometheus connection URL (optional)
  # prometheus_url_query_string: "demo-query=example-data&another-query=value"

  # Create alert silencing when using Grafana alerts (optional)
  # grafana_api_key: <YOUR GRAFANA EDITOR API KEY> # (1)
  # alertmanager_flavor: grafana

  # If using a multi-tenant prometheus or alertmanager, pass the org id to all queries
  # prometheus_additional_headers:
  #   X-Scope-OrgID: <org id>
  # alertmanager_additional_headers:
  #   X-Scope-OrgID: <org id>
This is necessary for Robusta to create silences when using Grafana Alerts, because of minor API differences in the AlertManager embedded in Grafana.
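For reference, with a default victoria-metrics-k8s-stack installation released as "vmks" in the "default" namespace, the filled-in values might look like the sketch below. The service names are assumptions based on the examples above; confirm the actual names in your cluster (for example with kubectl get svc) before applying.

globalConfig:
  # Service names assume a victoria-metrics-k8s-stack release called "vmks"
  # in the "default" namespace; adjust to match your cluster.
  alertmanager_url: "http://vmalertmanager-vmks-victoria-metrics-k8s-stack.default.svc.cluster.local:9093"
  prometheus_url: "http://vmsingle-vmks-victoria-metrics-k8s-stack.default.svc.cluster.local:8429"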
Optional Settings
Prometheus flags checks
Robusta utilizes the flags API to retrieve data from Prometheus-style metric stores. However, some platforms, such as Google Managed Prometheus and Azure Managed Prometheus, do not implement the flags API.
You can disable the Prometheus flags API check by setting the following option to false:
globalConfig:
  check_prometheus_flags: false