External Prometheus

Follow this guide to connect Robusta to a central Prometheus (e.g. Thanos/Mimir), running outside the cluster monitored by Robusta.

Note

Using Grafana Cloud? For Grafana Cloud with Mimir, see the dedicated guide: Grafana Cloud (Mimir)

You will need to configure two integrations: one to send alerts to Robusta and another to let Robusta query metrics and create silences.

Send Alerts to Robusta

This integration lets your central Prometheus send alerts to Robusta, as if they were in the same cluster:

  1. Verify that every alert carries a label named cluster_name or cluster whose value matches the cluster_name defined in Robusta's configuration. Robusta uses this label to route each alert to the correct robusta-runner.

  2. Edit the configuration for your centralized AlertManager:

alertmanager.yaml

receivers:
  - name: 'robusta'
    webhook_configs:
      - url: 'https://api.robusta.dev/integrations/generic/alertmanager'
        http_config:
          authorization:
            # Replace <TOKEN> with a string in the format `<ACCOUNT_ID> <SIGNING_KEY>`
            credentials: <TOKEN>
        send_resolved: true # (3)

route: # (1)
  routes:
  - receiver: 'robusta'
    group_by: [ '...' ]
    group_wait: 1s
    group_interval: 1s
    matchers:
      - severity =~ ".*"
    repeat_interval: 4h
    continue: true # (2)
  1. Make sure the Robusta route is the first route defined; if it isn't, it might not receive alerts. Once an alert matches a route, it is not passed to subsequent routes unless that route is configured with continue: true.

  2. Ensures that alerts continue to be evaluated against subsequent routes even after this route matches.

  3. Enables sending resolved alerts to Robusta
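If your alerts don't already carry a cluster label, one way to add it (a sketch, not the only option) is via external_labels in the Prometheus configuration, which attaches the label to every metric and alert that Prometheus sends to AlertManager:

```yaml
# prometheus.yml -- illustrative sketch; replace "prod1" with the
# cluster_name defined in Robusta's configuration
global:
  external_labels:
    cluster: prod1
```

If Prometheus is deployed per cluster via kube-prometheus-stack, the equivalent setting usually lives in that chart's values rather than a raw prometheus.yml.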

Verify it Works

Send a dummy alert to AlertManager:

If you have the Robusta CLI installed, you can send a test alert using the following command:

robusta demo-alert
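For reference, AlertManager delivers alerts to webhook receivers (including the Robusta endpoint configured above) as JSON in its standard webhook schema. The sketch below shows roughly what such a payload looks like; the field names follow AlertManager's documented webhook format, while the label values are purely illustrative:

```python
import json

# Minimal AlertManager webhook payload (version "4" schema) of the kind
# the 'robusta' receiver forwards. All values here are illustrative only.
payload = {
    "version": "4",
    "status": "firing",
    "receiver": "robusta",
    "alerts": [
        {
            "status": "firing",
            "labels": {
                "alertname": "TestingProdAlertManager",
                "severity": "info",
                # Must match cluster_name in Robusta's configuration
                "cluster_name": "prod1",
            },
            "annotations": {"summary": "Dummy alert to verify the integration"},
        }
    ],
}

print(json.dumps(payload, indent=2))
```

Note the cluster_name label inside each alert: without it, Robusta cannot tell which cluster the alert belongs to.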

Alternatively, in the Robusta UI, go to the "Clusters" tab, choose the right cluster, and click "Simulate Alert".

Then:

  1. Check Send alert with no resource.

  2. Provide a name for the alert in the Alert name (identifier) field (e.g., "Testing Prod AlertManager").

  3. Select Alert Manager under the "Send alert to" section.

  4. Click the Simulate Alert button to send the test alert.


If everything is set up properly, this alert will reach Robusta and show up in the Robusta UI, Slack, and any other configured sinks.

Note

It might take a few minutes for the alert to arrive due to AlertManager's group_wait and group_interval settings; see the AlertManager documentation on grouping for details.

I configured AlertManager, but I'm not receiving alerts?

Try sending a demo-alert as described above. If nothing arrives, check:

  1. The AlertManager UI status page, to verify that your config was picked up

  2. The kube-prometheus-operator logs (if relevant)

  3. The AlertManager logs

Reach out on Slack for assistance.

Robusta isn't mapping alerts to Kubernetes resources

Robusta enriches alerts with Kubernetes and log data using Prometheus labels for mapping. Standard label names are used by default. If your setup differs, you can customize this mapping to fit your environment.

Configure Metric Querying

To enable Robusta to pull metrics and create silences, you need to configure Prometheus and AlertManager URLs.

See Prometheus and metrics configuration for detailed instructions.

Note

Robusta will attempt to auto-detect Prometheus and AlertManager URLs in your cluster. Manual configuration is only needed if auto-detection fails.
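When auto-detection fails, manual configuration typically looks like the following sketch. The URLs are placeholders, and the exact keys may vary by Robusta version, so check the linked configuration guide:

```yaml
globalConfig:
  # Placeholders -- point these at your external Prometheus and AlertManager
  prometheus_url: "http://prometheus.example.com:9090"
  alertmanager_url: "http://alertmanager.example.com:9093"
```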

Filtering Prometheus Queries by Cluster

If the same external Prometheus serves many clusters, you will want to add the cluster name to all queries, so that results are scoped to the cluster Robusta is asking about.

You can do so with the prometheus_url_query_string parameter, shown below:

globalConfig:
  # Additional query string parameters to be appended to the Prometheus connection URL (optional)
  prometheus_url_query_string: "cluster=prod1&x=y"
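To illustrate the effect, each parameter in prometheus_url_query_string is appended to the URLs Robusta uses when querying Prometheus. A rough sketch of the resulting URL, where the base URL and parameter values are placeholders:

```python
from urllib.parse import urlencode

# Illustrative only: how extra query-string parameters combine with a
# Prometheus instant-query URL. Base URL and values are placeholders.
base_url = "http://prometheus.example.com:9090/api/v1/query"
params = {"query": "up"}
extra = "cluster=prod1&x=y"  # value of prometheus_url_query_string

full_url = f"{base_url}?{urlencode(params)}&{extra}"
print(full_url)
```

The extra parameters ride along on every query, so a multi-tenant backend such as Thanos or Mimir can filter results to the right cluster.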