Monitor Redpanda Cloud

You can configure monitoring on your Redpanda Dedicated or BYOC cluster to maintain system health and optimize performance. Redpanda Cloud exports metrics for all brokers and Kafka Connect workers from a single Prometheus endpoint. This endpoint can be found on the Overview page for your cluster, under How to connect.

You can monitor Redpanda with Prometheus or with any other monitoring and alerting tool, such as Datadog, New Relic, Elastic Cloud, Google Cloud, or Azure.

To maximize performance, Redpanda exports some metrics only when the underlying feature is in use. For example, a metric for consumer groups, redpanda_kafka_consumer_group_committed_offset, is exported only when consumer groups are registered.

Configure Prometheus

To monitor a Dedicated or BYOC cluster in Prometheus:

  1. On the Redpanda Cloud Overview page for your cluster, under How to connect, click the Prometheus tab.

  2. Click the copy icon for Prometheus YAML to copy the contents to your clipboard.

    How to connect screenshot

    The YAML contains the Prometheus scrape target configuration, as well as authentication, for the cluster.

    - job_name: redpandaCloud-sample
      static_configs:
        - targets:
            - console-<id>.<identifier>.fmc.cloud.redpanda.com
      metrics_path: /api/cloud/prometheus/public_metrics
      basic_auth:
        username: prometheus
        password: "<prom_pass>"
      scheme: https
  3. Save this information to Prometheus, and observe in Prometheus that metrics from Redpanda endpoints are scraped.
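
    For example, assuming the job_name redpandaCloud-sample from the sample configuration above, the following query should return 1 for the scrape target once metrics are flowing (you can also check the target state on the Status > Targets page in the Prometheus UI):

    up{job="redpandaCloud-sample"}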

Configure Datadog

To monitor a Dedicated or BYOC cluster in Datadog:

  1. On the Redpanda Cloud Overview page for your cluster, under How to connect, click the Prometheus tab.

  2. Click the copy icon for Prometheus YAML to find scrape target configuration, as well as authentication, for the cluster.

  3. In Datadog, define the openmetrics_endpoint URL for that monitored cluster. The integration configuration should look similar to the following:

    instances:
      # The endpoint to collect metrics from.
      - openmetrics_endpoint: https://console-<id>.<identifier>.fmc.cloud.redpanda.com/api/cloud/prometheus/public_metrics
        use_openmetrics: true
        collect_counters_with_distributions: true
    
        auth_type: basic
        username: prom_user
        password: prom_pass
  4. Restart the Datadog agent.

Because the Prometheus endpoint in Redpanda Cloud aggregates metrics for all cluster services, only a single Datadog agent is required. The agent must run in a container in your own container infrastructure. Redpanda does not support launching this container inside a Dedicated or BYOC Kubernetes cluster. For more information, see the Datadog documentation and Redpanda Datadog integration.
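
As a minimal sketch (assuming the default Datadog Agent container image and OpenMetrics configuration path; your API key and file locations will differ), you could run the agent with the integration configuration mounted into the container:

# Hypothetical example: conf.yaml contains an init_config: section plus the instances block shown above.
docker run -d --name datadog-agent \
  -e DD_API_KEY=<your_datadog_api_key> \
  -v $(pwd)/conf.yaml:/etc/datadog-agent/conf.d/openmetrics.d/conf.yaml \
  gcr.io/datadoghq/agent:7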

Use Redpanda monitoring examples

For hands-on learning, Redpanda provides a repository with examples of monitoring Redpanda with Prometheus and Grafana: redpanda-data/observability.

Example Redpanda Ops Dashboard

The repository includes example Grafana dashboards and a sandbox environment in which you can launch a Dockerized Redpanda cluster and create a custom workload to monitor with the dashboards.

Monitor for health and performance

This section provides guidelines and example queries using Redpanda’s public metrics to optimize your system’s performance and monitor its health.

To help detect and mitigate anomalous system behaviors, capture baseline metrics of your healthy system at different stages (at start-up, under high load, in steady state) so you can set thresholds and alerts according to those baselines.
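
For example, here is a minimal sketch of a Prometheus alerting rule, assuming a baseline-derived threshold of 100 ms for 99th percentile produce latency (substitute thresholds taken from your own baselines):

groups:
  - name: redpanda-baseline-alerts
    rules:
      - alert: ProduceLatencyAboveBaseline
        # Example threshold only; derive the value from your healthy-system baseline.
        expr: |
          histogram_quantile(0.99, sum(rate(redpanda_kafka_request_latency_seconds_bucket{redpanda_request="produce"}[5m])) by (le)) > 0.1
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: 99th percentile produce latency is above the baseline threshold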

For counter type metrics, a broker restart causes the count to reset to zero in tools like Prometheus and Grafana. Redpanda recommends wrapping counter metrics in a rate query to account for broker restarts, for example:

rate(redpanda_kafka_records_produced_total[5m])

Redpanda architecture

Understanding the unique aspects of Redpanda’s architecture and data path can improve your performance, debugging, and tuning skills:

  • Redpanda replicates partitions across brokers in a cluster using Raft, where each partition is a Raft consensus group. A message written from the Kafka API flows down to the Raft implementation layer that eventually directs it to a broker to be stored. Metrics about the Raft layer can reveal the health of partitions and data flowing within Redpanda.

  • Redpanda is designed with a thread-per-core model that it implements with the Seastar library. Because each application thread is pinned to a CPU core, when you observe or analyze the behavior of a specific application, filter the relevant metrics by the label for the specific shard, if available.

Infrastructure resources

The underlying infrastructure of your system should have sufficient margins to handle peaks in processing, storage, and I/O loads. Monitor infrastructure health with the following queries.

CPU usage

For the total CPU uptime, monitor redpanda_uptime_seconds_total. Monitoring its rate of change with the following query can help detect unexpected dips in uptime:

rate(redpanda_uptime_seconds_total[5m])

For the total CPU busy (non-idle) time, monitor redpanda_cpu_busy_seconds_total.

To detect unexpected idling, query its rate of change, which gives the fraction of time each shard was busy over the sample window:

rate(redpanda_cpu_busy_seconds_total[5m])

While CPU utilization at the host level might appear high (for example, 99-100% utilization) when I/O events such as message arrival occur, the actual utilization of the Redpanda process is likely low. System-level metrics, such as those provided by the top command, can therefore be misleading.

This high host-level CPU utilization happens because Redpanda uses Seastar, which runs an event loop (also referred to as a reactor) on every core, constantly polling for the next task. This polling never blocks and continues to accumulate CPU ticks, so it doesn’t necessarily mean that Redpanda is busy.

Use redpanda_cpu_busy_seconds_total to monitor the actual Redpanda CPU utilization. When it indicates close to 100% utilization over a sustained period, also monitor produce and consume latency, which may start to increase as resources become overburdened.
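
For example, assuming the metric carries a shard label in your deployment, the following query approximates per-shard utilization as a fraction between 0 and 1; sustained values near 1 are a signal to watch latency closely:

avg by (instance, shard) (rate(redpanda_cpu_busy_seconds_total[5m]))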

Memory allocated

To monitor the percentage of memory allocated, use a formula with redpanda_memory_allocated_memory and redpanda_memory_free_memory:

sum(redpanda_memory_allocated_memory) / (sum(redpanda_memory_free_memory) + sum(redpanda_memory_allocated_memory))

Disk used

To monitor the percentage of disk consumed, use a formula with redpanda_storage_disk_free_bytes and redpanda_storage_disk_total_bytes:

1 - (sum(redpanda_storage_disk_free_bytes) / sum(redpanda_storage_disk_total_bytes))

Also monitor redpanda_storage_disk_free_space_alert for an alert when available disk space is low or degraded.
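
For example, assuming a value of 0 indicates a healthy state, you can alert on any non-zero value of this gauge:

redpanda_storage_disk_free_space_alert != 0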

IOPS

For read and write I/O operations per second (IOPS), monitor the redpanda_io_queue_total_read_ops and redpanda_io_queue_total_write_ops counters:

rate(redpanda_io_queue_total_read_ops[5m]),
rate(redpanda_io_queue_total_write_ops[5m])
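
To track total IOPS across the cluster, you can also sum the two rates, for example:

sum(rate(redpanda_io_queue_total_read_ops[5m])) + sum(rate(redpanda_io_queue_total_write_ops[5m]))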

Throughput

While maximizing the rate of messages moving from producers to brokers then to consumers depends on tuning each of those components, the total throughput of all topics provides a system-level metric to monitor. When you observe abnormal, unhealthy spikes or dips in producer or consumer throughput, look for correlation with changes in the number of active connections (redpanda_rpc_active_connections) and logged errors to drill down to the root cause.

The total throughput of a cluster can be measured by the producer and consumer rates across all topics.

To observe the total producer and consumer rates of a cluster, monitor redpanda_kafka_request_bytes_total with the produce and consume labels, respectively.
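
For example, a single query can return both rates side by side by matching on the two request labels (the following sections break them out individually):

sum by (redpanda_request) (rate(redpanda_kafka_request_bytes_total{redpanda_request=~"produce|consume"}[5m]))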

Producer throughput

For the produce rate, create a query to get the produce rate across all topics:

sum(rate(redpanda_kafka_request_bytes_total{redpanda_request="produce"}[5m])) by (redpanda_request)

Consumer throughput

For the consume rate, create a query to get the total consume rate across all topics:

sum(rate(redpanda_kafka_request_bytes_total{redpanda_request="consume"}[5m])) by (redpanda_request)

Latency

Latency should be consistent between produce and fetch sides. It should also be consistent over time. Take periodic snapshots of produce and fetch latencies, including at upper percentiles (95%, 99%), and watch out for significant changes over a short duration.

In Redpanda, the latency of produce and fetch requests includes the latency of the inter-broker RPCs that result from Redpanda’s internal Raft implementation.

Kafka consumer latency

To monitor Kafka consumer request latency, use the redpanda_kafka_request_latency_seconds histogram with the label redpanda_request="consume". For example, create a query for the 99th percentile:

histogram_quantile(0.99, sum(rate(redpanda_kafka_request_latency_seconds_bucket{redpanda_request="consume"}[5m])) by (le, provider, region, instance, namespace, pod))

You can monitor the rate of Kafka consumer requests using redpanda_kafka_request_latency_seconds_count with the redpanda_request="consume" label:

rate(redpanda_kafka_request_latency_seconds_count{redpanda_request="consume"}[5m])

Kafka producer latency

To monitor Kafka producer request latency, use the redpanda_kafka_request_latency_seconds histogram with the redpanda_request="produce" label. For example, create a query for the 99th percentile:

histogram_quantile(0.99, sum(rate(redpanda_kafka_request_latency_seconds_bucket{redpanda_request="produce"}[5m])) by (le, provider, region, instance, namespace, pod))

You can monitor the rate of Kafka producer requests using redpanda_kafka_request_latency_seconds_count with the redpanda_request="produce" label:

rate(redpanda_kafka_request_latency_seconds_count{redpanda_request="produce"}[5m])

Internal RPC latency

To monitor Redpanda internal RPC latency, use the redpanda_rpc_request_latency_seconds histogram with the redpanda_server="internal" label. For example, create a query for the 99th percentile latency:

histogram_quantile(0.99, (sum(rate(redpanda_rpc_request_latency_seconds_bucket{redpanda_server="internal"}[5m])) by (le, provider, region, instance, namespace, pod)))

You can monitor the rate of internal RPC requests with the count of the redpanda_rpc_request_latency_seconds histogram:

rate(redpanda_rpc_request_latency_seconds_count[5m])

Partition health

The health of Kafka partitions often reflects the health of the brokers that host them. Thus, when alerts occur for conditions such as under-replicated partitions or more frequent leadership transfers, check for unresponsive or unavailable brokers.

With Redpanda’s internal implementation of the Raft consensus protocol, the health of partitions is also reflected in any errors in the internal RPCs exchanged between Raft peers.

Leadership changes

Stable clusters have a consistent balance of leaders across all brokers, with few to no leadership transfers between brokers.

To observe changes in leadership, monitor the redpanda_raft_leadership_changes counter. For example, use a query to get the total rate of increase of leadership changes for a cluster:

sum(rate(redpanda_raft_leadership_changes[5m]))

Under-replicated partitions

A healthy cluster has partition data fully replicated across its brokers.

An under-replicated partition is at higher risk of data loss. It also adds latency because messages must be replicated before being committed. To know when a partition isn’t fully replicated, create an alert for the redpanda_kafka_under_replicated_replicas gauge when it is greater than zero:

redpanda_kafka_under_replicated_replicas > 0

Under-replication can be caused by unresponsive brokers. When an alert on redpanda_kafka_under_replicated_replicas is triggered, identify the problem brokers and examine their logs.
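
For example, assuming the gauge carries the instance and pod labels used in the latency queries above, grouping by broker can help narrow down where under-replication is occurring:

sum by (instance, pod) (redpanda_kafka_under_replicated_replicas) > 0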

Leaderless partitions

A healthy cluster has a leader for every partition.

A partition without a leader cannot exchange messages with producers or consumers. To identify when a partition doesn’t have a leader, create an alert for the redpanda_cluster_unavailable_partitions gauge when it is greater than zero:

redpanda_cluster_unavailable_partitions > 0

Leaderless partitions can be caused by unresponsive brokers. When an alert on redpanda_cluster_unavailable_partitions is triggered, identify the problem brokers and examine their logs.

Raft RPCs

Redpanda’s Raft implementation exchanges periodic status RPCs between a broker and its peers. The redpanda_node_status_rpcs_timed_out gauge increases when a status RPC times out for a peer, which indicates that a peer may be unresponsive and may lead to problems with partition replication that Raft manages. Monitor for non-zero values of this gauge, and correlate it with any logged errors or changes in partition replication.
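
For example, a simple alert expression for non-zero values of this gauge:

redpanda_node_status_rpcs_timed_out > 0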

Consumer group lag

Consumer group lag is an important performance indicator that measures the difference between the broker’s latest (max) offset and the consumer group’s last committed offset. The lag indicates how current the consumed data is relative to real-time production. A high or increasing lag means that consumers are processing messages slower than producers are generating them. A decreasing or stable lag implies that consumers are keeping pace with producers, ensuring real-time or near-real-time data consumption.

By monitoring consumer lag, you can identify performance bottlenecks and make informed decisions about scaling consumers, tuning configurations, and improving processing efficiency.

A high maximum lag may indicate that a consumer is experiencing connectivity problems or cannot keep up with the incoming workload.

A high or increasing total lag (lag sum) suggests that the consumer group lacks sufficient resources to process messages at the rate they are produced. In such cases, scaling the number of consumers within the group can help, but only up to the number of partitions available in the topic. If lag persists despite increasing consumers, repartitioning the topic may be necessary to distribute the workload more effectively and improve processing efficiency.

Redpanda provides the following methods for monitoring consumer group lag:

  • Dedicated gauges: Redpanda brokers can internally calculate consumer group lag and expose two dedicated gauges. This method is recommended for environments where your observability platform does not support complex queries required to calculate the lag from offset metrics.

    Enabling these gauges may add a small amount of additional processing overhead to the brokers.

  • Offset-based calculation: You can use your observability platform to calculate consumer group lag from offset metrics. Use this method if your observability platform supports functions, such as max(), and you prefer to avoid additional processing overhead on the broker.

Dedicated gauges

Redpanda can internally calculate consumer group lag and expose it as two dedicated gauges.

  • redpanda_kafka_consumer_group_lag_max: Reports the maximum lag observed among all partitions for a consumer group. This metric helps pinpoint the partition with the greatest delay, indicating potential performance or configuration issues.

  • redpanda_kafka_consumer_group_lag_sum: Aggregates the lag across all partitions, providing an overall view of data consumption delay for the consumer group.

To enable these dedicated gauges, you must enable consumer group metrics in your cluster properties. Add the following to your Redpanda configuration:

  • enable_consumer_group_metrics: A list of properties to enable for consumer group metrics. You must add the consumer_lag property to enable consumer group lag metrics.

    Set this value equal to the scrape interval of your metrics collection system. Aligning these intervals ensures synchronized data collection, reducing the likelihood of missing or misaligned lag measurements.

For example:

rpk cluster config set enable_consumer_group_metrics '["group", "partition", "consumer_lag"]'

When these properties are enabled, Redpanda computes and exposes the redpanda_kafka_consumer_group_lag_max and redpanda_kafka_consumer_group_lag_sum gauges to the /public_metrics endpoint.
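
For example, assuming these gauges carry the same redpanda_group label as the offset metrics (a hypothetical label here), you can chart the maximum lag per consumer group and alert when a group’s total lag exceeds a threshold derived from your baselines:

# Maximum observed lag per consumer group.
max by (redpanda_group) (redpanda_kafka_consumer_group_lag_max)

# Alert when a group's total lag exceeds an example threshold of 10000 messages.
redpanda_kafka_consumer_group_lag_sum > 10000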

Offset-based calculation

If your environment is sensitive to the performance overhead of the dedicated gauges, use the offset-based calculation method to calculate consumer group lag. This method requires your observability platform to support functions like max().

Redpanda provides two metrics to calculate consumer group lag:

  • redpanda_kafka_max_offset: The latest (max) offset for each topic partition on the broker.

  • redpanda_kafka_consumer_group_committed_offset: The last offset committed by a consumer group for each topic partition.

For example, here’s a typical query to compute consumer lag:

max by(redpanda_namespace, redpanda_topic, redpanda_partition)(redpanda_kafka_max_offset{redpanda_namespace="kafka"}) - on(redpanda_topic, redpanda_partition) group_right max by(redpanda_group, redpanda_topic, redpanda_partition)(redpanda_kafka_consumer_group_committed_offset)

Services

Monitor the health of specific Redpanda services with the following metrics.

Schema Registry

Schema Registry request latency:

histogram_quantile(0.99, (sum(rate(redpanda_schema_registry_request_latency_seconds_bucket[5m])) by (le, provider, region, instance, namespace, pod)))

Schema Registry request rate:

rate(redpanda_schema_registry_request_latency_seconds_count[5m]) + sum without(redpanda_status)(rate(redpanda_schema_registry_request_errors_total[5m]))

Schema Registry request error rate:

rate(redpanda_schema_registry_request_errors_total[5m])

REST proxy

REST proxy request latency:

histogram_quantile(0.99, (sum(rate(redpanda_rest_proxy_request_latency_seconds_bucket[5m])) by (le, provider, region, instance, namespace, pod)))

REST proxy request rate:

rate(redpanda_rest_proxy_request_latency_seconds_count[5m]) + sum without(redpanda_status)(rate(redpanda_rest_proxy_request_errors_total[5m]))

REST proxy request error rate:

rate(redpanda_rest_proxy_request_errors_total[5m])
