Automatic Prometheus metrics discovery with Docker labels
Vincent Bernat
Akvorado, a network flow collector, relies on Traefik, a reverse HTTP proxy, to expose HTTP endpoints for its Docker Compose services. Docker labels attached to each service define the routing rules. Traefik picks them up automatically when a container starts. Instead of maintaining a static configuration file to collect Prometheus metrics, we apply the same approach with Grafana Alloy.
## Traefik & Docker
Traefik listens for events on the Docker socket. Each service advertises its configuration through labels. For example, here is the Loki service in Akvorado:
```yaml
services:
  loki:
    # …
    expose:
      - 3100/tcp
    labels:
      - traefik.enable=true
      - traefik.http.routers.loki.rule=PathPrefix(`/loki`)
```
Once the container is healthy, Traefik creates a router forwarding requests
matching /loki to its first exposed port. Colocating Traefik configuration
with the service definition is attractive. How do we achieve the same for
Prometheus metrics?
## Metrics discovery with Alloy
Grafana Alloy, a metrics collector that scrapes Prometheus endpoints,
includes a `discovery.docker` component. Just like Traefik,
it connects to the Docker socket. With a few relabeling rules, we teach
it to use Docker labels to locate and scrape metrics.
We define three labels on each service:
- `metrics.enable` set to `true` enables metrics collection,
- `metrics.port` specifies the port exposing the Prometheus endpoint, and
- `metrics.path` specifies the path to the metrics endpoint.
If a service exposes more than one port, metrics.port is mandatory. Otherwise,
it defaults to the only exposed port. The default value for metrics.path is
/metrics. The Loki service from earlier becomes:
```yaml
services:
  loki:
    # …
    expose:
      - 3100/tcp
    labels:
      - traefik.enable=true
      - traefik.http.routers.loki.rule=PathPrefix(`/loki`)
      - metrics.enable=true
      - metrics.path=/loki/metrics
```
Alloy’s configuration is split into four parts:
- discover containers through the Docker socket,
- filter and relabel targets using Docker labels,
- scrape the matching endpoints, and
- forward the metrics to Prometheus.
## Discovering Docker containers
The first building block discovers running containers:
```alloy
discovery.docker "docker" {
  host             = "unix:///var/run/docker.sock"
  refresh_interval = "30s"
  filter {
    name   = "label"
    values = ["com.docker.compose.project=akvorado"]
  }
}
```
This connects to the Docker socket and lists containers every 30
seconds. The `filter` block restricts discovery to containers belonging
to the `akvorado` project, avoiding interference with unrelated containers on
the same host. For each discovered container, Alloy produces a target with
labels such as `__meta_docker_container_label_metrics_port` for the
`metrics.port` Docker label.
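The mapping from Docker label name to meta label follows the usual Prometheus sanitization rule: characters that are invalid in label names are replaced with underscores. A minimal sketch of this naming convention (an illustration in Python, not part of the Alloy configuration):

```python
import re

def docker_label_to_meta(label: str) -> str:
    """Build the Alloy/Prometheus meta label for a Docker label name,
    replacing characters invalid in label names with underscores."""
    sanitized = re.sub(r"[^a-zA-Z0-9_]", "_", label)
    return f"__meta_docker_container_label_{sanitized}"

print(docker_label_to_meta("metrics.port"))
# __meta_docker_container_label_metrics_port
```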
## Relabeling targets
The relabeling step filters and transforms raw targets from Docker discovery
into scrape targets. The first stage keeps only targets with metrics.enable
set to true:
```alloy
discovery.relabel "prometheus" {
  targets = discovery.docker.docker.targets

  // Keep only targets with metrics.enable=true
  rule {
    source_labels = ["__meta_docker_container_label_metrics_enable"]
    regex         = `true`
    action        = "keep"
  }

  // …
}
```
The second stage overrides the discovered port when the service defines
metrics.port:
```alloy
// When metrics.port is set, override __address__.
rule {
  source_labels = ["__address__", "__meta_docker_container_label_metrics_port"]
  regex         = `(.+):\d+;(.+)`
  target_label  = "__address__"
  replacement   = "$1:$2"
}
```
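Relabel regexes are fully anchored and multiple source label values are joined with the default `;` separator, so the rule matches `<host>:<discovered-port>;<metrics.port>` and rebuilds the address from both captures. A quick Python check of the same pattern (the address and port are made-up values; Alloy uses RE2 with `$1`-style references, Python uses `\1`):

```python
import re

# __address__ and metrics.port joined by the default ";" separator
joined = "172.18.0.5:3100;9000"

# Same pattern as the relabel rule, anchored explicitly
new_address = re.sub(r"^(.+):\d+;(.+)$", r"\1:\2", joined)
print(new_address)  # 172.18.0.5:9000
```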
Next, we handle containers in host network mode. When
`__meta_docker_network_name` equals `host`, Alloy rewrites the address to
`host.docker.internal` instead of `localhost`:
```alloy
// When host networking, override __address__ to host.docker.internal.
rule {
  source_labels = ["__meta_docker_container_label_metrics_port", "__meta_docker_network_name"]
  regex         = `(.+);host`
  target_label  = "__address__"
  replacement   = "host.docker.internal:$1"
}
```
The next stage derives the job name from the service name, stripping any numbered suffix. The instance label is the address without the port:
```alloy
// Derive the job name from the Compose service name, stripping
// any numbered suffix. The lazy quantifier matters because
// relabel regexes are anchored.
rule {
  source_labels = ["__meta_docker_container_label_com_docker_compose_service"]
  regex         = `(.+?)(?:-\d+)?`
  target_label  = "job"
}
rule {
  source_labels = ["__address__"]
  regex         = `(.+):\d+`
  target_label  = "instance"
}
```
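Because relabel regexes are anchored, the quantifier before the optional suffix group matters: a greedy `(.+)` swallows the whole name and `(?:-\d+)?` matches empty, so the suffix survives; the lazy `(.+?)` actually strips it. A Python illustration (the service names are hypothetical):

```python
import re

for name in ["akvorado-orchestrator-1", "loki"]:
    # Greedy: the optional suffix group matches the empty string
    greedy = re.fullmatch(r"(.+)(?:-\d+)?", name).group(1)
    # Lazy: the capture stops before the numbered suffix
    lazy = re.fullmatch(r"(.+?)(?:-\d+)?", name).group(1)
    print(f"{name}: greedy={greedy} lazy={lazy}")
# akvorado-orchestrator-1: greedy=akvorado-orchestrator-1 lazy=akvorado-orchestrator
# loki: greedy=loki lazy=loki
```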
If a container defines metrics.path, Alloy uses it. Otherwise, it defaults to
/metrics:
```alloy
// Use metrics.path when defined…
rule {
  source_labels = ["__meta_docker_container_label_metrics_path"]
  regex         = `(.+)`
  target_label  = "__metrics_path__"
}
// …otherwise, default to /metrics.
rule {
  source_labels = ["__metrics_path__"]
  regex         = ""
  target_label  = "__metrics_path__"
  replacement   = "/metrics"
}
```
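The two rules combine into "use the label if present, else `/metrics`": the regex `(.+)` only matches a non-empty label value, while the empty regex only matches an unset (empty) `__metrics_path__`. A small Python sketch of this logic (illustration only, with a hypothetical `metrics_path` helper):

```python
import re

def metrics_path(labels: dict) -> str:
    """Mimic the two relabel rules: copy metrics.path if set,
    then default an empty __metrics_path__ to /metrics."""
    path = ""
    # Rule 1: `(.+)` only matches a non-empty label value
    value = labels.get("__meta_docker_container_label_metrics_path", "")
    if re.fullmatch(r"(.+)", value):
        path = value
    # Rule 2: the empty regex only matches an empty value
    if re.fullmatch(r"", path):
        path = "/metrics"
    return path

print(metrics_path({}))  # /metrics
print(metrics_path({"__meta_docker_container_label_metrics_path": "/loki/metrics"}))
# /loki/metrics
```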
## Scraping and forwarding
With the targets properly relabeled, scraping and forwarding are straightforward:
```alloy
prometheus.scrape "docker" {
  targets         = discovery.relabel.prometheus.output
  forward_to      = [prometheus.remote_write.default.receiver]
  scrape_interval = "30s"
}

prometheus.remote_write "default" {
  endpoint {
    url = "http://prometheus:9090/api/v1/write"
  }
}
```
`prometheus.scrape` periodically fetches metrics from the discovered targets.
`prometheus.remote_write` sends them to Prometheus.
## Built-in exporters
Some services do not expose a Prometheus endpoint. Redis and Kafka are common examples. Alloy ships built-in Prometheus exporters that query these services and expose metrics on their behalf.
```alloy
prometheus.exporter.redis "docker" {
  redis_addr = "redis:6379"
}
discovery.relabel "redis" {
  targets = prometheus.exporter.redis.docker.targets
  rule {
    target_label = "job"
    replacement  = "redis"
  }
}
prometheus.scrape "redis" {
  targets         = discovery.relabel.redis.output
  forward_to      = [prometheus.remote_write.default.receiver]
  scrape_interval = "30s"
}
```
The same pattern applies to Kafka:
```alloy
prometheus.exporter.kafka "docker" {
  kafka_uris = ["kafka:9092"]
}
discovery.relabel "kafka" {
  targets = prometheus.exporter.kafka.docker.targets
  rule {
    target_label = "job"
    replacement  = "kafka"
  }
}
prometheus.scrape "kafka" {
  targets         = discovery.relabel.kafka.output
  forward_to      = [prometheus.remote_write.default.receiver]
  scrape_interval = "30s"
}
```
Each exporter is a separate component with its own relabeling and scrape
configuration. We set the job label explicitly since no Docker metadata can
provide it.
With this setup, adding metrics to a new service with a Prometheus endpoint
requires only a few labels in `docker-compose.yml`, just like adding a Traefik
route. Alloy picks it up automatically. You can apply the same pattern with
another discovery method, like `discovery.kubernetes`,
`discovery.scaleway`, or `discovery.http`. 🩺