Monitoring sidecars and gateways using Prometheus

Hello everyone, I have a Consul cluster with Nomad and I need to monitor traffic between services. I read this article:

Configure metrics for Consul on Kubernetes | Consul | HashiCorp Developer

But I'm not using Kubernetes. Is it really possible to use this feature without it? If it is possible, please tell me. :wink:

I think I did something along the lines of this… Anything you can use?

Nomad:

group {
  # blah, blah..
  network {
    mode = "bridge"
    port "metrics_envoy" {
      to = 9102
    }
  }

  service {
    # blah, blah..
    meta {
      # Tag for Prometheus scrape-targeting via Consul (Envoy)
      metrics_port_envoy = "${NOMAD_HOST_PORT_metrics_envoy}"
    }

    connect {
      sidecar_service {
        proxy {
          config {
            # Expose Envoy metrics for Prometheus
            envoy_prometheus_bind_addr = "0.0.0.0:9102"
          }
        }
      }
    }
  }
}

Prometheus:

# blah, blah..
scrape_configs:
- job_name: consul-connect-envoy
  consul_sd_configs:
  - server: 'http://172.17.0.1:8500'
  relabel_configs:
  - source_labels: [__meta_consul_service]
    regex: (.+)-sidecar-proxy
    action: drop
  - source_labels: [__meta_consul_service_metadata_metrics_port_envoy]
    regex: (.+)
    action: keep
  - source_labels: [__address__,__meta_consul_service_metadata_metrics_port_envoy]
    regex: ([^:]+)(?::\d+)?;(\d+)
    replacement: ${1}:${2}
    target_label: __address__
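
If you want to verify that the meta value actually makes it into Consul before debugging the Prometheus side, you can query the Consul catalog directly. A minimal sketch, assuming a service registered as my-api (a hypothetical name) and Consul answering on 127.0.0.1:8500; jq is only used for readability:

# Show the service meta that the Prometheus relabeling keys on
curl -s http://127.0.0.1:8500/v1/catalog/service/my-api | jq '.[0].ServiceMeta'
# Expected: something like {"metrics_port_envoy": "25345"}, i.e. the dynamic host port Nomad assigned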

EDIT:
"$${NOMAD_HOST_PORT_metrics_envoy}" should be "${NOMAD_HOST_PORT_metrics_envoy}" (single dollar sign).

Thank you for the reply. I tried this; maybe I did something wrong?
My Nomad job:

job "countdash" {
  datacenters = ["test"]
  group "api" {
    network {
      mode = "bridge"
      port "metrics_envoy" {
        to = 9102
      }
    }

    service {
      name = "test-metrics"
      port = "9001"
      meta {
        metrics_port_envoy = "$${NOMAD_HOST_PORT_metrics_envoy}"
      }
      connect {
        sidecar_service {
          proxy {
            config {
              # Expose metrics for prometheus (envoy)
              envoy_prometheus_bind_addr = "0.0.0.0:9102"
            }
          }
        }
      }

    }

    task "web" {
      driver = "docker"
      config {
        image = "nexus.my.domain:8083/counter-api:v1"
      }
    }
  }


}

In Consul:

[screenshot]

Prometheus:

[screenshot]

Port 9001 is not open on the Nomad host. I need metrics only from Envoy, not from the service, and Envoy is not registered in Consul; if I understand correctly, Envoy will not register itself in Consul. But in Nomad I can see Envoy running as a separate container:

[screenshot]

Can I get metrics from Envoy, but not from the service?

I had a typo that you copied; it should be "${NOMAD_HOST_PORT_metrics_envoy}" (a single dollar sign). That might be the problem.

Explanation:

  1. port "metrics_envoy" { to = 9102 } maps a named port to port 9102 inside the bridge network of the deployment group and exposes it as a dynamic port on the host.
  2. metrics_port_envoy = "${NOMAD_HOST_PORT_metrics_envoy}" sets a Consul service meta value you can look up in the Prometheus job (metrics_port_envoy); its value is the dynamic host port assigned in the previous step (it changes with each deployment).
  3. envoy_prometheus_bind_addr = "0.0.0.0:9102" configures the Envoy sidecar itself to expose Prometheus metrics on 9102 inside the bridge network. See the Consul documentation.
  4. In Prometheus, you look up the Consul meta value from step 2 to get the host and port of the exported Envoy metrics target (a quick check of the resulting scrape target is sketched right after this list).
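
As a rough way to confirm step 4, you can ask Prometheus which address the relabeling produced for the consul-connect-envoy job from the config above. The Prometheus address (localhost:9090) is an assumption; the port in the output should be the dynamic host port from step 1, not 9102:

# List the scrape URLs Prometheus ended up with for the Envoy job
curl -s 'http://localhost:9090/api/v1/targets?state=active' \
  | jq '.data.activeTargets[] | select(.labels.job == "consul-connect-envoy") | .scrapeUrl'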

EDIT:
You should also be able to access the dynamic port from step 1 directly on the host (http://<host-ip>:<dynamic-port>/metrics) to check that it works. Nomad displays the assigned port in the UI after deployment. I see you are scraping port 9001, which is the service port, not the Envoy metrics port, so I think you didn't use the Prometheus config I showed.
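
For reference, a check along those lines could look like this; <host-ip> and <dynamic-port> are whatever the Nomad UI shows for the metrics_envoy port of the allocation:

# Hit the dynamic host port mapped to envoy_prometheus_bind_addr (9102 inside the bridge)
curl -s http://<host-ip>:<dynamic-port>/metrics | head
# You should see Envoy's own metrics (envoy_cluster_*, envoy_http_*, ...), not the application's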


This helped me.
Thank you!


Thanks @runeron and @IvanNazarenko for the open discussion and solution for this issue. It helped me out so much; I was almost convinced that this feature was supported only on k8s. But I've got it working now too!
