Nomad host logs and metrics using Vector and Loki (Grafana Cloud)

Hello,

Following this great tutorial "Logging on Nomad and log aggregation with Loki", with some modifications, I now have a working Nomad system job pushing all my Docker container logs to Loki (hosted on Grafana Cloud).

Here is my working Vector configuration (I fixed a few things that no longer work since the blog post was written in July 2021). Hope it helps anyone passing by:

      # template with Vector's configuration
      template {
        destination = "local/vector.toml"
        change_mode   = "signal"
        change_signal = "SIGHUP"
        # overriding the delimiters to [[ ]] to avoid conflicts with Vector's native templating, which also uses {{ }}
        left_delimiter = "[["
        right_delimiter = "]]"
        data=<<EOH
[[- with nomadVar "nomad/jobs/vector/vector/vector" -]]
          data_dir = "alloc/data/vector/"
          [api]
            enabled = false
            #address = "0.0.0.0:8686"
            #playground = true
          [sources.logs]
            type = "docker_logs"
          [transforms.message_to_json]
            type = "remap"
            inputs = ["logs"]
            source = ".message = parse_json!(.message)"
          [sinks.out]
            type = "console"
            inputs = [ "message_to_json" ]
            encoding.codec = "json"
          [sinks.loki]
            type = "loki"
            inputs = ["message_to_json"]
            endpoint = "https://[[.user]]:[[.password]]@logs-prod-eu-west-0.grafana.net"
            compression = "snappy"
            encoding.codec = "json"
            healthcheck.enabled = true

            # remove fields that have been converted to labels to avoid having the field twice
            remove_label_fields = true
              [sinks.loki.labels]
              # See https://vector.dev/docs/reference/vrl/expressions/#path-example-nested-path
              job = "{{label.\"com.hashicorp.nomad.job_name\" }}"
              task = "{{label.\"com.hashicorp.nomad.task_name\" }}"
              group = "{{label.\"com.hashicorp.nomad.task_group_name\" }}"
              #namespace = "{{label.\"com.hashicorp.nomad.namespace\" }}"
              node = "{{label.\"com.hashicorp.nomad.node_name\" }}"
              correlation_id = "{{ message.requestId }}"
[[- end -]]
        EOH
      }

**Notes**:

  • Versions: Vector 0.26.0 / Nomad 1.4.3 / Loki (Grafana Cloud)
  • I updated the endpoint since the Vector Loki sink already suffixes it with /loki/api/v1/push. You may need to adapt the URL too.
  • I updated the labels section because it was not working anymore.
  • I disabled the API (a personal choice) => the job definition also needs to change (network, service …); see the sketch after these notes.
  • I commented out the namespace label.
  • I enabled snappy compression.
  • I added a custom message_to_json transform to parse the stringified message field pushed by my apps. You can remove it and use the logs source as the input for sinks.out and sinks.loki.
  • I'm using Nomad Variables, so make sure to put them into your cluster first (e.g. nomad var put nomad/jobs/vector/vector/vector user=... password=...).
  • Adding the docker-sock-ro host_volume ACL policy does not seem to be mandatory: I first tried with it, but it also works without. If you enable ACLs, be sure to bootstrap your Nomad cluster accordingly.
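For reference, if you keep the Vector API enabled instead, the group roughly needs a network port (and optionally a service registration). This is only a sketch, not taken from my actual job — the port number, service name and provider are illustrative:

group "vector" {
  network {
    port "api" {
      to = 8686 # Vector's default API port
    }
  }

  service {
    # no Consul in my setup, so Nomad's built-in service discovery
    provider = "nomad"
    name     = "vector"
    port     = "api"
  }

  # ... volumes, task, etc.
}

The task's Docker config would then also declare ports = ["api"].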

OK, so far so good, but now I'd like to push my host logs (via systemd/journald) and my host Nomad metrics to my Grafana instance. I see a few options and would like some help/guidance toward that goal:

  1. (Option 1) I can have a Vector agent running on every host to push my journald logs and my Nomad metrics. I started testing that and it works. I'll need a secure way to store/retrieve the Grafana Cloud credentials, and having Vector (already running in a container) also run on the host adds CPU consumption (maybe not a real issue … however, the second option, with a Nomad job, gives me a simple way to limit the CPU/memory of the Vector job). Moreover, Vector does not seem to have a tuning option and can/will consume all my host CPU.
  2. (Option 2) Since I already have a running Vector container on all my Nomad nodes, can I use this same Vector job/container to also push the host metrics and journald logs? I understand that there will be some security drawbacks to accessing the host data (e.g. via host_volumes), but I want to evaluate this option. I did notice that my journald logs are located in /var/log/journal on my host and are in a binary format: I can read them via the journalctl command as a privileged user, but not directly (cat/less) with the same user.
     Also, what about exposing the host Nomad metrics to Grafana from this already-running job/container?
  3. I'm open to other ideas, like not using Vector (CPU-intensive). Memory is less of a problem for me than CPU … maybe it's time to look for an alternative and go with option 1.

Context:

  • I will have a small cluster (3 nodes as client/server) ×2 (dev & prod). I can't afford much as I'm just starting out. I can dedicate some CPU on each machine to log/metrics processing, but not too much. One of my exposed APIs is fairly CPU-intensive (image processing).
  • I do not have Vault/Consul at this stage and don't want to manage them (or any other solution) as I'm writing this. I'm still evaluating cloud secret manager solutions, but they seem too pricey for now (HCP Vault / GCP / AWS / Azure …).

Any help would be greatly appreciated.
Thanks in advance for your help.

Hi Brahim,

Regarding Vector: since it's running in Nomad, you can limit the resources it consumes via the resources stanza.
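Something like this inside the Vector task (the numbers are only an illustration, not a recommendation):

resources {
  cpu    = 500 # MHz
  memory = 256 # MB
}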

For storing the Grafana Cloud credentials, you can use the new Nomad Variables.

You can collect host metrics and logs by mounting the relevant directories (e.g. /var/log/journal for logs, and /proc, /sys, etc. for metrics — the netdata Docker image is good inspiration for the mounts required), or use the raw_exec task driver to run Vector (or any agent you want) natively on the host OS, without any isolation (use with care!), but still orchestrated by Nomad.
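For the raw_exec route, a minimal sketch could look something like this — it assumes a vector binary is already installed on every client and that the raw_exec plugin is enabled in the client config; paths and values are illustrative only:

job "vector-host" {
  datacenters = ["dc1"]
  # system job, runs on all nodes
  type = "system"

  group "vector" {
    task "vector" {
      # runs directly on the host OS, no isolation
      driver = "raw_exec"

      config {
        command = "/usr/local/bin/vector"
        args    = ["--config", "local/vector.toml"]
      }

      # minimal Vector config just to show the wiring
      template {
        destination = "local/vector.toml"
        data        = <<EOH
[sources.host_journald_logs]
  type = "journald"
[sinks.out]
  type = "console"
  inputs = ["host_journald_logs"]
  encoding.codec = "json"
EOH
      }

      # keep the collector from eating the host
      resources {
        cpu    = 200 # MHz
        memory = 128 # MB
      }
    }
  }
}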

Hi @adrian.todorov and thanks for your feedback,

Since my first post, I went with option 2 and now have a working solution, with everything running in the Vector (Docker) task and pushing to Loki (logs) and Prometheus (metrics):

  • my app logs
  • journald (systemd) logs
  • nomad host metrics

I encountered a lot of small hurdles, so I'm going to post my final working config here with some notes.

Thanks again

Hello,

I finally managed to get all my logs and Nomad metrics flowing through Vector running in a Docker container (via Nomad).
Below I copy/paste my Nomad client config (here it is also set up as a server) so you can see what needs to be mounted. You'll also find my Vector job config file. I hope it will help someone.

Some notes:

  • For journald logs to be accessible from the Vector container, you need to mount /var/log/journal and give your container the same machine-id, which means also mounting /etc/machine-id (see below).
  • To reach the Nomad host from the Vector container, you may need to add an extra host mapping (--add-host host.docker.internal:host-gateway) to the Docker config; see the sketch after these notes.
  • Some Nomad metrics are not accessible (it depends on your system …). In my case, nomad_client_allocs_memory_rss is not available.
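If you do need the host.docker.internal route, the Docker driver's extra_hosts option should be the equivalent of docker run --add-host, so the task config would look roughly like this (not part of my final job below, which scrapes the Tailscale IP instead):

config {
  image = "timberio/vector:0.26.0-debian"
  # equivalent of: docker run --add-host host.docker.internal:host-gateway
  extra_hosts = ["host.docker.internal:host-gateway"]
}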

PS:

  • I use Terraform, so some variables are passed/replaced by Terraform.
  • My host machine is running Debian 11.6.

My nomad.hcl config


# Full configuration options can be found at https://www.nomadproject.io/docs/configuration

#datacenter = "dc1"
data_dir  = "/opt/nomad/data"

# Bind on tailscale interface
bind_addr = "{{ GetInterfaceIP \"tailscale0\" }}"

# See https://developer.hashicorp.com/nomad/tutorials/access-control/access-control-bootstrap
#acl {
#  enabled    = true
#}

telemetry {
  collection_interval = "15s"
  disable_hostname = true
  prometheus_metrics = true
  publish_allocation_metrics = true
  publish_node_metrics = true
}

server {
  enabled          = ${server?"true":"false"}  
  default_scheduler_config {
    memory_oversubscription_enabled = true
  }
  bootstrap_expect = ${bootstrap_expect}
}

client {
  enabled = ${client?"true":"false"}
  host_network "tailscale" {
    interface = "tailscale0"
    reserved_ports = "${reserved_ports}"
  }

  # Used for docker logs
  host_volume "docker-sock-ro" {
    path = "/var/run/docker.sock"
    read_only = true
  }

  # Used for host systemd logs
  host_volume "journald-ro" {
    path = "/var/log/journal"
    read_only = true
  }
  host_volume "machineid-ro" {
    path = "/etc/machine-id"
    read_only = true
  }
}

plugin "docker" {
  config {
    # extra Docker labels to be set by Nomad on each Docker container with the appropriate value
    extra_labels = ["job_name", "task_group_name", "task_name", "namespace", "node_name"]
  }
}

/*consul {
  address = "{{ GetInterfaceIP \"tailscale0\" }}:8500"
}*/

My Vector job file (passed to the nomad_job Terraform resource)

job "vector" {
  datacenters = ["dc1"]
  # system job, runs on all nodes
  type = "system"
  update {
    min_healthy_time = "10s"
    healthy_deadline = "5m"
    progress_deadline = "10m"
    auto_revert = true
  }
  group "vector" {
    count = 1
    restart {
      attempts = 3
      interval = "10m"
      delay = "30s"
      mode = "fail"
    }
    # docker socket volume
    volume "docker-sock" {
      type = "host"
      source = "docker-sock-ro"
      read_only = true
    }
    volume "journald" {
      type = "host"
      source = "journald-ro"
      read_only = true
    }
    volume "machineid" {
      type = "host"
      source = "machineid-ro"
      read_only = true
    }
    ephemeral_disk {
      size    = 500 # 500 MB
      sticky  = true
    }
    task "vector" {
      driver = "docker"
      config {
        image = "timberio/vector:0.26.0-debian"
      }
      # docker socket volume mount
      volume_mount {
        volume = "docker-sock"
        destination = "/var/run/docker.sock"
        read_only = true
      }
      volume_mount {
        volume = "journald"
        destination = "/var/log/journal"
        read_only = true
      }
      volume_mount {
        volume = "machineid"
        destination = "/etc/machine-id"
        read_only = true
      }
      # with VECTOR_REQUIRE_HEALTHY, Vector won't start unless the configured sinks (backends) are healthy
      env {
        VECTOR_CONFIG = "local/vector.toml"
        VECTOR_REQUIRE_HEALTHY = "true"
      }
      # resource limits are a good idea because you don't want your log collection to consume all resources available
      resources {
        cpu    = 500 # MHz
        memory = 256 # MB
      }
      # template with Vector's configuration
      template {
        destination = "local/vector.toml"
        change_mode   = "signal"
        change_signal = "SIGHUP"
        # overriding the delimiters to [[ ]] to avoid conflicts with Vector's native templating, which also uses {{ }}
        left_delimiter = "[["
        right_delimiter = "]]"
        data=<<EOH
[[- with nomadVar "nomad/jobs/vector/vector/vector" -]]
          data_dir = "alloc/data/"
          [api]
            enabled = false
          [sources.host_journald_logs]
            type = "journald"
            current_boot_only = true
            since_now = true
            include_units = []
            # Warning and above
            include_matches.PRIORITY = [ "0", "1", "2", "3", "4" ]
          [sources.logs]
            type = "docker_logs"
          [transforms.apps_logs]
            type = "remap"
            inputs = ["logs"]
            source = ".message = parse_json!(.message)"
          [sources.nomad_host_metrics]
            type = "prometheus_scrape"
            endpoints = [ "http://${nomad_host_tailnet_ip}/v1/metrics?format=prometheus" ]
            scrape_interval_secs = 15
            instance_tag = "instance"
            endpoint_tag = "endpoint"
[[ if eq "${environment}" "dev" ]]
          [sinks.out]
            type = "console"
            inputs = [ "apps_logs", "host_journald_logs", "nomad_host_metrics" ]
            encoding.codec = "json"
[[ end ]]
          [sinks.prometheus]
            type = "prometheus_remote_write"
            inputs = [ "nomad_host_metrics" ]
            endpoint = "https://prometheus-prod-01-eu-west-0.grafana.net/api/prom/push"
            healthcheck.enabled = false
            auth.strategy = "basic"
            auth.user = "[[.prometheus_user]]"
            auth.password = "[[.prometheus_password]]"
          [sinks.loki]
            type = "loki"
            inputs = ["apps_logs", "host_journald_logs"]
            endpoint = "https://[[.loki_user]]:[[.loki_password]]@logs-prod-eu-west-0.grafana.net"
            compression = "snappy"
            encoding.codec = "json"
            healthcheck.enabled = true

            # remove fields that have been converted to labels to avoid having the field twice
            remove_label_fields = true
              [sinks.loki.labels]
              # See https://vector.dev/docs/reference/vrl/expressions/#path-example-quoted-path
              job = "{{label.\"com.hashicorp.nomad.job_name\" }}"
              task = "{{label.\"com.hashicorp.nomad.task_name\" }}"
              group = "{{label.\"com.hashicorp.nomad.task_group_name\" }}"
              #namespace = "{{label.\"com.hashicorp.nomad.namespace\" }}"
              node = "{{label.\"com.hashicorp.nomad.node_name\" }}"
              correlation_id = "{{ message.requestId }}"
[[- end -]]
        EOH
      }
      kill_timeout = "30s"
    }
  }
}

Regarding the Nomad dashboards, I started with the Nomad integration provided by Grafana and made some adjustments.

I did not share my HAProxy and app logs/metrics here, but you get the idea.

Hope it helps,
Best regards,

Brahim
