Filebeat only harvesting its own logs and not those of other jobs running on the same machine

Hi Team,

I am running filebeat:7.17.12 as a Nomad system job in order to collect logs from every instance deployed on my cluster, but so far I am only collecting logs from Filebeat itself. I also tried the Docker autodiscover provider, but it fails with a permission issue which I have not been able to resolve either…

I need help collecting logs from all running Nomad jobs, thanks!

Sharing my configuration here:

Nomad Version: 1.7.2
Filebeat Version: 7.17.12

job "filebeat" {
  datacenters = ["dc1"]
  type        = "system"

  group "filebeat" {
    task "filebeat" {
      driver = "docker"

      config {
        image      = "docker.elastic.co/beats/filebeat:7.17.12"
        privileged = true

        args = [
          "-e",
          "-strict.perms=false"
        ]

        volumes = [
          "local/filebeat.yml:/usr/share/filebeat/filebeat.yml",
          "/var/lib/docker/containers:/var/lib/docker/containers",
          "/var/run/docker.sock:/var/run/docker.sock",
        ]
      }

      resources {
        cpu    = 50
        memory = 128
      }

      env {
        NOMAD_NODE_NAME = "${node.unique.name}"
      }

      template {
        data = <<EOF
filebeat.config:
  logging.level: debug
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: false

filebeat.autodiscover:
  providers:
    - type: nomad
      address: "http://{{env "attr.unique.network.ip-address"}}:4646"
      secret_id: "{{ with nomadVar "nomad/jobs" }}{{ .nomad_token }}{{ end }}"
      hints.enabled: true
      scope: node
      node: "{{env "NOMAD_NODE_NAME"}}"
      templates:
        - condition:
            equals:
              nomad.namespace: default
          config:
            - type: log
              paths:
                - "/opt/nomad/data/alloc/${NOMAD_ALLOC_ID}/alloc/logs/${NOMAD_TASK_NAME}.stderr.[0-9]*"
                - "/opt/nomad/data/alloc/${NOMAD_ALLOC_ID}/alloc/logs/${NOMAD_TASK_NAME}.stdout.[0-9]*"

processors:
  - add_cloud_metadata: ~

output.elasticsearch:
  hosts: [{{range service "elasticsearch"}}"http://{{ .Address }}:9200"{{end}}]
  username: "elastic"
  password: "strong_PASSWORD_123"
EOF

        destination = "local/filebeat.yml"
      }
    }
  }
}

Hi @andresogando10, at first glance it looks like you have Filebeat configured in such a way that it will only autodiscover its own logs:

  paths:
    - "/opt/nomad/data/alloc/${NOMAD_ALLOC_ID}/alloc/logs/${NOMAD_TASK_NAME}.stderr.[0-9]*"
    - "/opt/nomad/data/alloc/${NOMAD_ALLOC_ID}/alloc/logs/${NOMAD_TASK_NAME}.stdout.[0-9]*"

The ${NOMAD_ALLOC_ID} and ${NOMAD_TASK_NAME} variables are specific to the filebeat task here. Can these be wildcards instead?
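Something like this, perhaps (an untested sketch on my part; the base path just mirrors your current config):

  config:
    - type: log
      paths:
        - "/opt/nomad/data/alloc/*/alloc/logs/*.stderr.[0-9]*"
        - "/opt/nomad/data/alloc/*/alloc/logs/*.stdout.[0-9]*"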


Hi Seth,
Thanks for pointing that out. I tried it without the NOMAD_TASK_NAME env, just with a wildcard, and it does harvest files, but only the same Filebeat files.
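That is, something like:

  paths:
    - "/opt/nomad/data/alloc/${NOMAD_ALLOC_ID}/alloc/logs/*.stderr.[0-9]*"
    - "/opt/nomad/data/alloc/${NOMAD_ALLOC_ID}/alloc/logs/*.stdout.[0-9]*"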

Still no solution found.

You are only collecting two files:

  paths:
    - "/opt/nomad/data/alloc/${NOMAD_ALLOC_ID}/alloc/logs/${NOMAD_TASK_NAME}.stderr.[0-9]*"
    - "/opt/nomad/data/alloc/${NOMAD_ALLOC_ID}/alloc/logs/${NOMAD_TASK_NAME}.stdout.[0-9]*"

You have to collect all the files across all allocations, something along the lines of:

  paths:
    - "/opt/nomad/data/alloc/*/alloc/logs/*.stderr.[0-9]*"
    - "/opt/nomad/data/alloc/*/alloc/logs/*.stdout.[0-9]*"

I am not familiar with that config and do not know offhand whether those patterns are globs or regexes; consult the Filebeat documentation.
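For what it's worth, the log input's paths option is documented as glob-based (Go Glob patterns are supported), so the * segments and [0-9] character classes above should work. One more thing worth checking, though this is an assumption on my part since I do not run Nomad: those are host paths, and your job only mounts the Docker log directory and socket into the container, so Filebeat may not see /opt/nomad/data at all unless you also mount the allocation directory, something like:

        volumes = [
          "local/filebeat.yml:/usr/share/filebeat/filebeat.yml",
          "/var/lib/docker/containers:/var/lib/docker/containers",
          "/var/run/docker.sock:/var/run/docker.sock",
          # assumption: expose the host's alloc dirs so the glob paths above can match
          "/opt/nomad/data/alloc:/opt/nomad/data/alloc",
        ]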