Get Nomad job logs into Splunk/Elasticsearch

What's the recommended way to get Docker logs into both the Nomad CLI/GUI and an external logging facility like ELK?

The following works, but it breaks the `nomad logs` CLI and the Nomad GUI:

```hcl
logging {
  type = "syslog"
  config {
    "syslog-address"  = "udp://127.0.0.1:514"
    "syslog-facility" = "local4"
    "tag"             = "foobar"
  }
}
```

Some people recommend using the sidecar pattern to run Filebeat, Logstash, Fluentd, or Vector (vector.dev) alongside the Nomad job.

I'd rather not duplicate logging configuration in every Nomad job, and instead run an agent on every Nomad client that forwards all logs. To do that, how would you correlate an allocation to a service?

How have others centralized Nomad logs?

Hey Spencer,

We use SumoLogic in our environment, so for now I'm using the vendor-provided Docker image for logging containers. I run it as a system job on all Nomad clients. The config basically mounts the Docker socket so the container can read logs via Docker's API.

This approach lets me configure labels in the Nomad Docker job. The Sumo agent looks for Docker labels that correlate environment, app name, etc., to categorize the logs per container.

I don't have to touch the default Docker json-file logging approach or any logging options in the Nomad jobs. It also doesn't impact Nomad's internal logging for jobs, so `nomad alloc logs` commands still work just fine.
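For reference, a minimal sketch of what such a system job might look like. The image name is a placeholder for your vendor's collector image, and mounting an absolute host path like the Docker socket assumes the client's Docker plugin allows it (`docker.volumes.enabled = true`):

```hcl
job "log-collector" {
  datacenters = ["dc1"]
  type        = "system" # one instance on every Nomad client

  group "collector" {
    task "collector" {
      driver = "docker"

      config {
        # placeholder image -- substitute your vendor's collector
        image = "example/log-collector:latest"

        # mount the Docker socket so the collector can read container
        # logs and labels via the Docker API
        volumes = [
          "/var/run/docker.sock:/var/run/docker.sock",
        ]
      }

      resources {
        cpu    = 100
        memory = 256
      }
    }
  }
}
```

Application jobs then just set `labels { ... }` in their Docker driver `config` block; the collector picks those up from the Docker API without any per-job logging stanza.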

I believe something similar could be set up with Filebeat as a system job, but I haven't tried it since we don't use Elastic for logs. A cursory glance at the docs suggests you can probably configure this with "autodiscover hints":

https://www.elastic.co/guide/en/beats/filebeat/current/running-on-docker.html
https://www.elastic.co/guide/en/beats/filebeat/current/configuration-autodiscover-hints.html
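Based on those docs, a hints-based setup would look roughly like this (a sketch only; hosts and values are illustrative):

```yaml
filebeat.autodiscover:
  providers:
    - type: docker
      # read co.elastic.logs/* container labels to decide what to
      # collect and how to parse it
      hints.enabled: true

output.elasticsearch:
  hosts: ["elasticsearch:9200"]
```

With hints enabled, individual containers can opt out (or tune parsing) via labels such as `co.elastic.logs/enabled: "false"`, which maps nicely onto labels set in the Nomad Docker driver config.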


Hello!
One approach that I have successfully implemented is to install Filebeat on the Nomad client node itself.

In this context, Filebeat uses the Docker input to capture logs from all containers running on that node: https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-docker.html

I added a rule to the Filebeat configuration to only capture logs from containers that have the label logging=True (a convention I created to control which containers should have their logs forwarded to Logstash; the event is dropped when the label is not present): https://www.elastic.co/guide/en/beats/filebeat/current/drop-event.html
https://www.nomadproject.io/docs/drivers/docker.html#labels
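Putting those two pieces together, the Filebeat side would look something like this sketch (the exact label field path varies by Filebeat version -- older releases use `docker.container.labels.*` instead of `container.labels.*` -- and the Logstash host is illustrative):

```yaml
filebeat.inputs:
  - type: docker
    containers.ids: ["*"] # capture every container on this node

processors:
  # enrich events with container metadata, including labels
  - add_docker_metadata: ~
  # drop anything that does not carry the logging=True label
  - drop_event:
      when:
        not:
          equals:
            container.labels.logging: "True"

output.logstash:
  hosts: ["logstash:5044"]
```

On the Nomad side, a job opts in simply by setting `labels { "logging" = "True" }` in its Docker driver `config` block.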

To install Filebeat on the Nomad client nodes, you can either install it directly on each machine or run Filebeat as a system job that runs on every client node, which is more convenient.
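As a sketch, the system-job variant might look like the following. The image tag is illustrative, and mounting absolute host paths (the Docker socket and the json-file log directory) assumes the client's Docker plugin allows host volumes:

```hcl
job "filebeat" {
  datacenters = ["dc1"]
  type        = "system" # runs on every client node

  group "filebeat" {
    task "filebeat" {
      driver = "docker"

      config {
        image = "docker.elastic.co/beats/filebeat:7.9.0"

        volumes = [
          # the Docker input needs the socket plus the json-file
          # logs on the host (read-only is enough for the logs)
          "/var/run/docker.sock:/var/run/docker.sock",
          "/var/lib/docker/containers:/var/lib/docker/containers:ro",
          # relative path resolves to the task directory, where the
          # template below renders the config
          "local/filebeat.yml:/usr/share/filebeat/filebeat.yml",
        ]
      }

      template {
        destination = "local/filebeat.yml"
        data        = <<EOF
filebeat.inputs:
  - type: docker
    containers.ids: ["*"]
output.logstash:
  hosts: ["logstash:5044"]
EOF
      }
    }
  }
}
```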

Another option is to run nomad_follower as a system job. I did some work on it recently to let it run on locked-down clusters, use temporary Vault credentials to fetch allocations, and so forth. Since it uses a Nomad token rather than the Docker socket, it's a little more secure, and it writes the logs out to a host_volume-mounted directory (so you do need Nomad 0.10.0).
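For anyone unfamiliar with host volumes (new in Nomad 0.10), the wiring looks roughly like this; the volume name and path are illustrative:

```hcl
# In the Nomad client configuration, declare the host volume:
client {
  host_volume "service-logs" {
    path      = "/var/log/nomad-services"
    read_only = false
  }
}

# In the system job, request and mount it:
group "follower" {
  volume "logs" {
    type   = "host"
    source = "service-logs"
  }

  task "follower" {
    volume_mount {
      volume      = "logs"
      destination = "/logs" # the follower writes collected logs here
    }
    # ... driver config for the log follower goes here ...
  }
}
```

A node-local shipper can then pick up everything under `/var/log/nomad-services` without touching the Docker socket.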

Here’s my fork with the fixes:

Good luck!