DNS Forwarding using systemd-resolved on AWS Ubuntu Minimal 18.04

As discussed in DNS Lookup via systemd-resolved on Ubuntu Minimal 18.04 · Issue #5875 · hashicorp/consul · GitHub, I am opening a topic on this discussion board; sorry for the delay.

The documentation did not work for me, for several reasons that are outlined below.

I am using AWS as a cloud platform and Ubuntu Minimal 18.04 AMI.
The Consul agent is 1.5.1 and runs as a server. Internal communication and the client interfaces are bound to the local IPv4 address (10.0.0.0 will be used as the example) using both the -bind and -client options. There is no custom DNS configuration in Consul.
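As a sketch of that setup (the address 10.0.0.0, the server count, and the data directory are example values, not necessarily the real ones):

```shell
# Example only: a Consul 1.5.1 server bound to the node's private IPv4
# for both cluster traffic (-bind) and the client interfaces (-client).
consul agent -server -bootstrap-expect=5 \
  -bind=10.0.0.0 -client=10.0.0.0 \
  -data-dir=/opt/consul
```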

dig @10.0.0.0 -p 8600 consul.service.consul ANY

consul.service.consul. 0 IN A
consul.service.consul. 0 IN TXT "consul-network-segment="
consul.service.consul. 0 IN A
consul.service.consul. 0 IN TXT "consul-network-segment="
consul.service.consul. 0 IN A
consul.service.consul. 0 IN TXT "consul-network-segment="
consul.service.consul. 0 IN A
consul.service.consul. 0 IN TXT "consul-network-segment="
consul.service.consul. 0 IN A
consul.service.consul. 0 IN TXT "consul-network-segment="

With CURL:

curl http://consul.service.consul

curl: (6) Could not resolve host: consul.service.consul

Ubuntu 18.04 uses systemd-resolved as a DNS forwarder, and its stub listener is bound to

netstat -tulpn | grep LISTEN | grep systemd-resolve

tcp 0 0* LISTEN 565/systemd-resolve

/etc/resolv.conf has following configuration:

cat /etc/resolv.conf

options edns0
search eu-west-1.compute.internal

Also /etc/resolv.conf is a symlink to /run/systemd/resolve/stub-resolv.conf.

There is also /run/systemd/resolve/resolv.conf, which has the following config:

cat /run/systemd/resolve/resolv.conf

search eu-west-1.compute.internal

I changed /etc/systemd/resolved.conf to have the following:

DNS=10.0.0.0
Domains=~consul
Then I played with iptables, trying to replace localhost with either or 10.0.0.0, but it still does not work for me.

Could you please help me figure this out?

Thank you.

My first guess is that this is a port configuration issue.

systemd-resolved allows you to configure DNS servers, but it assumes they are listening on port 53 and provides no way to configure an alternative port. From your dig commands it looks like Consul is still configured for the default port of 8600. So you have two options: first, you could change your Consul config to listen on port 53 so that systemd-resolved can reach it directly, or second, you can use iptables to translate the port for you.

For each node running systemd-resolved, the following iptables rules should properly translate port 53 to port 8600:

$ iptables -t nat -A OUTPUT -d 10.0.0.0 -p udp -m udp --dport 53 -j REDIRECT --to-ports 8600
$ iptables -t nat -A OUTPUT -d 10.0.0.0 -p tcp -m tcp --dport 53 -j REDIRECT --to-ports 8600

Whenever systemd-resolved makes an outbound request to 10.0.0.0 on port 53, it will be redirected to port 8600. There are two rules so that it works for DNS over both TCP and UDP.

If nothing is bound to port 53 on the nodes running your Consul servers, then the easiest solution is to just change the DNS port to 53 for those servers. If you run Consul as a non-root user, the process must have the CAP_NET_BIND_SERVICE capability in order to bind to that port.
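As a sketch (the config directory and binary path are assumptions; adjust them to your install), the port change plus the capability grant might look like:

```shell
# Assumed paths: adjust /etc/consul.d and the consul binary location to your setup.
cat <<'EOF' | sudo tee /etc/consul.d/dns-port.hcl
ports {
  dns = 53
}
EOF
# Allow a non-root consul process to bind to the privileged port 53.
sudo setcap 'cap_net_bind_service=+ep' /usr/local/bin/consul
sudo systemctl restart consul
```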


Yes, this is exactly what I did:

root@ip-10-0-0-0:/home/ubuntu# iptables -L -t nat

Chain PREROUTING (policy ACCEPT)
target prot opt source destination

Chain INPUT (policy ACCEPT)
target prot opt source destination

Chain OUTPUT (policy ACCEPT)
target prot opt source destination
REDIRECT udp -- anywhere ip-10-0-0-0.eu-west-1.compute.internal udp dpt:domain redir ports 8600
REDIRECT tcp -- anywhere ip-10-0-0-0.eu-west-1.compute.internal tcp dpt:domain redir ports 8600

One thing I see here is that it lists DNS names instead of IP addresses, even though the input was an IPv4 address; not sure if it matters.

It looks like systemd-resolved does not recognise the .consul domain and therefore never forwards the query. I think the reason is in this documentation:

A space-separated list of domains. These domains are used as search suffixes when resolving single-label host names (domain names which contain no dot), in order to qualify them into fully-qualified domain names (FQDNs). Search domains are strictly processed in the order they are specified, until the name with the suffix appended is found. For compatibility reasons, if this setting is not specified, the search domains listed in /etc/resolv.conf are used instead, if that file exists and any domains are configured in it. This setting defaults to the empty list.
Specified domain names may optionally be prefixed with "~". In this case they do not define a search path, but preferably direct DNS queries for the indicated domains to the DNS servers configured with the system DNS= setting (see above), in case additional, suitable per-link DNS servers are known. If no per-link DNS servers are known, using the "~" syntax has no effect. Use the construct "~." (which is composed of "~" to indicate a routing domain and "." to indicate the DNS root domain that is the implied suffix of all DNS domains) to use the system DNS server defined with DNS= preferably for all domains.

Can it be that it expects consul to resolve rather than consul.service.consul?

I really do not want to bind Consul to port 53. For example, what will happen if Consul goes down? Does that mean no requests will be resolved, e.g. AWS API requests?

The DNS names in the iptables output shouldn’t matter. Similar to netstat and other commands, iptables will sometimes perform a reverse DNS lookup and output the name instead of the IP.

The way I read the documentation (and how I tested it to work in the past when I updated the DNS forwarding guide), the ~ prefix completely changes the behavior. Without the prefix it configures a default search domain, which by definition shouldn’t be applied when you already have a fully qualified domain name. In that case systemd-resolved uses the fact that the name being resolved already contains multiple DNS labels to determine that the search domain is not needed.

When the ~ prefix is used it specifies where to send queries for specific domains.

In this case they do not define a search path, but preferably direct DNS queries for the indicated domains to the DNS servers configured with the system DNS= setting (see above), in case additional, suitable per-link DNS servers are known.
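Under that reading, a routing-domain configuration would look something like this (10.0.0.0 standing in for the agent's address, as in the earlier examples; the drop-in path is an assumption — the setting can equally go in /etc/systemd/resolved.conf itself):

```shell
# Sketch: send only *.consul queries to the Consul agent (10.0.0.0 is an example IP).
sudo mkdir -p /etc/systemd/resolved.conf.d
cat <<'EOF' | sudo tee /etc/systemd/resolved.conf.d/consul.conf
[Resolve]
DNS=10.0.0.0
Domains=~consul
EOF
sudo systemctl restart systemd-resolved
```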

I would recommend running tcpdump or wireshark on the node where the curl is failing and filtering on port 53 or port 8600. If you see DNS requests for consul.service.consul being sent to 10.0.0.0, then the ~consul bit would appear to be doing the correct thing, and if so you can then see whether iptables is redirecting to the correct port.

From tcpdump port 53 I see the following:

13:11:29.579850 IP ip-10-0-0-0.eu-west-1.compute.internal.50396 > ip-10-0-0-0.eu-west-1.compute.internal.domain: 59943+ [1au] A? consul.service.consul. (62)
13:11:29.579971 IP ip-10-0-0-0.eu-west-1.compute.internal.domain > ip-10-0-0-0.eu-west-1.compute.internal.50396: 59943 NXDomain 0/0/1 (50)

From tcpdump -i lo port 8600 I see the following:

13:14:06.172664 IP Consul-Server-10-0-0-0.node.equilibrium.consul.37132 > localhost.8600: UDP, length 53

When I add to the Consul client address configuration, everything starts working fine. Still trying to figure out why it redirects to localhost and not to 10.0.0.0.

Is there something else bound to port 53 on that interface?

It looks like something is returning an NXDomain response in your first tcpdump output, and since the requested domain name was correct (consul.service.consul.), I would assume that request never made it to Consul; rather, something else returned the response.

As an aside, I can sort of reproduce the same behavior by having Consul bind to a single non-loopback IP and then adding the same iptables rules. I am not getting an NXDomain response, but my packets are also not being routed properly.

I may have found the issue in the iptables man page:


This target is only valid in the nat table, in the PREROUTING and OUTPUT chains, and user-defined chains which are only called from those chains. It redirects the packet to the machine itself by changing the destination IP to the primary address of the incoming interface (locally-generated packets are mapped to the address). It takes one option:

--to-ports port[-port]

This specifies a destination port or range of ports to use: without this, the destination port is never altered. This is only valid if the rule also specifies -p tcp or -p udp.

So the iptables rules are implicitly mapping the IP to, which is why it works when Consul is listening on the loopback but doesn’t when it is not.

I tried using a DNAT rule instead of a REDIRECT, as that allows you to specify the address to redirect to, but that also does not work.
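For reference, the DNAT variant would look roughly like this (10.0.0.0 being the example agent address from earlier in the thread):

```shell
# DNAT rewrites both the destination address and the port explicitly,
# unlike REDIRECT, which always maps locally-generated packets to
sudo iptables -t nat -A OUTPUT -d 10.0.0.0 -p udp --dport 53 -j DNAT --to-destination 10.0.0.0:8600
sudo iptables -t nat -A OUTPUT -d 10.0.0.0 -p tcp --dport 53 -j DNAT --to-destination 10.0.0.0:8600
```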

You could just bind Consul to using -client, in which case it would work both externally over the main interface and with the redirect rules for systemd-resolved locally. It would seem that systemd-resolved requires Consul bound to localhost and/or port 53, and there isn’t a good solution for binding to a specific IP on port 8600.


I went ahead and opened up a bug report for the necessary guide updates to mention the limitations of the systemd-resolved + consul integration for DNS forwarding: https://github.com/hashicorp/consul/issues/5985


Thank you @mkeeler. Tested and it works; happy we resolved it.

Just a quick question: if my application wants to use consul.service.consul via a Consul client, do I still need to configure systemd-resolved and bind -client to on the Consul client?

How will the Consul client resolve a DNS query? How does this communication work internally?

Thank you.

I think the answer to your first question is yes. You will need to configure systemd-resolved and Consul appropriately so that your app’s DNS requests are resolved correctly.

As for how a client resolves a DNS query, it’s almost the same as how a server does it. The DNS server running on either a client or a server will generate an RPC request and “send” it to the servers. On a server the “sending” is just a function call, whereas on a client it has to msgpack-encode the request and send it over the network to the servers. The servers then generate the RPC response and “send” it back to the agent (client or server) running the DNS server.

So in the simple case, 1 DNS request to a client involves 1 RPC request from the client to the servers, after which the DNS server translates the RPC response into the appropriate DNS RRs to send back to the originator of the request. A Consul agent can be configured to use a cache for RPCs made by the DNS server, in which case it will not always have to make 1 RPC per request. However, when using the cache, the data returned by the DNS server may be served from the cache and thus not be up to date.
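That cache behavior is opt-in; a sketch of enabling it via Consul's dns_config stanza (the config directory is an assumption, and the max age should be tuned to your staleness tolerance):

```shell
# Assumed config dir; use_cache makes the DNS server answer from the agent's
# RPC cache instead of issuing one RPC per DNS query.
cat <<'EOF' | sudo tee /etc/consul.d/dns-cache.hcl
dns_config {
  use_cache = true
  cache_max_age = "10s"
}
EOF
```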


Thank you @mkeeler, you have been a super help these past days.

@mkeeler from your explanation it looks like the only port that has to be open between client and server is the RPC one, but I could not get a response when HTTPS was not open. So when I do dig or curl on consul.service.consul and HTTPS (8501) is not open, the commands fail; when I open 8501, resolution happens. Am I missing something?

What kind of errors are you seeing? Are there differences in the logs between when it’s open and when it’s not?

The DNS server does not use HTTP(S) at all. Could it be that something is trying to use that interface to report the status of a health check and is unable to when the port is closed? In that case the health check would be deemed critical and the information wouldn’t be returned via DNS.
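One way to test that theory (the address and port are examples, assuming the plain-HTTP API is reachable) is to list the service's health checks and see whether any go critical while 8501 is blocked:

```shell
# List health checks for the consul service; a critical check would explain
# why the DNS server filters the instance out of its responses.
curl http://10.0.0.0:8500/v1/health/checks/consul?pretty
```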

I think that explains this behaviour, as the Consul client is used with a Vault server, and the latter, I think, automatically registers two health checks.