Dual Stack bind and communication

This question is the only reference for dual stack in Consul that I have found.

I am testing how to enable a dual-stack Consul cluster that can support IPv4-only, dual-stacked, and IPv6-only hosts.

After digging through the code, it appears that if you have multiple public IPv6 addresses, it's not sufficient to use `bind = "[::]"`; you must also include an advertise config option, even if you specify `advertise_addr_ipv4` and `advertise_addr_ipv6` separately.

However, from what I can see, advertise accepts only a single entry, and when I set it to the IPv6 address, the node was not able to communicate with the IPv4-only servers in the Consul cluster…

Is there any sane way to support the setup, where

  • It binds to all available addresses (done: `bind = "[::]"`)
  • It advertises both v4/v6 public interfaces (done:
    • `advertise_addr_ipv4 = "{{ GetDefaultInterfaces | include \"type\" \"ipv4\" | attr \"address\" }}"`
    • `advertise_addr_ipv6 = "{{ GetDefaultInterfaces | exclude \"rfc\" \"4291\" | include \"type\" \"ipv6\" | attr \"address\" }}"`)
  • It can communicate over both addresses with the other nodes. (???)
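Put together, a minimal agent config sketch for this would be the following (the template expressions are the ones from the list above; whether this exact combination actually works is the open question, so treat it as an untested assumption):

```hcl
# Sketch of the dual-stack agent config described above.
# bind_addr, advertise_addr_ipv4, and advertise_addr_ipv6 are real Consul
# agent options; this combination is exactly the one that misbehaves.

bind_addr = "[::]"

advertise_addr_ipv4 = "{{ GetDefaultInterfaces | include \"type\" \"ipv4\" | attr \"address\" }}"
advertise_addr_ipv6 = "{{ GetDefaultInterfaces | exclude \"rfc\" \"4291\" | include \"type\" \"ipv6\" | attr \"address\" }}"
```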

I think there is a problem with IPv6 and advertise altogether, regardless of whether you mix IPv4 into the rest of the infrastructure.

Trying it out, I noticed in the logs that if the advertised address (even when derived from bind) ends up being an IPv6 address, then once everything is set up and all servers/clients have joined (via `--join`), the subsequent discovery updates fail with a malformed lookup: the IPv6 address is rendered without brackets, with the port suffixed directly.

If you try to put brackets into advertise, the agent fails on boot, trying to parse it as `advertise_addr` rather than falling through to `advertise_addr_ipv6`.
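For example, an attempt like the following (the address is a documentation-prefix placeholder, not a real deployment value) fails at startup for exactly that reason:

```hcl
# Illustrative only: a bracketed IPv6 advertise address makes the agent
# fail to boot, per the parsing behavior described above.
advertise_addr = "[2001:db8::1]"
```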

That's how far my understanding goes at the moment… still investigating the behavior.
Hope this helps.

After some thinking, I believe I figured out that a Consul cluster will never be able to support both IPv4-only and IPv6-only nodes at the same time; only one of the two in combination with optional dual-stack nodes. All nodes need to be able to join the gossip pool, which (as far as I understand) is a full mesh, so a v4-only and a v6-only partition are not able to communicate with each other.

The only way to bolt this onto an existing Consul setup is to use the network segments feature of Consul Enterprise to group the nodes into IPv6-capable and IPv6-incapable segments, because the lack of connectivity between them acts like a firewall, which is more or less exactly what segments are designed for.
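For reference, the Enterprise segment config would look roughly like this sketch (the segment name, port, and template are my assumptions, not a tested setup):

```hcl
# Server side (Consul Enterprise): define an extra segment for the
# IPv6-capable group; the default segment keeps the IPv4-only nodes.
segments = [
  {
    name      = "ipv6"
    bind      = "[::]"
    advertise = "{{ GetDefaultInterfaces | include \"type\" \"ipv6\" | attr \"address\" }}"
    port      = 8303
  }
]
```

```hcl
# Client side: IPv6-capable clients opt into that segment.
segment = "ipv6"
```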

For OSS, it's effectively a separate Consul cluster in each region if you go down this road. :expressionless:

(I would really love to see network segments come to Consul OSS, but at the latest HashiConf - IIRC - it was said that this is not currently in focus.)
