Connect: NOMAD_UPSTREAM_* not populated?

Hi,

I’ve been playing with Nomad, Consul, and now Connect for a week.

I have an issue with the NOMAD_UPSTREAM_* variables: they don’t seem to be populated, nor are they available as environment variables when I connect to my container.

What did I miss?

job "wallabag" {
    datacenters = ["dc1"]

    group "front" {
        network {
            port  "http"{
                to = 80
                host_network = "private"
            }
        }

        task "app" {
            driver = "docker"
            config {
                image = "wallabag/wallabag:2.4.0"
                ports = ["http"]
            }
            env {
                [...]
                SYMFONY__ENV__DATABASE_HOST = "${NOMAD_UPSTREAM_IP_wallabag_postgres}"
                SYMFONY__ENV__DATABASE_PORT = "${NOMAD_UPSTREAM_PORT_wallabag_postgres}"
                SYMFONY__ENV__REDIS_HOST = "${NOMAD_UPSTREAM_IP_wallabag_redis}"
                SYMFONY__ENV__REDIS_PORT = "${NOMAD_UPSTREAM_PORT_wallabag_redis}"
            }
            service {
                name = "wallabag-app"
                port = "http"
                tags = [
                    "traefik.enable=true",
                    [...]
                ]
                check {
                    type     = "http"
                    path     = "/"
                    interval = "120s"
                    timeout  = "2s"
                }
                connect {
                    sidecar_service {
                        proxy {
                            upstreams {
                                destination_name = "wallabag-postgres"
                                local_bind_port = 5432
                            }
                            upstreams {
                                destination_name = "wallabag-redis"
                                local_bind_port = 6379
                            }
                        }
                    }
                }
            }
        }
    }

    group "back" {
        network {
            port  "postgres"{
                to = 5432
                host_network = "private"
            }
        }

        volume "pgdata" {
            type = "host"
            source = "wallabag_pgdata"
            read_only = false
        }

        task "postgres" {
            driver = "docker"
            config {
                image = "DB_IMAGE"
                auth {
                    username = "REGISTRY_USER"
                    password = "REGISTRY_PASSWORD"
                    server_address = "REGISTRY_SERVER"
                }
                ports = ["postgres"]
            }
            env {
                [...]
            }
            volume_mount {
                volume      = "pgdata"
                destination = "/var/lib/postgresql/data"
            }
            service {
                name = "wallabag-postgres"
                port = "postgres"
                tags = [
                    "traefik.enable=false",
                ]
                check {
                    type     = "tcp"
                    interval = "10s"
                    timeout  = "2s"
                }
                connect {
                    sidecar_service {}
                }
            }
        }
    }

    group "cache" {
        network {
            port  "redis"{
                to = 6379
                host_network = "private"
            }
        }
        task "redis" {
            driver = "docker"
            config {
                image = "redis:alpine"
                ports = ["redis"]
            }
            service {
                name = "wallabag-redis"
                port = "redis"
                tags = [
                    "traefik.enable=false",
                ]
                check {
                    type     = "tcp"
                    interval = "10s"
                    timeout  = "2s"
                }
                connect {
                    sidecar_service {}
                }
            }
        }
    }

    group "cron" {
        volume "backup" {
            type = "host"
            source = "wallabag_backup"
            read_only = false
        }

        task "backup" {
            driver = "docker"
            config {
                image = "CRON_IMAGE"
                auth {
                    username = "REGISTRY_USER"
                    password = "REGISTRY_PASSWORD"
                    server_address = "REGISTRY_SERVER"
                }
            }
            env {
                WALLABAG_DB_HOST = "${NOMAD_UPSTREAM_IP_wallabag_postgres}"
            }
            volume_mount {
                volume      = "backup"
                destination = "/srv/backup"
            }
            service {
                name = "wallabag-cron"
                tags = [
                    "traefik.enable=false",
                ]
                connect {
                    sidecar_service {
                        proxy {
                            upstreams {
                                destination_name = "wallabag-postgres"
                                local_bind_port = 5432
                            }
                        }
                    }
                }
            }
        }
    }

    affinity {
        attribute = "${meta.usage}"
        value = "web"
        weight = "100"
    }
}

Intentions from wallabag-app to wallabag-redis and to wallabag-postgres have been set.
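
For reference, such an intention can be expressed as a Consul 1.9 service-intentions config entry; a sketch of the postgres one (applied with consul config write):

Kind = "service-intentions"
Name = "wallabag-postgres"
Sources = [
  {
    # allow wallabag-app to dial wallabag-postgres over Connect
    Name   = "wallabag-app"
    Action = "allow"
  }
]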

I run Nomad 1.0.1 / Consul 1.9.1 on 3 Debian 10 VMs, each with a public and a private NIC.

Hi @nsteinmetz, the connect stanza is not valid within a task-level service, only within group-level services. The validation for this should be fixed in the next release of Nomad.

Thanks @shoenig for the clarification.

Should I move only the connect part to a group-level service, or the whole service block from task level to group level?
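
In other words, should the front group end up looking roughly like this (a sketch based on my job, details elided)?

group "front" {
    network {
        mode = "bridge"  # Connect sidecars require bridge networking
        port "http" {
            to = 80
        }
    }

    # the entire service block, moved from the task up to the group
    service {
        name = "wallabag-app"
        port = "http"
        connect {
            sidecar_service {
                proxy {
                    upstreams {
                        destination_name = "wallabag-postgres"
                        local_bind_port  = 5432
                    }
                    upstreams {
                        destination_name = "wallabag-redis"
                        local_bind_port  = 6379
                    }
                }
            }
        }
    }

    task "app" {
        # ...
    }
}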

Some progress, but I can’t get past this one:

$ nomad job validate job.nomad.hcl
Job validation successful
$ nomad job run job.nomad.hcl
==> Monitoring evaluation "a169dae3"
    Evaluation triggered by job "wallabag"
==> Monitoring evaluation "a169dae3"
    Evaluation within deployment: "38c2b0b4"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "a169dae3" finished with status "complete" but failed to place all allocations:
    Task Group "front" (failed to place 1 allocation):
      * Constraint "missing drivers": 1 nodes excluded by filter
      * Constraint "missing host network \"default\" for port \"connect-proxy-wallabag-front\"": 2 nodes excluded by filter
    Task Group "cron" (failed to place 1 allocation):
      * Constraint "missing compatible host volumes": 1 nodes excluded by filter
      * Constraint "missing host network \"default\" for port \"connect-proxy-wallabag-cron\"": 1 nodes excluded by filter
      * Constraint "missing drivers": 1 nodes excluded by filter
    Task Group "cache" (failed to place 1 allocation):
      * Constraint "missing drivers": 1 nodes excluded by filter
      * Constraint "missing host network \"default\" for port \"connect-proxy-wallabag-cache\"": 2 nodes excluded by filter
    Task Group "back" (failed to place 1 allocation):
      * Constraint "missing compatible host volumes": 1 nodes excluded by filter
      * Constraint "missing host network \"default\" for port \"connect-proxy-wallabag-back\"": 1 nodes excluded by filter
      * Constraint "missing drivers": 1 nodes excluded by filter
    Evaluation "22fcc6fd" waiting for additional capacity to place remainder

Current job file:

job "wallabag" {
    datacenters = ["dc1"]

    group "front" {
        network {
            mode = "bridge"
            port  "http"{
                to = 80
                host_network = "private"
            }
        }

        task "app" {
            driver = "docker"
            config {
                image = "wallabag/wallabag:2.4.0"
                ports = ["http"]
            }
            env {
                SYMFONY__ENV__DATABASE_HOST = "${NOMAD_UPSTREAM_IP_wallabag_postgres}"
                SYMFONY__ENV__DATABASE_PORT = "${NOMAD_UPSTREAM_PORT_wallabag_postgres}"
                SYMFONY__ENV__REDIS_HOST = "${NOMAD_UPSTREAM_IP_wallabag_redis}"
                SYMFONY__ENV__REDIS_PORT = "${NOMAD_UPSTREAM_PORT_wallabag_redis}"
            }
            service {
                name = "wallabag-app"
                port = "http"
                tags = [
                    "traefik.enable=true",
                ]
                check {
                    type     = "http"
                    path     = "/"
                    interval = "120s"
                    timeout  = "2s"
                }
            }
        }
        service {
            connect {
                sidecar_service {
                    proxy {
                        upstreams {
                            destination_name = "wallabag-postgres"
                            local_bind_port = 5432
                        }
                        upstreams {
                            destination_name = "wallabag-redis"
                            local_bind_port = 6379
                        }
                    }
                }
            }
        }
    }

    group "back" {
        network {
            mode = "bridge"
            port  "postgres"{
                to = 5432
                host_network = "private"
            }
        }

        volume "pgdata" {
            type = "host"
            source = "wallabag_pgdata"
            read_only = false
        }

        task "postgres" {
            driver = "docker"
            config {
                image = "DB_IMAGE"
                auth {
                    username = "REGISTRY_USER"
                    password = "REGISTRY_PASSWORD"
                    server_address = "REGISTRY_SERVER"
                }
                ports = ["postgres"]
            }
            env {
            }
            volume_mount {
                volume      = "pgdata"
                destination = "/var/lib/postgresql/data"
            }
            service {
                name = "wallabag-postgres"
                port = "postgres"
                tags = [
                    "traefik.enable=false",
                ]
                check {
                    type     = "tcp"
                    interval = "10s"
                    timeout  = "2s"
                }
            }
        }
        service {
            connect {
                sidecar_service {}
            }
        }
    }

    group "cache" {
        network {
            mode = "bridge"
            port  "redis"{
                to = 6379
                host_network = "private"
            }
        }
        task "redis" {
            driver = "docker"
            config {
                image = "redis:alpine"
                ports = ["redis"]
            }
            service {
                name = "wallabag-redis"
                port = "redis"
                tags = [
                    "traefik.enable=false",
                ]
                check {
                    type     = "tcp"
                    interval = "10s"
                    timeout  = "2s"
                }
            }
        }
        service {
            connect {
                sidecar_service {}
            }
        }
    }

    group "cron" {
        network {
            mode = "bridge"
        }

        volume "backup" {
            type = "host"
            source = "wallabag_backup"
            read_only = false
        }

        task "backup" {
            driver = "docker"
            config {
                image = "CRON_IMAGE"
                auth {
                    username = "REGISTRY_USER"
                    password = "REGISTRY_PASSWORD"
                    server_address = "REGISTRY_SERVER"
                }
            }
            env {
                WALLABAG_DB_HOST = "${NOMAD_UPSTREAM_IP_wallabag_postgres}"
            }
            volume_mount {
                volume      = "backup"
                destination = "/srv/backup"
            }
        }
        service {
            name = "wallabag-cron"
            tags = [
                "traefik.enable=false",
            ]
            connect {
                sidecar_service {
                    proxy {
                        upstreams {
                            destination_name = "wallabag-postgres"
                            local_bind_port = 5432
                        }
                    }
                }
            }
        }
    }

    affinity {
        attribute = "${meta.usage}"
        value = "web"
        weight = "100"
    }
}

In nomad.hcl, I configured the host networks as follows:

client {
  enabled = true

  host_network "public" {
    cidr = "XX.XX.XX.XX/32"
  }
  host_network "private" {
    cidr = "10.0.3.122/16"
  }
}

If I remove the host_network from the client configuration file and from the job file, it almost works, but I’m not happy with the fact that the proxy opens ports on the public IP…
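
One idea I haven’t tested yet: since the scheduler complains about a host network literally named "default" for the connect-proxy-* ports, maybe declaring a host_network named "default" on the private CIDR would keep those proxy ports off the public NIC. An untested sketch:

client {
  enabled = true

  # ports with no explicit host_network (including the dynamic
  # connect-proxy-* sidecar ports) should land on this network
  host_network "default" {
    cidr = "10.0.3.122/16"
  }
  host_network "public" {
    cidr = "XX.XX.XX.XX/32"
  }
}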

The NOMAD_UPSTREAM_* variables are now present, e.g.:

NOMAD_UPSTREAM_ADDR_wallabag_postgres=127.0.0.1:5432

but I can’t connect to this address.

If I look in Nomad at the postgres host IP and port, I can connect to Postgres.
If I look at the Connect-related service for postgres, I can’t connect.

I would have expected the mapped port in this case to be 5432, not an exotic one:

Ports
Name                          Host Address        Mapped Port
connect-proxy-wallabag-back   XX.XX.XX.XX:26658   26658
postgres                      XX.XX.XX.XX:23880   5432

I tried to add a local_service_port as mentioned here: Port mapping with Nomad and Consul Connect - #2 by awarner-greshamtech, but it seems to have no effect.
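
What I tried looked roughly like this, in the group-level service of the backend (sketch):

service {
    name = "wallabag-postgres"
    port = "postgres"
    connect {
        sidecar_service {
            proxy {
                # tell Envoy which port the task actually listens on
                # inside the allocation's network namespace
                local_service_port = 5432
            }
        }
    }
}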

OK, it seems Envoy 1.16.0 also has an issue parsing its configuration… let’s look into that over the weekend…

The Envoy logs seem weird:

[2021-01-24 20:19:32.983][9][info][main] [source/server/server.cc:305] initializing epoch 0 (base id=0, hot restart version=disabled)
[2021-01-24 20:19:32.983][9][info][main] [source/server/server.cc:307] statically linked extensions:
[2021-01-24 20:19:32.983][9][info][main] [source/server/server.cc:309]   envoy.udp_listeners: quiche_quic_listener, raw_udp_listener
[2021-01-24 20:19:32.983][9][info][main] [source/server/server.cc:309]   envoy.quic_client_codec: quiche
[2021-01-24 20:19:32.983][9][info][main] [source/server/server.cc:309]   envoy.guarddog_actions: envoy.watchdog.abort_action, envoy.watchdog.profile_action
[2021-01-24 20:19:32.983][9][info][main] [source/server/server.cc:309]   envoy.compression.compressor: envoy.compression.gzip.compressor
[2021-01-24 20:19:32.983][9][info][main] [source/server/server.cc:309]   envoy.retry_priorities: envoy.retry_priorities.previous_priorities
[2021-01-24 20:19:32.983][9][info][main] [source/server/server.cc:309]   envoy.resource_monitors: envoy.resource_monitors.fixed_heap, envoy.resource_monitors.injected_resource
[2021-01-24 20:19:32.983][9][info][main] [source/server/server.cc:309]   envoy.thrift_proxy.filters: envoy.filters.thrift.rate_limit, envoy.filters.thrift.router
[2021-01-24 20:19:32.983][9][info][main] [source/server/server.cc:309]   envoy.health_checkers: envoy.health_checkers.redis
[2021-01-24 20:19:32.983][9][info][main] [source/server/server.cc:309]   envoy.filters.listener: envoy.filters.listener.http_inspector, envoy.filters.listener.original_dst, envoy.filters.listener.original_src, envoy.filters.listener.proxy_protocol, envoy.filters.listener.tls_inspector, envoy.listener.http_inspector, envoy.listener.original_dst, envoy.listener.original_src, envoy.listener.proxy_protocol, envoy.listener.tls_inspector
[2021-01-24 20:19:32.983][9][info][main] [source/server/server.cc:309]   envoy.transport_sockets.downstream: envoy.transport_sockets.alts, envoy.transport_sockets.quic, envoy.transport_sockets.raw_buffer, envoy.transport_sockets.tap, envoy.transport_sockets.tls, raw_buffer, tls
[2021-01-24 20:19:32.983][9][info][main] [source/server/server.cc:309]   envoy.thrift_proxy.transports: auto, framed, header, unframed
[2021-01-24 20:19:32.983][9][info][main] [source/server/server.cc:309]   envoy.dubbo_proxy.protocols: dubbo
[2021-01-24 20:19:32.983][9][info][main] [source/server/server.cc:309]   envoy.thrift_proxy.protocols: auto, binary, binary/non-strict, compact, twitter
[2021-01-24 20:19:32.983][9][info][main] [source/server/server.cc:309]   envoy.udp_packet_writers: udp_default_writer, udp_gso_batch_writer
[2021-01-24 20:19:32.983][9][info][main] [source/server/server.cc:309]   envoy.filters.http: envoy.buffer, envoy.cors, envoy.csrf, envoy.ext_authz, envoy.fault, envoy.filters.http.adaptive_concurrency, envoy.filters.http.admission_control, envoy.filters.http.aws_lambda, envoy.filters.http.aws_request_signing, envoy.filters.http.buffer, envoy.filters.http.cache, envoy.filters.http.cdn_loop, envoy.filters.http.compressor, envoy.filters.http.cors, envoy.filters.http.csrf, envoy.filters.http.decompressor, envoy.filters.http.dynamic_forward_proxy, envoy.filters.http.dynamo, envoy.filters.http.ext_authz, envoy.filters.http.fault, envoy.filters.http.grpc_http1_bridge, envoy.filters.http.grpc_http1_reverse_bridge, envoy.filters.http.grpc_json_transcoder, envoy.filters.http.grpc_stats, envoy.filters.http.grpc_web, envoy.filters.http.gzip, envoy.filters.http.header_to_metadata, envoy.filters.http.health_check, envoy.filters.http.ip_tagging, envoy.filters.http.jwt_authn, envoy.filters.http.local_ratelimit, envoy.filters.http.lua, envoy.filters.http.oauth, envoy.filters.http.on_demand, envoy.filters.http.original_src, envoy.filters.http.ratelimit, envoy.filters.http.rbac, envoy.filters.http.router, envoy.filters.http.squash, envoy.filters.http.tap, envoy.grpc_http1_bridge, envoy.grpc_json_transcoder, envoy.grpc_web, envoy.gzip, envoy.health_check, envoy.http_dynamo_filter, envoy.ip_tagging, envoy.local_rate_limit, envoy.lua, envoy.rate_limit, envoy.router, envoy.squash
[2021-01-24 20:19:32.983][9][info][main] [source/server/server.cc:309]   envoy.http.cache: envoy.extensions.http.cache.simple
[2021-01-24 20:19:32.983][9][info][main] [source/server/server.cc:309]   envoy.internal_redirect_predicates: envoy.internal_redirect_predicates.allow_listed_routes, envoy.internal_redirect_predicates.previous_routes, envoy.internal_redirect_predicates.safe_cross_scheme
[2021-01-24 20:19:32.983][9][info][main] [source/server/server.cc:309]   envoy.upstreams: envoy.filters.connection_pools.http.generic, envoy.filters.connection_pools.http.http, envoy.filters.connection_pools.http.tcp
[2021-01-24 20:19:32.983][9][info][main] [source/server/server.cc:309]   envoy.retry_host_predicates: envoy.retry_host_predicates.omit_canary_hosts, envoy.retry_host_predicates.omit_host_metadata, envoy.retry_host_predicates.previous_hosts
[2021-01-24 20:19:32.983][9][info][main] [source/server/server.cc:309]   envoy.transport_sockets.upstream: envoy.transport_sockets.alts, envoy.transport_sockets.quic, envoy.transport_sockets.raw_buffer, envoy.transport_sockets.tap, envoy.transport_sockets.tls, envoy.transport_sockets.upstream_proxy_protocol, raw_buffer, tls
[2021-01-24 20:19:32.983][9][info][main] [source/server/server.cc:309]   envoy.access_loggers: envoy.access_loggers.file, envoy.access_loggers.http_grpc, envoy.access_loggers.tcp_grpc, envoy.file_access_log, envoy.http_grpc_access_log, envoy.tcp_grpc_access_log
[2021-01-24 20:19:32.983][9][info][main] [source/server/server.cc:309]   envoy.quic_server_codec: quiche
[2021-01-24 20:19:32.983][9][info][main] [source/server/server.cc:309]   envoy.clusters: envoy.cluster.eds, envoy.cluster.logical_dns, envoy.cluster.original_dst, envoy.cluster.static, envoy.cluster.strict_dns, envoy.clusters.aggregate, envoy.clusters.dynamic_forward_proxy, envoy.clusters.redis
[2021-01-24 20:19:32.983][9][info][main] [source/server/server.cc:309]   envoy.resolvers: envoy.ip
[2021-01-24 20:19:32.983][9][info][main] [source/server/server.cc:309]   envoy.bootstrap: envoy.extensions.network.socket_interface.default_socket_interface
[2021-01-24 20:19:32.983][9][info][main] [source/server/server.cc:309]   envoy.filters.udp_listener: envoy.filters.udp.dns_filter, envoy.filters.udp_listener.udp_proxy
[2021-01-24 20:19:32.983][9][info][main] [source/server/server.cc:309]   envoy.compression.decompressor: envoy.compression.gzip.decompressor
[2021-01-24 20:19:32.983][9][info][main] [source/server/server.cc:309]   envoy.filters.network: envoy.client_ssl_auth, envoy.echo, envoy.ext_authz, envoy.filters.network.client_ssl_auth, envoy.filters.network.direct_response, envoy.filters.network.dubbo_proxy, envoy.filters.network.echo, envoy.filters.network.ext_authz, envoy.filters.network.http_connection_manager, envoy.filters.network.kafka_broker, envoy.filters.network.local_ratelimit, envoy.filters.network.mongo_proxy, envoy.filters.network.mysql_proxy, envoy.filters.network.postgres_proxy, envoy.filters.network.ratelimit, envoy.filters.network.rbac, envoy.filters.network.redis_proxy, envoy.filters.network.rocketmq_proxy, envoy.filters.network.sni_cluster, envoy.filters.network.sni_dynamic_forward_proxy, envoy.filters.network.tcp_proxy, envoy.filters.network.thrift_proxy, envoy.filters.network.zookeeper_proxy, envoy.http_connection_manager, envoy.mongo_proxy, envoy.ratelimit, envoy.redis_proxy, envoy.tcp_proxy
[2021-01-24 20:19:32.983][9][info][main] [source/server/server.cc:309]   envoy.grpc_credentials: envoy.grpc_credentials.aws_iam, envoy.grpc_credentials.default, envoy.grpc_credentials.file_based_metadata
[2021-01-24 20:19:32.983][9][info][main] [source/server/server.cc:309]   envoy.dubbo_proxy.route_matchers: default
[2021-01-24 20:19:32.983][9][info][main] [source/server/server.cc:309]   envoy.dubbo_proxy.serializers: dubbo.hessian2
[2021-01-24 20:19:32.983][9][info][main] [source/server/server.cc:309]   envoy.dubbo_proxy.filters: envoy.filters.dubbo.router
[2021-01-24 20:19:32.983][9][info][main] [source/server/server.cc:309]   envoy.stats_sinks: envoy.dog_statsd, envoy.metrics_service, envoy.stat_sinks.dog_statsd, envoy.stat_sinks.hystrix, envoy.stat_sinks.metrics_service, envoy.stat_sinks.statsd, envoy.statsd
[2021-01-24 20:19:32.983][9][info][main] [source/server/server.cc:309]   envoy.tracers: envoy.dynamic.ot, envoy.lightstep, envoy.tracers.datadog, envoy.tracers.dynamic_ot, envoy.tracers.lightstep, envoy.tracers.opencensus, envoy.tracers.xray, envoy.tracers.zipkin, envoy.zipkin
[2021-01-24 20:19:32.993][9][warning][misc] [source/common/protobuf/utility.cc:294] Configuration does not parse cleanly as v3. v2 configuration is deprecated and will be removed from Envoy at the start of Q1 2021: Unknown field in: {
  "admin": {
    "access_log_path": "/dev/null",
    "address": {
      "socket_address": {
        "address": "127.0.0.1",
        "port_value": 19001
      }
    }
  },
  "node": {
    "cluster": "wallabag-back",
    "id": "_nomad-task-1a1b5440-221b-82ae-d45f-34fbd8992f1d-group-back-wallabag-back--sidecar-proxy",
    "metadata": {
      "namespace": "default",
      "envoy_version": "1.16.0"
    }
  },
  "static_resources": {
    "clusters": [
      {
        "name": "local_agent",
        "connect_timeout": "1s",
        "type": "STATIC",
        "http2_protocol_options": {},
        "hosts": [
          {
            "pipe": {
              "path": "alloc/tmp/consul_grpc.sock"
            }
          }
        ]
      }
    ]
  },
  "stats_config": {
    "stats_tags": [
      {
        "regex": "^cluster\\.((?:([^.]+)~)?(?:[^.]+\\.)?[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.consul\\.)",
        "tag_name": "consul.destination.custom_hash"
      },
      {
        "regex": "^cluster\\.((?:[^.]+~)?(?:([^.]+)\\.)?[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.consul\\.)",
        "tag_name": "consul.destination.service_subset"
      },
      {
        "regex": "^cluster\\.((?:[^.]+~)?(?:[^.]+\\.)?([^.]+)\\.[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.consul\\.)",
        "tag_name": "consul.destination.service"
      },
      {
        "regex": "^cluster\\.((?:[^.]+~)?(?:[^.]+\\.)?[^.]+\\.([^.]+)\\.[^.]+\\.[^.]+\\.[^.]+\\.consul\\.)",
        "tag_name": "consul.destination.namespace"
      },
      {
        "regex": "^cluster\\.((?:[^.]+~)?(?:[^.]+\\.)?[^.]+\\.[^.]+\\.([^.]+)\\.[^.]+\\.[^.]+\\.consul\\.)",
        "tag_name": "consul.destination.datacenter"
      },
      {
        "regex": "^cluster\\.((?:[^.]+~)?(?:[^.]+\\.)?[^.]+\\.[^.]+\\.[^.]+\\.([^.]+)\\.[^.]+\\.consul\\.)",
        "tag_name": "consul.destination.routing_type"
      },
      {
        "regex": "^cluster\\.((?:[^.]+~)?(?:[^.]+\\.)?[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.([^.]+)\\.consul\\.)",
        "tag_name": "consul.destination.trust_domain"
      },
      {
        "regex": "^cluster\\.(((?:[^.]+~)?(?:[^.]+\\.)?[^.]+\\.[^.]+\\.[^.]+)\\.[^.]+\\.[^.]+\\.consul\\.)",
        "tag_name": "consul.destination.target"
      },
      {
        "regex": "^cluster\\.(((?:[^.]+~)?(?:[^.]+\\.)?[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.[^.]+)\\.consul\\.)",
        "tag_name": "consul.destination.full_target"
      },
      {
        "regex": "^(?:tcp|http)\\.upstream\\.(([^.]+)(?:\\.[^.]+)?\\.[^.]+\\.)",
        "tag_name": "consul.upstream.service"
      },
      {
        "regex": "^(?:tcp|http)\\.upstream\\.([^.]+(?:\\.[^.]+)?\\.([^.]+)\\.)",
        "tag_name": "consul.upstream.datacenter"
      },
      {
        "regex": "^(?:tcp|http)\\.upstream\\.([^.]+(?:\\.([^.]+))?\\.[^.]+\\.)",
        "tag_name": "consul.upstream.namespace"
      },
      {
        "regex": "^cluster\\.((?:([^.]+)~)?(?:[^.]+\\.)?[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.consul\\.)",
        "tag_name": "consul.custom_hash"
      },
      {
        "regex": "^cluster\\.((?:[^.]+~)?(?:([^.]+)\\.)?[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.consul\\.)",
        "tag_name": "consul.service_subset"
      },
      {
        "regex": "^cluster\\.((?:[^.]+~)?(?:[^.]+\\.)?([^.]+)\\.[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.consul\\.)",
        "tag_name": "consul.service"
      },
      {
        "regex": "^cluster\\.((?:[^.]+~)?(?:[^.]+\\.)?[^.]+\\.([^.]+)\\.[^.]+\\.[^.]+\\.[^.]+\\.consul\\.)",
        "tag_name": "consul.namespace"
      },
      {
        "regex": "^cluster\\.((?:[^.]+~)?(?:[^.]+\\.)?[^.]+\\.[^.]+\\.([^.]+)\\.[^.]+\\.[^.]+\\.consul\\.)",
        "tag_name": "consul.datacenter"
      },
      {
        "regex": "^cluster\\.((?:[^.]+~)?(?:[^.]+\\.)?[^.]+\\.[^.]+\\.[^.]+\\.([^.]+)\\.[^.]+\\.consul\\.)",
        "tag_name": "consul.routing_type"
      },
      {
        "regex": "^cluster\\.((?:[^.]+~)?(?:[^.]+\\.)?[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.([^.]+)\\.consul\\.)",
        "tag_name": "consul.trust_domain"
      },
      {
        "regex": "^cluster\\.(((?:[^.]+~)?(?:[^.]+\\.)?[^.]+\\.[^.]+\\.[^.]+)\\.[^.]+\\.[^.]+\\.consul\\.)",
        "tag_name": "consul.target"
      },
      {
        "regex": "^cluster\\.(((?:[^.]+~)?(?:[^.]+\\.)?[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.[^.]+)\\.consul\\.)",
        "tag_name": "consul.full_target"
      },
      {
        "tag_name": "local_cluster",
        "fixed_value": "wallabag-back"
      },
      {
        "tag_name": "consul.source.service",
        "fixed_value": "wallabag-back"
      },
      {
        "tag_name": "consul.source.namespace",
        "fixed_value": "default"
      },
      {
        "tag_name": "consul.source.datacenter",
        "fixed_value": "dc1"
      }
    ],
    "use_all_default_tags": true
  },
  "dynamic_resources": {
    "lds_config": {
      "ads": {}
    },
    "cds_config": {
      "ads": {}
    },
    "ads_config": {
      "api_type": "GRPC",
      "grpc_services": {
        "initial_metadata": [
          {
            "key": "x-consul-token",
            "value": "c532b9b4-dc7d-2a38-03bb-dc44e91d43bb"
          }
        ],
        "envoy_grpc": {
          "cluster_name": "local_agent"
        }
      }
    }
  },
  "layered_runtime": {
    "layers": [
      {
        "name": "static_layer",
        "static_layer": {
          "envoy.deprecated_features:envoy.api.v2.Cluster.tls_context": true,
          "envoy.deprecated_features:envoy.config.trace.v2.ZipkinConfig.HTTP_JSON_V1": true,
          "envoy.deprecated_features:envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager.Tracing.operation_name": true
        }
      }
    ]
  }
}

[2021-01-24 20:19:32.994][9][warning][misc] [source/common/protobuf/message_validator_impl.cc:21] Deprecated field: type envoy.api.v2.Cluster Using deprecated option 'envoy.api.v2.Cluster.hosts' from file cluster.proto. This configuration will be removed from Envoy soon. Please see https://www.envoyproxy.io/docs/envoy/latest/version_history/version_history for details. If continued use of this field is absolutely necessary, see https://www.envoyproxy.io/docs/envoy/latest/configuration/operations/runtime#using-runtime-overrides-for-deprecated-features for how to apply a temporary and highly discouraged override.
[2021-01-24 20:19:32.995][9][info][main] [source/server/server.cc:325] HTTP header map info:
[2021-01-24 20:19:32.996][9][warning][runtime] [source/common/runtime/runtime_features.cc:31] Unable to use runtime singleton for feature envoy.http.headermap.lazy_map_min_size
[2021-01-24 20:19:32.997][9][warning][runtime] [source/common/runtime/runtime_features.cc:31] Unable to use runtime singleton for feature envoy.http.headermap.lazy_map_min_size
[2021-01-24 20:19:33.001][9][warning][runtime] [source/common/runtime/runtime_features.cc:31] Unable to use runtime singleton for feature envoy.http.headermap.lazy_map_min_size
[2021-01-24 20:19:33.001][9][warning][runtime] [source/common/runtime/runtime_features.cc:31] Unable to use runtime singleton for feature envoy.http.headermap.lazy_map_min_size
[2021-01-24 20:19:33.001][9][info][main] [source/server/server.cc:328]   request header map: 608 bytes: :authority,:method,:path,:protocol,:scheme,accept,accept-encoding,access-control-request-method,authorization,cache-control,cdn-loop,connection,content-encoding,content-length,content-type,expect,grpc-accept-encoding,grpc-timeout,if-match,if-modified-since,if-none-match,if-range,if-unmodified-since,keep-alive,origin,pragma,proxy-connection,referer,te,transfer-encoding,upgrade,user-agent,via,x-client-trace-id,x-envoy-attempt-count,x-envoy-decorator-operation,x-envoy-downstream-service-cluster,x-envoy-downstream-service-node,x-envoy-expected-rq-timeout-ms,x-envoy-external-address,x-envoy-force-trace,x-envoy-hedge-on-per-try-timeout,x-envoy-internal,x-envoy-ip-tags,x-envoy-max-retries,x-envoy-original-path,x-envoy-original-url,x-envoy-retriable-header-names,x-envoy-retriable-status-codes,x-envoy-retry-grpc-on,x-envoy-retry-on,x-envoy-upstream-alt-stat-name,x-envoy-upstream-rq-per-try-timeout-ms,x-envoy-upstream-rq-timeout-alt-response,x-envoy-upstream-rq-timeout-ms,x-forwarded-client-cert,x-forwarded-for,x-forwarded-proto,x-ot-span-context,x-request-id
[2021-01-24 20:19:33.001][9][info][main] [source/server/server.cc:328]   request trailer map: 128 bytes: 
[2021-01-24 20:19:33.001][9][info][main] [source/server/server.cc:328]   response header map: 424 bytes: :status,access-control-allow-credentials,access-control-allow-headers,access-control-allow-methods,access-control-allow-origin,access-control-expose-headers,access-control-max-age,age,cache-control,connection,content-encoding,content-length,content-type,date,etag,expires,grpc-message,grpc-status,keep-alive,last-modified,location,proxy-connection,server,transfer-encoding,upgrade,vary,via,x-envoy-attempt-count,x-envoy-decorator-operation,x-envoy-degraded,x-envoy-immediate-health-check-fail,x-envoy-ratelimited,x-envoy-upstream-canary,x-envoy-upstream-healthchecked-cluster,x-envoy-upstream-service-time,x-request-id
[2021-01-24 20:19:33.001][9][info][main] [source/server/server.cc:328]   response trailer map: 152 bytes: grpc-message,grpc-status
[2021-01-24 20:19:33.005][9][info][main] [source/server/server.cc:448] admin address: 127.0.0.1:19001
[2021-01-24 20:19:33.006][9][info][main] [source/server/server.cc:583] runtime: layers:
  - name: static_layer
    static_layer:
      envoy.deprecated_features:envoy.config.trace.v2.ZipkinConfig.HTTP_JSON_V1: true
      envoy.deprecated_features:envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager.Tracing.operation_name: true
      envoy.deprecated_features:envoy.api.v2.Cluster.tls_context: true
[2021-01-24 20:19:33.007][9][info][config] [source/server/configuration_impl.cc:95] loading tracing configuration
[2021-01-24 20:19:33.007][9][info][config] [source/server/configuration_impl.cc:70] loading 0 static secret(s)
[2021-01-24 20:19:33.007][9][info][config] [source/server/configuration_impl.cc:76] loading 1 cluster(s)
[2021-01-24 20:19:33.109][9][info][config] [source/server/configuration_impl.cc:80] loading 0 listener(s)
[2021-01-24 20:19:33.109][9][info][config] [source/server/configuration_impl.cc:121] loading stats sink configuration
[2021-01-24 20:19:33.109][9][info][runtime] [source/common/runtime/runtime_impl.cc:421] RTDS has finished initialization
[2021-01-24 20:19:33.109][9][info][upstream] [source/common/upstream/cluster_manager_impl.cc:174] cm init: initializing cds
[2021-01-24 20:19:33.110][9][warning][main] [source/server/server.cc:565] there is no configured limit to the number of allowed active connections. Set a limit via the runtime key overload.global_downstream_max_connections
[2021-01-24 20:19:33.113][9][info][main] [source/server/server.cc:679] starting main dispatch loop
[2021-01-24 20:19:33.119][9][warning][config] [bazel-out/k8-opt/bin/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection termination
[2021-01-24 20:19:33.515][9][warning][config] [bazel-out/k8-opt/bin/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection termination
[2021-01-24 20:19:33.872][9][warning][config] [bazel-out/k8-opt/bin/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection termination
[2021-01-24 20:19:34.563][9][warning][config] [bazel-out/k8-opt/bin/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection termination
[2021-01-24 20:19:35.571][9][warning][config] [bazel-out/k8-opt/bin/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection termination
[2021-01-24 20:19:41.921][9][warning][config] [bazel-out/k8-opt/bin/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection termination
[2021-01-24 20:19:48.110][9][info][upstream] [source/common/upstream/cluster_manager_impl.cc:178] cm init: all clusters initialized
[2021-01-24 20:19:48.110][9][info][main] [source/server/server.cc:660] all clusters initialized. initializing init manager
[2021-01-24 20:19:49.938][9][warning][config] [bazel-out/k8-opt/bin/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection termination
[2021-01-24 20:20:02.601][9][warning][config] [bazel-out/k8-opt/bin/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection termination
[2021-01-24 20:20:03.114][9][info][config] [source/server/listener_manager_impl.cc:888] all dependencies initialized. starting workers
[2021-01-24 20:20:17.112][9][warning][config] [bazel-out/k8-opt/bin/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection termination
[2021-01-24 20:20:21.491][9][warning][config] [bazel-out/k8-opt/bin/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection termination
[2021-01-24 20:20:45.324][9][warning][config] [bazel-out/k8-opt/bin/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection termination
[2021-01-24 20:20:51.525][9][warning][config] [bazel-out/k8-opt/bin/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection termination
[2021-01-24 20:21:01.125][9][warning][config] [bazel-out/k8-opt/bin/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection termination
[2021-01-24 20:21:11.401][9][warning][config] [bazel-out/k8-opt/bin/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection termination
[2021-01-24 20:21:25.876][9][warning][config] [bazel-out/k8-opt/bin/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection termination
[2021-01-24 20:21:41.002][9][warning][config] [bazel-out/k8-opt/bin/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection termination
[2021-01-24 20:22:04.697][9][warning][config] [bazel-out/k8-opt/bin/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection termination
[2021-01-24 20:22:19.412][9][warning][config] [bazel-out/k8-opt/bin/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection termination
[2021-01-24 20:22:43.424][9][warning][config] [bazel-out/k8-opt/bin/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection termination
[2021-01-24 20:22:46.114][9][warning][config] [bazel-out/k8-opt/bin/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection termination
[2021-01-24 20:23:15.537][9][warning][config] [bazel-out/k8-opt/bin/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection termination
[2021-01-24 20:23:35.080][9][warning][config] [bazel-out/k8-opt/bin/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection termination
[2021-01-24 20:23:39.369][9][warning][config] [bazel-out/k8-opt/bin/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection termination

It seems this open issue may be related to the deprecated fields: Fix deprecated envoy name warnings · Issue #8425 · hashicorp/consul · GitHub

Giving up on Consul Connect for now unless someone has an idea; I’ll reevaluate it later.

I’ll just use Nomad and Consul without Connect.