Synchronizing the service discovery catalog between servers & clients in one DC

Originally posted by 1Const1

Overview of the Issue

Service discovery information registered on one node does not appear to be synchronized to the other servers and clients in the same datacenter.

Reproduction Steps

Hello,

I couldn't find any documentation that clearly explains this behavior, and similar questions in the official community and in Google Groups have gone unanswered, so I have no choice but to ask here. I don't know whether this is a bug or not, but I need to find out.

My problem is the following. My cluster configuration looks like this:

Node           Address            Status  Type    Build  Protocol  DC     Segment
consulserver1  192.168.0.89:8301  alive   server  1.7.3  2         mydc1  <all>
consulserver2  192.168.0.90:8301  alive   server  1.7.3  2         mydc1  <all>
consulserver3  192.168.0.92:8301  alive   server  1.7.3  2         mydc1  <all>
consulagent1   192.168.0.91:8301  alive   client  1.7.3  2         mydc1  <default>

I register my service on one of the Consul servers, or on the client, with a request like this (PUT /v1/agent/service/register?token=):

{
	"ID": "myservice-192-168-0-120-656158",
	"Name": "myservice",
	"Tags": ["version=1.0", "secure=false"],
	"Address": "192.168.0.120",
	"Port": 58749,
	"Check": {
		"Interval": "2s",
		"HTTP": "http://192.168.0.120:58749/actuator/health",
		"Header": {},
		"Timeout": "10s",
		"DeregisterCriticalServiceAfter": "40s"
	}
}

I can send this request to the leader consulserver1 or to the agent consulagent1. I get a 200 back, and on that node I can see that my service was registered successfully.
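
For reference, here is roughly the same registration with the official Go client (github.com/hashicorp/consul/api); a minimal sketch, assuming the agent's HTTP API is on the default port 8500:

package main

import (
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	// DefaultConfig targets the agent at 127.0.0.1:8500 and honors
	// CONSUL_HTTP_ADDR / CONSUL_HTTP_TOKEN if they are set.
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Equivalent of the PUT /v1/agent/service/register payload above.
	reg := &api.AgentServiceRegistration{
		ID:      "myservice-192-168-0-120-656158",
		Name:    "myservice",
		Tags:    []string{"version=1.0", "secure=false"},
		Address: "192.168.0.120",
		Port:    58749,
		Check: &api.AgentServiceCheck{
			HTTP:                           "http://192.168.0.120:58749/actuator/health",
			Interval:                       "2s",
			Timeout:                        "10s",
			DeregisterCriticalServiceAfter: "40s",
		},
	}
	if err := client.Agent().ServiceRegister(reg); err != nil {
		log.Fatal(err)
	}
}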

But I am using a cluster. So I expect that if I register my service on consulserver1, information about it will be synchronized to all the other nodes of the cluster, and then if consulserver1 goes down I can use consulserver2 or consulserver3 to discover the service that was registered on consulserver1. But I can't.
Whether I register the service on consulserver1 or on consulagent1, only that same node provides discovery for my service; the other nodes of the cluster know nothing about it, because it was registered on a different server or client node. Is this the correct behavior?

So I expect that if I register a service on the agent or on any server node, and the service passes its health check, then all cluster nodes will know about my service and keep the service catalog in sync with each other, no matter on which node of the cluster I registered it. Right now it looks like this: I register service1 on consulserver1, then I register service2 on consulserver2; when service2 calls service1, I get an error that service1 can't be found, because consulserver2 doesn't know about the service1 that was registered on consulserver1.
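
For example, this is roughly how I check it (a sketch with the same Go client, pointed at consulserver2; the 8500 HTTP port is my assumption, not verified config):

package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	// Ask consulserver2, i.e. NOT the node where the service
	// was registered (8500 is the assumed HTTP API port).
	cfg := api.DefaultConfig()
	cfg.Address = "192.168.0.90:8500"
	client, err := api.NewClient(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// /v1/agent/services only lists services registered with
	// THIS agent, so a service registered elsewhere is missing.
	services, err := client.Agent().Services()
	if err != nil {
		log.Fatal(err)
	}
	_, found := services["myservice-192-168-0-120-656158"]
	fmt.Println("found on consulserver2's agent:", found) // prints false
}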

Additional info:

agent:
	check_monitors = 0
	check_ttls = 0
	checks = 0
	services = 0
build:
	prerelease = 
	revision = 8b4a3d95
	version = 1.7.3
consul:
	acl = disabled
	bootstrap = false
	known_datacenters = 1
	leader = true
	leader_addr = 192.168.0.89:8300
	server = true
raft:
	applied_index = 35738
	commit_index = 35738
	fsm_pending = 0
	last_contact = 0
	last_log_index = 35738
	last_log_term = 1664
	last_snapshot_index = 32775
	last_snapshot_term = 1664
	latest_configuration = [{Suffrage:Voter ID:0e94646a-567f-1eb3-7579-fd7c0ba7d8e8 Address:192.168.0.90:8300} {Suffrage:Voter ID:af1d9b30-c60a-e1c9-97cf-06f8c426a121 Address:192.168.0.92:8300} {Suffrage:Voter ID:21b19af5-44a3-7abe-e0ce-625f289016f2 Address:192.168.0.89:8300}]
	latest_configuration_index = 0
	num_peers = 2
	protocol_version = 3
	protocol_version_max = 3
	protocol_version_min = 0
	snapshot_version_max = 1
	snapshot_version_min = 0
	state = Leader
	term = 1664
runtime:
	arch = amd64
	cpu_count = 2
	goroutines = 132
	max_procs = 2
	os = linux
	version = go1.13.7
serf_lan:
	coordinate_resets = 0
	encrypted = true
	event_queue = 0
	event_time = 24
	failed = 0
	health_score = 0
	intent_queue = 0
	left = 0
	member_time = 24
	members = 4
	query_queue = 0
	query_time = 1
serf_wan:
	coordinate_resets = 0
	encrypted = true
	event_queue = 0
	event_time = 1
	failed = 0
	health_score = 0
	intent_queue = 0
	left = 0
	member_time = 10
	members = 3
	query_queue = 0
	query_time = 1 

Operating system and Environment details

Ubuntu Linux 18.04

Hi @1Const1,

Thanks for posting. I’ve moved this to the Discuss Forum, as you’re likely to get a better reply here than on our issues page.

First, have you taken a look at the Consul Architecture documentation and the How to Register a Service guide? These go through the concepts and steps to get your service registered correctly.

I wanted to address a few of your comments in line, as well.

I'm expecting that if I register my service on consulserver1, information about it will be synchronized to all the other nodes of the cluster, and then if consulserver1 goes down I can use consulserver2 or consulserver3 to discover the service that was registered on consulserver1

So, this is how Consul works today. In the architecture doc, you’ll see the request path for how Consul communicates. A service is registered with its local Consul client. This request is forwarded to a Server, and if the Server is NOT the leader, the Server forwards the request to the leader. This way, if consulserver3 becomes leader after a failure of consulserver1, consulserver3 will respond with the catalog information. You can see this in the Querying a Service section of the guide. The catalog is what needs to be queried to discover services.
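
To make this concrete, here is a minimal sketch of such a catalog-backed query with the official Go client (github.com/hashicorp/consul/api); it assumes the default HTTP port 8500, and it can be pointed at any agent in the cluster:

package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	// Any agent (client or server) forwards this read to the
	// servers, so it works no matter which node you ask.
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Equivalent of GET /v1/health/service/myservice?passing=true:
	// returns only instances whose health checks are passing.
	entries, _, err := client.Health().Service("myservice", "", true, nil)
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range entries {
		fmt.Printf("%s:%d\n", e.Service.Address, e.Service.Port)
	}
}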

Please let me know if this answers your question.
Happy coding!

Thank you for the answer. To summarize: the client application should always send its requests to a Consul client (not a server), and the Consul client then takes responsibility for forwarding the request to a server or to the server leader.

Because if the request goes from the client application directly to a Consul server (with no Consul client between the client app and the Consul server), then, as I discovered, the discovery information is only ever available on that one Consul server and is not transferred to the others.
Correct me if I'm wrong, but in that case a Consul server should not directly accept such a request from a client app (one that uses the API PUT /v1/agent/service/register?token=) and should always communicate only with a Consul client. Right now you can send requests from a client app to either a Consul client or a Consul server with a successful result; in my view, PUT /v1/agent/service/register should return 200 OK only on a Consul client, and direct access to that API on a Consul server should be forbidden to outside client apps.
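
So if I understand it right, the pattern is: register through the agent on the application's own host, then discover through the catalog on any node. A rough sketch of what I mean (again the Go client; the 8500 HTTP port on consulserver3 is my assumption):

package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	// The app talks to the Consul client on its own host;
	// DefaultConfig targets 127.0.0.1:8500.
	local, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	if err := local.Agent().ServiceRegister(&api.AgentServiceRegistration{
		Name:    "myservice",
		Address: "192.168.0.120",
		Port:    58749,
	}); err != nil {
		log.Fatal(err)
	}

	// Read the registration back through a different node via the
	// catalog; this succeeds once anti-entropy has synced it.
	cfg := api.DefaultConfig()
	cfg.Address = "192.168.0.92:8500" // consulserver3
	remote, err := api.NewClient(cfg)
	if err != nil {
		log.Fatal(err)
	}
	svcs, _, err := remote.Catalog().Service("myservice", "", nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("instances visible via consulserver3:", len(svcs))
}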