Terraform-ls - dealing with language server as a daemon

After the port feature was added to terraform-ls, I’ve been trying to use it as a daemon, maximizing req-concurrency so that I can use it from multiple windows.
However, the Visual Studio Code extension does not seem to have a proper configuration option for communicating with the language server through the TCP port, and only through it.

Is there a way this could be viable?


Hi @marcus,

There is an experimental setting in the extension for this:

  "terraform.languageServer.tcp.port": 13373,

We have yet to document it because we have yet to consider this functionality production-ready and are primarily using it for debugging purposes.

You can give it a try and see if it works for you, but you’ve been warned :wink:
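For that setting to do anything, the server itself has to be started in TCP mode and listening on the same port. A minimal sketch, assuming the serve command accepts a port flag (verify the exact flag name against `terraform-ls serve -help` on your version):

```shell
# Start terraform-ls as a long-running TCP server instead of stdio;
# the extension setting above would then connect to this port.
terraform-ls serve -port=13373
```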

Do you mind sharing what you’ve planned concerning maximizing req-concurrency? Instead of using multiple windows, VS Code offers workspace support, which allows you to open numerous folders while sharing a single language server instance. That would be our preferred and more stable solution than specifying a TCP port.
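For reference, a multi-root workspace is just a `.code-workspace` file listing the folders, which you open via File > Open Workspace from File; all folders are then served through one extension host. The paths below are made-up examples, and this is generic VS Code behavior rather than anything extension-specific:

```jsonc
// example.code-workspace (hypothetical folder layout)
{
  "folders": [
    { "path": "infra/networking" },
    { "path": "infra/compute" }
  ],
  "settings": {
    // workspace-wide settings, including the extension's, go here
  }
}
```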

Thanks a lot for the reply.
I found that configuration parameter while digging through the source code yesterday.

The idea is to share the language server between our VS Code instances running on Windows.
As we all know, the terraform-ls and terraform binaries for Windows are extremely slow in comparison to their ELF (Linux) counterparts.

So, we’re playing around with VS Code + tunneling and some nifty tricks (to share the same language server between several people).

You have no idea how fast it is compared to running the language server, and even terraform itself, as a plain Windows binary.

The req-concurrency is set at 30 now.

Can you expand on the “extremely slow” here? i.e. in what context is it slow? Also what do you mean by “ELF”?

I have never seen LSP used in the context where the server is used by multiple users. Although it’s probably technically possible, there are likely some privacy/security aspects which the LSP as a protocol generally doesn’t account for - and hence neither does our server. We don’t do any kind of intentional data separation (except what is necessary in terms of session separation), so you implicitly have to fully trust anyone who you share the server with.

In principle though we do support the concept of “sessions” which should allow multiple clients to talk to the same server. We have never considered that these sessions would be owned by different users however as that’s not a use case LSP documents either AFAIK.

There are a few reasons we keep the concurrency conservatively low at 2:

  • higher concurrency is unlikely to benefit a single user, who will rarely generate enough simultaneous requests to actually take advantage of higher parallelism (i.e. typically a single human won’t be editing more than one file at a time).
  • LSP declares that many requests need to be processed in the same order they arrived (e.g. textDocument/didChange), which pretty much implies low concurrency anyway, at least per session.

We are generally open to making performance improvements! So if you can describe the scenario in which the language server or the client (VS Code extension) are slow and we can make it faster [for everyone], we would be happy to look into that.

We do not expect our users to need to custom-compile the language server or have to tweak the req-concurrency flag – certainly not for performance reasons. It is there mainly for debugging purposes.

Let’s just establish something here: the goal is not to argue or point out who is right or wrong; I’m here only to learn from you guys.

The daemon appeal was just a ‘kid’s idea’ we wanted to play with - not something that we will run and spread throughout production.

As for terraform being slow on Windows in comparison to Linux: just bring up a full VS Code with the HashiCorp extension and the terraform + terraform-ls binaries running on Windows.

Do some terraform validate and terraform fmt (format-on-save might help reproduce it), and you will see what I’m talking about.

My experience with that was so painful that I felt forced to set up a WSL environment and run everything inside WSL, except for VS Code itself.

Not joking, that’s something that really requires attention.

I am vaguely aware that on Windows, IPC and generally “external command execution” can be slower compared to other platforms. Admittedly, I don’t experience this myself, as I’m not a regular Windows user, so I apologise for any implied lack of empathy or knowledge in this area. :sweat_smile:

I’d expect this to only impact formatting (when we run terraform fmt -) or the “validate” command execution (when we run terraform validate). I would not expect it to impact the experience as a whole, unless you enabled “formatting on type”, “validation on type”, or both.
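For illustration, formatting a buffer roughly corresponds to piping the document to terraform’s stdin, as described above; the exact invocation inside the extension may differ:

```shell
# Each format-on-save incurs one external process launch like this,
# which is the part that can be comparatively slow on Windows.
echo 'variable "region" {type=string}' | terraform fmt -
```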

As for Terraform itself, there’s still IPC involved when Terraform talks to each plugin/provider (which runs as a separate process). At this point it seems highly unlikely that this design would change any time soon, but I don’t want to make assumptions, as I don’t maintain Terraform Core or go-plugin (the plugin system which effectively implies IPC). Either way, I would expect this to have a relatively minor impact, unless you use a lot of providers (which implies a lot of IPC chatter). In general, I’d assume that more time during apply or destroy is spent waiting for the remote APIs to do something (e.g. finish creating a hosted database cluster) rather than waiting for a plugin process to launch.

I do admit that I have not yet read anything about why IPC is an issue on Windows and what we can do about it as maintainers - aside from not using it and bundling all functionality within the same binary (i.e. either bundling the whole LS into Terraform, or re-implementing validate and format within the LS), or nudging more people towards WSL.

I’d welcome any hints and ideas.

FWIW VS Code supports WSL and we’d recommend using it on Windows: Developing in the Windows Subsystem for Linux with Visual Studio Code