Some questions about waypoint project config

First off, congrats! Waypoint seems like a really interesting project, and I’m excited by the potential of separating the deploy/release phases!

I haven’t had much of a chance to dive into the code beyond skim-reading some RPCs, but it seems that at the moment the Waypoint configuration for a project is stored locally on the developer’s machine, and the Waypoint CLI instructs the server to enqueue specific jobs based on the contents of that local config file?
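For context, this is the kind of local config I mean — a minimal `waypoint.hcl` living in the repo, with illustrative app/plugin names:

```hcl
# waypoint.hcl — kept in the repository, read by the local CLI
project = "my-project"

app "web" {
  build {
    use "docker" {}
  }

  deploy {
    use "kubernetes" {}
  }
}
```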

I think I remember some mention during HashiConf Digital that Waypoint will eventually allow folks to perform deploys/releases through an API or Slack. Would each of these clients need to have a local copy of the project’s Waypoint configuration, or would Waypoint get the config by some other means? e.g. would the build step upload the Waypoint configuration that was in use at that moment, so that it would be used by all subsequent operations for that build?

On a related note, do you folks have plans to allow the server to have some control over project configuration?

I agree that moving an application’s deployment configuration closer to the code is a good thing, but I’m worried that having to maintain each of these config files individually could make it more difficult to standardise deployment pipelines. If there are parts of your deployment workflow that you want every project to implement, how would you ensure that’s happening without manually auditing every config file?

As an example, the documentation for hooks demonstrates how hooks can be used to record an audit log of actions, but if this audit log relies on an entry in every application’s Waypoint config file, it’ll be quite difficult to verify that the audit log is covering all projects correctly without inspecting the config file for each and every project. Another situation where you might need to enforce consistency is ensuring all projects are using a specific version of a plugin.
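To make that concrete, the audit-log hooks I’m describing would be per-project config along these lines (the script path is made up):

```hcl
app "web" {
  deploy {
    use "kubernetes" {}

    # Audit logging only happens if every project remembers
    # to include these stanzas — there's no central enforcement
    hook {
      when    = "before"
      command = ["./scripts/audit-log.sh", "deploy-start"]
    }

    hook {
      when    = "after"
      command = ["./scripts/audit-log.sh", "deploy-finish"]
    }
  }
}
```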

These are great questions! We support the ability to have the code for the application, and thus its configuration as well, fetched by the Runner rather than managed by a smart client (like the CLI). This work is pretty basic right now, and that’s why we didn’t get into it much in the presentations or docs yet.

The way that it works is that you can tell the runner that the application code and configuration are remote, with a URL to fetch them from. Right now I believe that only supports git. This means that the configuration continues to live alongside the application, and thin clients don’t need access to either the code or the configuration to queue jobs.
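As a rough sketch of the shape this takes in `waypoint.hcl` — the exact stanza and field names here are assumptions on my part while this work is still early, and may change:

```hcl
# Sketch only: tell the runner where to fetch code + config from,
# instead of relying on a local copy on the developer's machine
runner {
  enabled = true

  data_source "git" {
    url = "https://github.com/example/my-project.git"
    ref = "main"
  }
}
```

With something like this in place, a thin client (Slack, an API call) only needs to name the project; the runner clones the repo and reads the `waypoint.hcl` that lives there.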

Thanks for the info Evan, I’m looking forward to seeing how the runner stuff turns out!

I was just wondering if you had any thoughts on the second part of my question: whether the server might be able to enforce consistency or conventions between projects without requiring a developer to audit all the project config files?

Could you give me an example of something you’d want to enforce?

Most of the ones that spring to mind are about ensuring all projects have hooks to update external systems, e.g. “notify our monitoring tooling before/after each deploy”, “create a release in Bugsnag”.

Some of the other situations I can think of are:

  • a plugin we use has a known bug, so we need to pin to the previous working version until a fix has been released
  • there’s a security issue in a specific version of the golang/ruby runtime, we want to ensure all projects are building artefacts with a safe version of the runtime
  • we’re changing our conventions for how we tag project artefacts, and want to ensure all projects are doing it consistently
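For the tagging case, each project would carry its own copy of the convention, and drift is easy to miss — something like this (registry/image names are made up):

```hcl
app "web" {
  build {
    use "docker" {}

    registry {
      use "docker" {
        image = "registry.example.com/web"
        # The tagging convention every project is supposed to follow —
        # nothing stops a single repo from drifting to a different scheme
        tag = gitrefpretty()
      }
    }
  }
}
```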

I guess most of these use cases boil down to “allow the server to hook into different stages of the process and possibly prevent a stage from being executed/completed”?