I think the default buildpacks (we don’t maintain these; they are Cloud Native Buildpacks) expect a web process. Using a purely non-web container with buildpacks may require a custom buildpack. Someone more familiar with buildpacks may be able to weigh in.
If you use the plain Docker builder then it should work, although features such as the URL service won’t work, which makes sense for a non-web workload.
Do you have multiple processes you want to run? If it’s just the one process, then this example worked for me without listening to a web port. You shouldn’t need a Procfile if you only have one process type.
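For reference, with multiple process types a Procfile declares each one; a minimal sketch might look like this (the process names and commands are illustrative, not from the example above):

```
web: node server.js
worker: node consume-queue.js
```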
Another tip for playing with buildpacks, try using the pack CLI as shown here.
Yeah, I can see how to get it working with a custom buildpack or just bringing my own Dockerfile, but was trying to see how far I could get with the off-the-shelf buildpack.
That said, if it’s not intended to work without a web process, that’s fine, just a little unexpected, I guess, given that the buildpack API allows process types other than “web”.
Looking at the pack documentation (thank you, @jbayer!), it seems like what I want is to set `default-process` in the Waypoint pack stanza.
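If the Waypoint pack stanza supported selecting the process, it might look roughly like this (the option name `default-process` comes from the pack docs; whether Waypoint’s pack builder accepts an option like this is an assumption, so check the plugin docs for the actual key):

```hcl
app "worker" {
  build {
    use "pack" {
      # Hypothetical option: pick a non-web buildpack process type
      # as the container's default process.
      default-process = "worker"
    }
  }
}
```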
Or, even easier, just specify the docker command to run after building the image to run a non-default process.
Yup, we were laser-focused on HTTP workloads for 0.1, so we plan to start extending from there to non-web workloads. Is the service you’re deploying TCP-based? Or is it not something that directly consumes client requests of any kind?
@evanphx Yeah, so I’ve been working on systems composed out of services consuming messages off of NATS queues, so they don’t accept any TCP connections other than connecting to the broker.
It’s possible that I’d still do health checks through HTTP, especially if I’m already spinning up a prometheus endpoint or something, but I definitely wouldn’t need/want a kubernetes Service resource since I don’t intend to expose that at all.
Makes sense to focus on HTTP for now, though. Nice work on this!
So, thinking this through there are a few bits to consider:
1. Flexibility of liveness probes. This is pretty easy; we could expose exec probes or even the ability to turn the probe off entirely, not a big issue.
2. Disabling the Service object. This isn’t a huge deal, though it means effectively having no release functionality either, because there is no Service object to pivot between.
3. Probably manipulating one Deployment object for rollout, rather than creating a new one. Right now Waypoint creates a new Deployment object for each waypoint deploy, to allow the release phase the ability to point between two independent sets. For your use case, I think you’d instead want just one Deployment object that gets manipulated on each waypoint deploy.
For #3, the default deployment behavior is probably fine, even if HTTP traffic isn’t being routed there, especially if the rollout behavior is configurable.
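Putting #1 and #2 together, a non-web Deployment with an exec liveness probe and no accompanying Service object could look something like this sketch (names, image, and the probe command are all placeholders, not anything Waypoint generates today):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nats-consumer
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nats-consumer
  template:
    metadata:
      labels:
        app: nats-consumer
    spec:
      containers:
        - name: consumer
          image: registry.example.com/consumer:latest
          # exec probe instead of an HTTP probe, since the pod
          # listens on no port; it only dials out to the broker.
          livenessProbe:
            exec:
              command: ["/bin/sh", "-c", "test -f /tmp/healthy"]
            periodSeconds: 10
# Note: no Service object at all, since nothing connects in.
```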
I could see a release process that does something like:
1. Spin up a new deployment
2. Check some metrics to make sure message ratios remain the same and that there’s no spike in exceptions or error messages
3. Continue with the deploy or roll back depending on the result of #2
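The metric check in that flow could be sketched as a simple decision function (the metric names, thresholds, and promote/rollback vocabulary here are made up for illustration; this is not existing Waypoint functionality):

```python
def decide_rollout(baseline_rate: float, canary_rate: float,
                   canary_errors: int, max_error_spike: int = 5,
                   tolerance: float = 0.2) -> str:
    """Compare the new deployment's message-processing rate and error
    count against the baseline, and decide whether to continue or
    roll back. All thresholds are illustrative defaults."""
    # A spike in errors is an immediate rollback.
    if canary_errors > max_error_spike:
        return "rollback"
    # The message ratio (canary throughput vs. baseline) must stay
    # within the configured tolerance.
    if baseline_rate > 0 and abs(canary_rate - baseline_rate) / baseline_rate > tolerance:
        return "rollback"
    return "promote"

print(decide_rollout(100.0, 98.0, 0))   # healthy: promote
print(decide_rollout(100.0, 60.0, 0))   # throughput dropped: rollback
print(decide_rollout(100.0, 99.0, 12))  # error spike: rollback
```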
That said, there are other projects that do this (Argo Rollouts, for example), so maybe that’s not really the responsibility of Waypoint.
Can we revisit this topic now that some time has passed? I have a Node.js web app I’m deploying with Waypoint using the pack + ecs plugins. I am now adding background worker functionality using graphile/worker, and I’m not sure what my options are for adjusting my Waypoint usage to support this.
Ideally, running `waypoint up` would create both web and worker ECS tasks from the one image, and let us independently control the number of instances of the web tasks and the worker tasks.
Is there a suitable way to accomplish this today with waypoint?
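One shape this might take is a waypoint.hcl with two app stanzas sharing the one codebase, each deployed as its own ECS service. This is only a sketch under assumptions: whether the built image can be reused across apps, how the worker command is overridden, and the exact aws-ecs option names would all need checking against the plugin docs.

```hcl
project = "my-node-app"

app "web" {
  build {
    use "pack" {}
  }
  deploy {
    use "aws-ecs" {
      # Desired number of web task instances (illustrative).
      count = 3
    }
  }
}

app "worker" {
  build {
    # Assumption: rebuilds (or reuses) the same image as "web".
    use "pack" {}
  }
  deploy {
    use "aws-ecs" {
      # Worker task count, controlled independently of "web".
      count = 2
      # Hypothetical: the container command would need overriding
      # here to run the graphile/worker process instead of the
      # web server.
    }
  }
}
```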