Hi there,
I’m Hieu from Verda. We’re a European GPU cloud provider. We recently developed a Terraform/OpenTofu provider for Verda so users can manage Verda resources (GPU compute, storage, containers, and related infrastructure) through infrastructure-as-code rather than manual steps.
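For anyone new to the workflow: once a configuration targets the provider, the usual Terraform lifecycle applies (with OpenTofu, the same commands run as `tofu init`, `tofu plan`, `tofu apply`):

```shell
terraform init   # downloads the provider declared in the configuration
terraform plan   # previews the resources that would be created
terraform apply  # provisions the infrastructure
```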
To validate the provider in a real scenario, we battle-tested it with an example that deploys DeepSeek-R1 (NVFP4) served with SGLang across 4× NVIDIA B300 SXM6 nodes with local NVMe. The goal was to show a full, end-to-end Terraform configuration that actually provisions infrastructure and runs a workload.
The example covers:
- configuring the Verda provider in Terraform
- defining multi-GPU compute resources
- attaching local NVMe storage
- provisioning and deploying SGLang workloads
- running a reproducible benchmark
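As a rough sketch of how the first three steps fit together in a single configuration (the registry source, resource type, and attribute names below are illustrative assumptions, not the provider's actual schema; the published example has the real one):

```hcl
terraform {
  required_providers {
    verda = {
      # Assumed registry address; check the provider's registry listing
      source = "verda/verda"
    }
  }
}

provider "verda" {
  # Credentials are typically supplied via an environment variable
  # or a Terraform variable rather than hard-coded here.
}

# Hypothetical resource and attribute names, for illustration only.
resource "verda_instance" "sglang_node" {
  count         = 4               # one per B300 SXM6 node
  instance_type = "b300-sxm6"     # assumed type identifier
  image         = "ubuntu-24.04"  # assumed base image name
}
```

The workload deployment and benchmark steps would then run against the provisioned nodes, e.g. via provisioning scripts referenced from the same configuration.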
We published the example so others can see how to structure similar configurations. The provider works with both Terraform and OpenTofu.
We’re sharing it here in case anyone is building or consuming custom providers and wants a non-trivial example of infrastructure and a workload managed together through Terraform.