I would tend to discourage colocating Nomad servers and clients except in the most resource-constrained proof-of-concept setups. Running your application workload on the same nodes as your Nomad servers could unexpectedly deprive them of necessary resources.
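As a minimal sketch of what that separation looks like in Nomad agent configuration (file names and the `bootstrap_expect` count are illustrative), each role gets its own nodes:

```hcl
# server.hcl — a dedicated Nomad server node
server {
  enabled          = true
  bootstrap_expect = 3 # expect a three-server quorum
}

client {
  enabled = false # no workloads are ever scheduled here
}
```

```hcl
# client.hcl — a dedicated Nomad client node
client {
  enabled = true
}

server {
  enabled = false
}
```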
If I were looking at colocating functions for a non-production cluster, I would consider colocating Vault, Nomad servers, and Consul servers, but keeping Nomad client nodes separate. You will see some disk contention between Consul, Nomad, and Vault (if Vault uses integrated storage; if it uses Consul for storage, the extra activity lands on Consul instead), but an errant workload has no way to create issues for the servers themselves.
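To illustrate that storage trade-off, here is a hedged sketch of the two Vault storage stanzas in question (paths and node IDs are placeholders): integrated storage keeps the disk I/O on the Vault node itself, while Consul storage shifts it onto the Consul servers.

```hcl
# Option 1: integrated storage (Raft) — Vault's own disk takes the write load
storage "raft" {
  path    = "/opt/vault/data"
  node_id = "vault-1"
}

# Option 2: Consul storage — the write load moves to the Consul servers
# storage "consul" {
#   address = "127.0.0.1:8500"
#   path    = "vault/"
# }
```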
Going below the recommended layout introduces more risk in failure cases and can create situations that are harder to debug. For example, using a Consul server as the Consul agent that Nomad talks to prevents that Nomad node from switching to a healthy instance if that Consul server becomes unwell. When you are using dedicated Consul agents (not ones running as servers), the client can potentially switch to a healthy instance in these cases.
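A minimal sketch of the dedicated-agent pattern (addresses and hostnames here are hypothetical): Nomad talks only to a local Consul agent running in client mode, and that agent handles discovering, and failing over between, healthy Consul servers.

```hcl
# Nomad agent config: point at the local Consul agent, never at a server directly
consul {
  address = "127.0.0.1:8500"
}
```

```hcl
# Consul agent config on the same node, running in client (non-server) mode
server     = false
retry_join = ["consul-1.lab.internal", "consul-2.lab.internal", "consul-3.lab.internal"]
```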
In my opinion, for small deployments it is better to sacrifice power on your compute nodes than node count. For example, I have a lab of virtual machines that I run Nomad on for testing and development. It's 12 instances; however, all of my Vault and Nomad servers run on 512 MB of RAM and 1 vCPU. I give Consul more RAM because I use it to back Vault, but I still only provide it with 1 GB. Finally, my clients are scaled for some larger workloads, so they have more vCPUs and RAM.
There are many opinions out there about how you can run smaller clusters; however, the way we describe fault tolerance and high availability around node failures is predicated on reducing the blast radius by minimizing the number of colocated services.