May 9, 2026 · 13 min read

Docker Swarm vs Kubernetes vs Nomad: Choosing Your Container Orchestrator

Three container orchestrators, three different opinions on what orchestration should cost you. Docker Swarm is dead. Kubernetes is the default. Nomad is the one worth knowing about before you default. Here's the real comparison.

Coding Protocols Team
Platform Engineering

Container orchestration has a clear hierarchy in 2026: Kubernetes is the default, Docker Swarm is effectively abandoned, and Nomad is the underdog that earns its place in specific contexts. If you're evaluating orchestrators, the honest answer is that for most teams the decision is already made — but "most teams" isn't all teams.

This post is for the teams where the decision isn't obvious: you're not a 3-person startup (Compose works), you're not a FAANG-scale platform team (Kubernetes obviously), and you want to understand what you're actually choosing between before you commit.


Docker Swarm

What It Is

Docker Swarm is Docker's built-in clustering mode. You initialise a Swarm manager, join worker nodes, and deploy stacks using Docker Compose-compatible syntax. docker stack deploy is the deploy command. The mental model is: Docker Compose, but across multiple machines.

bash
docker swarm init
docker swarm join --token <token> <manager-ip>:2377

docker stack deploy -c docker-compose.yml myapp

What Works

The operational simplicity is real. If your team knows Docker Compose, they know Swarm. The concepts are identical: services, networks, volumes, secrets. There's no new API server, no CRDs, no admission controllers, no Helm. A three-node Swarm cluster can be running in 10 minutes.
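The Compose compatibility is literal: a Swarm stack file is a Compose file with a deploy block. A minimal sketch (service name, image tag, and ports are illustrative):

```yaml
# docker-compose.yml — deployable with `docker stack deploy -c docker-compose.yml myapp`
services:
  web:
    image: myapp:v1.2.3        # illustrative image tag
    ports:
      - "8080:8080"
    deploy:
      replicas: 3              # Swarm spreads 3 replicas across the cluster
      update_config:
        parallelism: 1         # rolling update, one task at a time
```

The same file runs under plain docker compose up on a laptop; Swarm simply reads the deploy block that Compose ignores.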

For teams with a single application, a handful of services, and no need for autoscaling or multi-tenancy, Swarm delivers everything required at a fraction of Kubernetes' complexity.

Why It's Dead

Docker Swarm has not received meaningful new features since 2019, when Mirantis acquired Docker Enterprise and deprioritised it. Swarm is still functional — it hasn't been removed — but it's in maintenance mode with no roadmap.

The practical consequences:

  • No autoscaling — Swarm has neither horizontal service autoscaling nor Karpenter-style node provisioning based on workload demand
  • No ecosystem — the cloud-native tooling (Prometheus, Argo CD, Istio, Kyverno, Cert-Manager) all target Kubernetes APIs; none of it works with Swarm
  • No managed offerings — no major cloud offers a managed Swarm service today. You're self-managing.
  • No GPU scheduling, no WASM support, no COSI, no Gateway API
  • Hiring — platform engineers with Swarm expertise are increasingly rare; Kubernetes is the industry standard on CVs

The verdict: do not start new projects on Docker Swarm. If you're running Swarm today, plan a migration. The migration target is almost always Kubernetes — see Kubernetes vs Docker Compose: When to Use Which for the migration path logic.


Kubernetes

What It Is

Kubernetes is a distributed system for scheduling containerised workloads across a cluster of machines. It is the CNCF graduated project, the platform that every managed cloud Kubernetes service (EKS, AKS, GKE) is built on, and the ecosystem anchor for the cloud-native landscape.

Kubernetes provides:

  • Declarative API — desired state; the control plane reconciles reality to match
  • Workload primitives — Deployment, StatefulSet, DaemonSet, Job, CronJob
  • Service discovery — DNS-based, with Service and Ingress abstractions
  • Autoscaling — HPA, VPA, KEDA, Karpenter
  • Extensibility — CRDs, admission webhooks, operators — Kubernetes can be extended to manage any stateful resource
  • RBAC, NetworkPolicy, PodSecurity — multi-tenant security model
  • Storage — CSI drivers for block, file, and object storage
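
As a point of comparison with the Nomad job spec later in this post, here is a minimal sketch of a three-replica service as a Kubernetes Deployment (names and image tag are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: myapp:v1.2.3       # illustrative image tag
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: 500m
              memory: 256Mi
```

In practice a Service object would accompany this for discovery, and an Ingress or Gateway for external traffic — each a separate manifest.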

Where It Wins

Ecosystem. The CNCF landscape exists because of Kubernetes. Argo CD, Flux, Istio, Linkerd, Cert-Manager, External Secrets Operator, Velero, Kyverno, Falco — every tool in modern platform engineering assumes a Kubernetes control plane. If you're running Kubernetes, you can pick from the best tool for each job. If you're not, you're building those capabilities yourself.

Managed offerings. EKS, AKS, GKE, DOKS, Linode LKE — every major cloud provides managed Kubernetes. The control plane is handled; you pay for nodes. This removes the hardest part of self-managing Kubernetes (etcd backup, control plane HA, version upgrades on the API server).

Industry standard. Your next hire knows Kubernetes. Your consultants know Kubernetes. StackOverflow answers are for Kubernetes. The documentation is for Kubernetes. The CNCF Kubernetes certification (CKA, CKAD, CKS) exists. This is a real operational advantage that compounds over time.

Where It Costs You

Complexity. A production-ready Kubernetes cluster requires decisions and operational investment before you deploy a single workload: networking (CNI), storage (CSI), ingress controller, certificate management, secrets management, RBAC design, cluster autoscaling, observability stack, and backup. Each of these is a separate system with its own configuration surface.

Non-container workloads. Kubernetes is designed for container workloads. Running VMs, bare-metal batch processes, or non-containerised legacy applications requires workarounds (KubeVirt for VMs; nothing comparable to Nomad's raw_exec driver — see below).

Small teams. For a 2–3 person team without dedicated platform engineering capacity, Kubernetes' operational overhead is real. Managed Kubernetes (EKS, GKE) removes a lot of it, but you still need someone who understands Kubernetes to operate it.


Nomad

What It Is

Nomad is HashiCorp's workload orchestrator. It's fundamentally different from Kubernetes in philosophy: Nomad is a general-purpose task scheduler, not a container-first platform.

Where Kubernetes has a rich set of workload types (Deployment, StatefulSet, DaemonSet), Nomad has a single primitive: the job. A job contains task groups, which contain tasks. Tasks can be:

  • Docker containers (driver = "docker")
  • Podman containers (driver = "podman")
  • Raw executables (driver = "raw_exec") — run a binary directly on the host
  • Java JARs (driver = "java")
  • QEMU VMs (driver = "qemu")
  • Isolated host executables (driver = "exec") — like raw_exec, but sandboxed with chroot and cgroups
hcl
job "api" {
  datacenters = ["dc1"]
  type = "service"

  group "web" {
    count = 3

    network {
      port "http" { to = 8080 }
    }

    task "api" {
      driver = "docker"

      config {
        image = "myapp:v1.2.3"
        ports = ["http"]
      }

      resources {
        cpu    = 500
        memory = 256
      }
    }
  }
}

Nomad uses HCL (HashiCorp Configuration Language), not YAML. The job spec is more concise than equivalent Kubernetes manifests.

Where Nomad Wins

Mixed workload types. If you're running containers alongside raw binaries, JVM processes, or legacy applications, Nomad schedules all of them with a single control plane. Kubernetes can run non-container workloads but requires significant workarounds. Nomad treats all workload types as first-class.
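A non-containerised binary is just another task. A sketch, with a hypothetical binary path:

```hcl
task "legacy-batch" {
  driver = "raw_exec"                   # runs directly on the host, no isolation

  config {
    command = "/opt/legacy/bin/batch"   # hypothetical legacy binary
    args    = ["--once"]
  }
}
```

Note that raw_exec is disabled by default and has to be explicitly enabled in the client configuration, precisely because it runs unsandboxed on the host.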

Simplicity at small-to-medium scale. A three-node Nomad cluster (one server, two clients) is genuinely simpler to operate than an equivalent Kubernetes cluster. No etcd to manage separately (Nomad uses Raft internally), no API server overhead, no admission webhook infrastructure. The Nomad agent is a single binary.

HashiCorp ecosystem. Nomad integrates natively with Consul (service mesh and service discovery), Vault (secrets injection), and Terraform (infrastructure provisioning). If your organisation is already on this stack, Nomad is the natural scheduler.
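For instance, Vault secrets can be rendered straight into a task's environment via a template block — a sketch assuming a KV v2 secret at secret/data/api (the path, field, and policy name are illustrative):

```hcl
task "api" {
  driver = "docker"

  vault {
    policies = ["api-read"]    # illustrative Vault policy
  }

  template {
    data        = <<EOF
{{ with secret "secret/data/api" }}
DB_PASSWORD={{ .Data.data.password }}
{{ end }}
EOF
    destination = "secrets/app.env"
    env         = true         # export rendered key=value pairs as env vars
  }
}
```

Nomad fetches the secret, renders the template, and restarts or re-renders on rotation — no external secrets operator required.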

Federation. Nomad has first-class multi-datacenter and multi-region federation built in. A single Nomad cluster can span multiple datacenters. Kubernetes multi-cluster federation is an ongoing ecosystem problem with no single authoritative solution (KubeFed, Karmada, Liqo, Admiralty — none are as clean as Nomad's native federation).

Performance at scale. Nomad's scheduling throughput is genuinely impressive — HashiCorp's one- and two-million-container benchmark challenges (C1M, C2M) demonstrated very high placement throughput on modest server counts. Kubernetes' scheduler is performant but more complex. For batch workloads with many short-lived tasks, Nomad's scheduler overhead is lower.

Where Nomad Falls Short

Ecosystem. This is the inverse of Kubernetes' advantage. The CNCF tooling doesn't support Nomad. Argo CD doesn't deploy to Nomad. Prometheus has a Nomad exporter, but native operator-pattern integrations don't exist. Cert-Manager, External Secrets Operator, Kyverno — none of these work with Nomad. You build or buy equivalents.

Stateful workloads. Nomad has volume support (CSI, host volumes) but lacks Kubernetes' StatefulSet semantics — ordered deployment, stable network identity, per-pod PVCs. Running databases in Nomad is possible but requires more manual orchestration. StatefulSets exist in Kubernetes specifically because the general scheduler needed these guarantees; Nomad's general-purpose model doesn't have an equivalent.
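Volumes work, but the wiring is manual. A sketch of a host volume, which must first be declared in the client's own configuration (all names are illustrative):

```hcl
group "db" {
  volume "pgdata" {
    type      = "host"
    source    = "pgdata"       # must match a host_volume block in the client config
    read_only = false
  }

  task "postgres" {
    driver = "docker"

    volume_mount {
      volume      = "pgdata"
      destination = "/var/lib/postgresql/data"
    }

    config {
      image = "postgres:16"
    }
  }
}
```

What you don't get is the StatefulSet guarantee that replica N always reattaches to volume N under a stable network identity.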

No admission control framework. Kubernetes has a mature webhook admission architecture that tools like Kyverno and OPA/Gatekeeper plug into. Nomad has Sentinel (enterprise-only) for policy enforcement. The open-source Nomad policy story is thin.

Kubernetes momentum. The industry has consolidated on Kubernetes. New tools target Kubernetes first and Nomad as an afterthought or never. If you adopt Nomad, you are accepting a narrower tooling ecosystem and a smaller talent pool for the foreseeable future.

BSL licensing (2023). HashiCorp relicensed Nomad (along with Terraform and Consul) from MPL 2.0 to the Business Source License in 2023. BSL prohibits using the software to compete with HashiCorp. For most internal use cases this doesn't matter, but it's a concern for any commercial product built on Nomad, and it has pushed some organisations toward community forks (OpenTofu, in Terraform's case).


Side-by-Side Comparison

|  | Docker Swarm | Kubernetes | Nomad |
|---|---|---|---|
| Status | Maintenance only | Active, industry standard | Active (BSL licensed) |
| Setup complexity | Low | High | Medium |
| Non-container workloads | No | Via workarounds | Yes (first-class) |
| Managed offerings | None | EKS, AKS, GKE, etc. | HCP Nomad (HashiCorp) |
| Autoscaling | No | Yes (HPA, KEDA, Karpenter) | Basic (Nomad Autoscaler) |
| Multi-cluster federation | No | Complex ecosystem | Native |
| Ecosystem (CNCF tooling) | None | Comprehensive | Limited |
| Stateful workload support | Limited | Strong (StatefulSet) | Functional, no StatefulSet equivalent |
| Policy enforcement | None | Kyverno, OPA/Gatekeeper | Sentinel (enterprise only) |
| License | Apache 2.0 | Apache 2.0 | BSL 1.1 |
| Best for | Don't use | Most production workloads | Mixed workloads, HashiCorp shops |

How to Choose

Choose Kubernetes if:

  • You're building a platform that will run for multiple years
  • You need the CNCF ecosystem (Argo, Istio, Kyverno, etc.)
  • You're using a managed cloud platform (EKS, AKS, GKE removes the operational burden)
  • Your workloads are container-first
  • You need to hire engineers who already know the platform

Choose Nomad if:

  • You have genuine mixed workload requirements (containers + VMs + raw binaries)
  • You're already on HashiCorp stack (Vault + Consul + Terraform) and want native integration
  • You need multi-datacenter federation without the complexity of Kubernetes multi-cluster
  • You're running high-throughput batch workloads where Nomad's scheduler performance matters
  • You have a small platform team that can't absorb Kubernetes' operational surface

Don't choose Swarm — full stop. If you're on Swarm, migrate to Kubernetes or, at minimum, to a managed container service (ECS, Cloud Run). Swarm's feature gap versus Kubernetes will only grow.


Frequently Asked Questions

Is Nomad dead after the BSL relicense?

No. HashiCorp (now part of IBM, via an acquisition announced in 2024) continues to develop Nomad. The BSL licensing affects organisations building commercial products on top of Nomad, not internal use. However, the relicense accelerated the community's evaluation of Kubernetes as an alternative — the question "why use Nomad over Kubernetes?" became harder to answer after the license change.

Can I run Nomad alongside Kubernetes?

Yes — some organisations run both. A pattern: Kubernetes for containerised production services, Nomad for mixed/batch workloads, sharing the same Consul service mesh for discovery. This works but introduces a dual control plane to operate.

What replaced Docker Swarm?

For teams that wanted Swarm's simplicity, the realistic alternatives are: managed container services (AWS ECS, Google Cloud Run, Azure Container Apps) for simple deployments, or lightweight Kubernetes distributions (k3s, k0s) for teams that want Kubernetes primitives without the full complexity. k3s is a common landing spot for Swarm migrants — it's genuinely simple to install and run.

Is k3s a viable option here?

k3s (Rancher/SUSE) is a CNCF-certified Kubernetes distribution packaged as a single binary with sensible defaults. It's not a separate orchestrator — it's Kubernetes with SQLite (instead of etcd) as the default backend for clusters up to ~100 nodes, stripped of cloud-provider-specific code. If Kubernetes' setup complexity is the barrier, k3s removes it. All Kubernetes tooling works on k3s unchanged.

What about OpenNomad / OpenTofu equivalent for Nomad?

OpenTofu (Terraform fork, MPL 2.0) is mature and actively developed. An equivalent community fork for Nomad doesn't yet have the same momentum — the community that left HashiCorp over the relicense mostly moved to Kubernetes rather than forking Nomad. Watch this space, but don't depend on it.


For the full Kubernetes vs Docker Compose decision guide, see Kubernetes vs Docker Compose: When to Use Which. For managed Kubernetes platform comparisons, see EKS vs AKS: A Production Engineer's Comparison.

Choosing between orchestrators for a production platform? Talk to us at Coding Protocols — we've helped teams make this decision and live with the consequences.

Related Topics

Kubernetes
Docker Swarm
Nomad
Container Orchestration
Platform Engineering
DevOps
HashiCorp
