Deploying a few microservices with Docker is a solved problem. You build an image, you run it, and you’re done. But what happens when “a few” turns into a dozen? The manual process of ssh and docker run across different servers quickly becomes a major bottleneck. Deployments become stressful, error-prone, and slow.

This is the exact situation my team faced when our project grew to twelve interdependent services. Coordinating updates, managing resources, and ensuring everything started in the right order was a full-time job. We needed an orchestrator.

Our first thought was Kubernetes. It’s the industry standard, powerful, and has a massive community. But after a closer look, we realized it was overkill. The complexity, the sheer number of concepts to learn, and the operational overhead felt like using a sledgehammer to crack a nut. We needed something simpler. That’s when we found HashiCorp Nomad. It promised the core scheduling and orchestration we needed without the steep learning curve, making it a perfect fit.

A Simpler Approach to Orchestration

Nomad’s philosophy is all about simplicity and flexibility. It ships as a single binary that can act as a server or a client. This makes setting up a cluster surprisingly straightforward.

  • Servers (or the Control Plane): These nodes are the brains of the operation. They manage the cluster state, decide where to place your applications, and handle failures. For high availability, you run three or five of these to prevent a single point of failure.
  • Clients (or Worker Nodes): These are the workhorses. They are the machines that actually run your applications, whether they are Docker containers, standalone binaries, or even virtual machines.

Let’s set up a minimal, single-node cluster on our local machine to see it in action. You’ll need to have Nomad and Docker installed.

First, create a server configuration file named server.hcl.

// server.hcl
data_dir  = "/tmp/nomad/server"
bind_addr = "0.0.0.0"

server {
  enabled          = true
  bootstrap_expect = 1 // We expect only one server in this demo cluster
}

Next, create a client configuration file named client.hcl.

// client.hcl
data_dir = "/tmp/nomad/client"

client {
  enabled = true
  servers = ["127.0.0.1:4647"] // Point the client to our server
}

Now, open two terminal windows: one to run the Nomad agent and one for issuing commands against it.

In the first terminal, start the Nomad server.

# Terminal 1: Start the server
# The -dev flag is great for local testing.
# For production, you'd use the config file.
sudo nomad agent -dev

Normally you would start the client in the second terminal, but the -dev flag conveniently runs both a server and a client in a single agent, so for this demo one command is enough. In a real multi-machine setup, you would run server agents on one set of machines and client agents on the others.
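For reference, on a real cluster you would point each agent at its configuration file instead of using -dev:

# On a server machine
sudo nomad agent -config server.hcl

# On each client machine
sudo nomad agent -config client.hcl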

You can check the status of your node with:

# Check that our node is ready
nomad node status

You should see one node listed with a status of ready. That’s it. You have a running Nomad cluster.

Defining and Running Your First Job

In Nomad, you describe the work you want to run using a job file written in HCL. This file tells Nomad everything it needs to know: what to run, how many copies, and what resources it needs.

Let’s create a job file to run a simple Nginx web server. Save this as nginx.nomad.hcl.

// nginx.nomad.hcl
job "web-server" {
  datacenters = ["dc1"] // Target our default datacenter
  type = "service"

  group "nginx-group" {
    count = 2 // Run two instances of our Nginx container

    network {
      port "http" {
        // Nomad will find an open port on the host and map it
        // to port 80 inside the container.
        to = 80
      }
    }

    task "nginx-task" {
      driver = "docker" // Tell Nomad to use the Docker driver

      config {
        image = "nginx:1.21"
        ports = ["http"] // Name of the port defined in the network block
      }

      resources {
        cpu    = 100 // MHz
        memory = 128 // MB
      }
    }
  }
}

This job file defines a job named “web-server”. Inside, a group specifies that we want two instances. The task block is the core unit of work, instructing Nomad to run the nginx:1.21 Docker image.
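If you want to preview the scheduler’s decision before submitting anything, Nomad has a dry-run command:

# Dry-run the job to see what Nomad would schedule
nomad job plan nginx.nomad.hcl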

To run this job, execute the following command:

nomad job run nginx.nomad.hcl

Nomad will now schedule the two Nginx containers on your client node. You can check the status of the job:

nomad job status web-server

You’ll see two allocations, each with a status of running. To find out which ports they are running on, you can inspect an allocation:

# Use the ID from the status command
nomad alloc status <ALLOCATION_ID>

Look for the “Network” section in the output. You’ll see the dynamic host port that Nomad assigned, which maps to port 80 in your container.
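With that port in hand, you can hit one of the Nginx instances directly. Substitute the host port shown in the allocation output:

# Replace <DYNAMIC_PORT> with the host port from the allocation output
curl http://localhost:<DYNAMIC_PORT>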

Built-In Service Discovery

Now for the real magic. When you have multiple microservices, they need to communicate. How does a frontend service find the api service when its IP address and port are assigned dynamically by the scheduler?

Many systems require an extra tool like Consul or etcd for service discovery. Nomad (since version 1.3) has service discovery built in. Let’s see how it works by creating a job with two services: a simple API and a dashboard that needs to connect to it.

Save this as app.nomad.hcl.

// app.nomad.hcl
job "my-app" {
  datacenters = ["dc1"]

  group "api" {
    count = 1

    network {
      port "http" {
        // The counting service listens on 9001 inside the container,
        // so map the dynamic host port to it.
        to = 9001
      }
    }

    // This service block makes the service discoverable
    service {
      name     = "api-service"
      port     = "http"
      provider = "nomad" // Register in Nomad's built-in catalog
      // Nomad will automatically run health checks
      check {
        type     = "tcp"
        interval = "10s"
        timeout  = "2s"
      }
    }

    task "api-server" {
      driver = "docker"
      config {
        image = "hashicorp/counting-service:0.0.2"
        ports = ["http"]
      }
    }
  }

  group "dashboard" {
    count = 1

    network {
      port "http" {
        static = 9002 // For easy access in our browser
      }
    }

    service {
      name     = "dashboard-service"
      port     = "http"
      provider = "nomad"
    }

    task "dashboard-ui" {
      driver = "docker"
      config {
        image = "hashicorp/dashboard-service:0.0.4"
        ports = ["http"]
      }
      
      // Inject the API address as an environment variable
      env {
        COUNTING_SERVICE_URL = "http://${NOMAD_UPSTREAM_ADDR_api-service}"
      }
    }
  }
}

The key parts here are the service blocks and the template block in the dashboard task.

  1. We added a service block with provider = "nomad" to each group. This registers the service in Nomad’s built-in catalog (available since Nomad 1.3) under the name api-service, with no Consul required.
  2. In the dashboard task, a template block uses the nomadService function to look up a healthy api-service instance. Nomad renders that instance’s ip:port into COUNTING_SERVICE_URL and exports it as an environment variable before the container starts.
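You can also inspect the catalog directly from the CLI:

# List all services registered in Nomad's native catalog
nomad service list

# Show the address and port of each api-service instance
nomad service info api-service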

Run the job:

nomad job run app.nomad.hcl

Once the allocations are running, you can access the dashboard in your browser at http://localhost:9002. You will see that it has successfully connected to the counting API service, all without you having to configure any IP addresses or ports manually. This seamless discovery is a huge simplification for microservice architectures.
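If you prefer the terminal, a quick request confirms the dashboard is responding:

# A HEAD request to the dashboard's static port
curl -I http://localhost:9002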

Managing Secrets with Vault

Another common challenge is managing secrets like API keys and database passwords. You should never hardcode them in your job files. Nomad integrates directly with HashiCorp Vault, a tool for managing secrets.

While a full Vault setup is beyond the scope of this post, the integration lets you define a template block in your task. This template can fetch secrets from Vault and write them to a file inside your container just before your application starts. Your application can then read the secrets from the file, never knowing they came from Vault. This provides a secure and clean way to manage credentials.
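To give a flavor of the integration, here is a minimal sketch of such a template block. The secret path, field name, and Vault policy are all hypothetical, and it assumes your cluster already has the Vault integration configured:

// Inside a task block; a minimal sketch, not a complete job file.
vault {
  policies = ["myapp-read"] // Hypothetical Vault policy with read access
}

template {
  // Pull a hypothetical secret from Vault's KV v2 engine and render it to a file
  data        = <<EOF
DB_PASSWORD={{ with secret "secret/data/myapp" }}{{ .Data.data.db_password }}{{ end }}
EOF
  destination = "secrets/app.env"
  env         = true // Export the rendered key/value pairs as environment variables
}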

Why Choose Nomad?

Kubernetes is an amazing platform, but its power comes with significant complexity. Nomad offers a compelling alternative, especially if your needs are focused on application scheduling.

  • Simplicity Wins: A single binary for both server and client makes installation and management much easier. The concepts are more straightforward, allowing small teams to become productive quickly.
  • Architectural Flexibility: Nomad is not just for Docker. It has drivers for running standalone binaries, Java applications, and even virtual machines with QEMU (see the sketch after this list). This allows you to manage a diverse set of workloads on one platform.
  • Federation First: Nomad was designed from the ground up to support multi-region and multi-cloud deployments. Connecting clusters is a native feature, not an add-on.
  • Focus on Scheduling: Nomad focuses on one thing and does it exceptionally well: scheduling applications. It leaves other concerns like storage and advanced networking to be integrated with best-of-breed tools, giving you more choice.
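As a taste of that flexibility, here is a minimal sketch of a non-Docker task using the exec driver; the binary path and arguments are hypothetical:

// A task block using the exec driver instead of Docker
task "worker" {
  driver = "exec"

  config {
    command = "/usr/local/bin/my-worker" // Hypothetical binary present on the client
    args    = ["-queue", "jobs"]
  }

  resources {
    cpu    = 100 // MHz
    memory = 64  // MB
  }
}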

Where to Go From Here

We’ve only scratched the surface, but you can already see how Nomad simplifies the process of deploying and connecting containerized applications. It hits a sweet spot, providing powerful scheduling and service discovery without the operational burden of a more complex system.

If you’re managing a growing number of microservices and feeling the pain of manual deployments, give Nomad a try. Your next steps could be to explore the Nomad web UI, try deploying a stateful application with a volume, or dig deeper into the integration with Vault for secrets management. You might find it’s the perfect orchestrator for your team.