Configuring Service Discovery with Consul in a Hybrid Container Environment

Over the last year, we’ve been actively working on migrating our existing microservice infrastructure hosted in AWS to a container-based solution. In this case, we chose Rancher. If you haven’t given Rancher a try yet, I highly recommend it. The barrier to entry is much lower than something like Kubernetes or Mesosphere.

We currently use Consul as our service discovery mechanism in AWS. This allows us to rely on DNS to discover dependent services. Now that we’ve started planning the migration to our new cluster, we’ve needed to figure out how to integrate Consul with the Rancher environment. We want our un-migrated services to be able to discover and reach the services that have been migrated into the cluster.

The trick here is to run the Consul agent on all the cluster hosts and use Registrator to automatically create the Consul service entries when a container starts on that host. Since Rancher runs its own software-defined networking (SDN) for the cluster, the networking component of what gets exposed in Consul is a bit tricky.

Since I spent a solid day getting the settings Baby Bear’ed (that’s “Just Right”), I figured I should post them for other people looking to find them.

The Docker Compose configuration file sets up three components. When deployed in Rancher, this stack is configured as a Global service that will deploy on every host. Additionally, it’s configured as a single service with two sidekick services, which forces them all to run on the same host so that volume sharing works.
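The original Compose file isn’t reproduced here, but a minimal sketch of the stack’s overall shape might look like this (the image tags and service names are assumptions; the labels are Rancher’s standard ones for global scheduling and sidekicks):

```yaml
# Hypothetical sketch of the stack layout; image tags are assumptions.
version: '2'
services:
  consul:
    image: consul:latest
    labels:
      # Deploy one instance of this stack on every Rancher host
      io.rancher.scheduler.global: 'true'
      # Pin the sidekicks to the same host so volume sharing works
      io.rancher.sidekicks: consul-data,registrator
  consul-data:
    image: busybox
  registrator:
    image: gliderlabs/registrator:latest
```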

Let’s walk through the individual pieces of the stack.


The consul-data container simply provides a persistent Docker volume for storing Consul data on the host. This allows us to destroy the Consul container without losing any of the persistent data.
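A data-only container along these lines would do the job (the image choice and volume path are assumptions; `/consul/data` is the data directory the official Consul image uses):

```yaml
consul-data:
  image: busybox
  # Declare the directory Consul writes its state to; the consul
  # service mounts this via volumes_from, so the data survives
  # replacing the Consul container itself
  volumes:
    - /consul/data
  # Exit immediately; this container only exists to own the volume
  command: 'true'
```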


The consul container runs the Consul agent process and joins the host to the cluster. This service uses the official Consul image from Docker Hub (line 52). There are a number of configuration items for this container that allow the dockerized agent process to work correctly.

Starting at line 21, we are exposing a series of TCP/UDP ports. These are all the ports that the Consul agent uses to communicate with the cluster. Many of them are associated with the underlying Serf and Raft protocols. Additionally, 8500/tcp is the HTTP API for the agent and 8600/udp is the DNS interface.
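The exact list in the original file may differ, but the standard Consul agent ports break down roughly like this:

```yaml
# Standard Consul agent ports (sketch; verify against your Consul version)
ports:
  - '8300:8300/tcp'   # server RPC
  - '8301:8301/tcp'   # Serf LAN gossip
  - '8301:8301/udp'
  - '8302:8302/tcp'   # Serf WAN gossip
  - '8302:8302/udp'
  - '8500:8500/tcp'   # HTTP API
  - '8600:8600/udp'   # DNS interface
```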

Line 38 configures this container to use host networking. This allows the Consul agent to bind to the host machine’s network interface instead of the virtual Docker interface, so our Consul traffic can reach the rest of the Consul cluster that is not in Rancher.

Starting at line 40, we configure the runtime options for the agent:

  • agent: tells Consul to run in the agent mode (not a server)
  • configures the agent to attempt to join the Consul servers at the specified hostname. You will need to configure this to the appropriate resolvable DNS entry or IP address for your Consul servers.
  • -recursor= configures the Consul DNS interface to forward requests it can’t answer to the Rancher DNS server. If you are using this outside of Rancher, you’ll want to configure this to the appropriate IP address, or remove it.
  • -client= configures Consul to bind its client interfaces to all host network interfaces. This allows the HTTP API to be reachable from containers running on the host as well as other machines in the cluster.
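Put together, the agent’s command line might look something like the sketch below. The `-retry-join` flag name, the join hostname, and the recursor IP are all assumptions you’d replace with your own values (169.254.169.250 was the conventional Rancher DNS address at the time):

```yaml
# Hypothetical agent invocation; flag values are placeholders
network_mode: host
command: >
  agent
  -retry-join=consul.example.internal
  -recursor=169.254.169.250
  -client=0.0.0.0
```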


Additionally, we configure a number of environment variables at line 46. CONSUL_LOCAL_CONFIG passes a JSON configuration snippet that tells the agent to leave the Consul cluster gracefully on exit, and CONSUL_BIND_INTERFACE configures the agent to bind to eth0 for the Raft/Serf protocols.
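Both variables are supported by the official Consul image; a sketch of that environment block (the exact JSON in the original may differ):

```yaml
environment:
  # Leave the cluster cleanly when the container stops, so the node
  # doesn't linger in a failed state
  CONSUL_LOCAL_CONFIG: '{"leave_on_terminate": true}'
  # Bind Raft/Serf traffic to the host's primary interface
  CONSUL_BIND_INTERFACE: eth0
```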

Finally at line 49, we mount our data volumes from the consul-data container.


This service is responsible for automatically discovering and registering the other Docker containers that are launched on this host. The configuration is fairly standard: at line 10 we configure the backend for Registrator to be Consul, reachable on the same machine at port 8500 (the HTTP API for the Consul agent). Again, we configure this container to run in host networking mode (line 19), and at line 16 we map the Docker daemon socket into the container so that Registrator can query Docker.
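A sketch of that Registrator service, following the project’s documented pattern of mounting the Docker socket and passing the registry URI as the command:

```yaml
registrator:
  image: gliderlabs/registrator:latest
  network_mode: host
  volumes:
    # Let Registrator watch the Docker daemon for container events
    - /var/run/docker.sock:/tmp/docker.sock
  # Point Registrator at the local Consul agent's HTTP API
  command: consul://localhost:8500
```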

With this configuration, we’ll see the following behavior:

  • Each host joins the Consul cluster and registers its private IP address
  • When a container launches on the host
    • Registrator detects the container (assuming the proper labels are configured for Registrator and the ports are exposed)
    • Registrator calls Consul and creates a service entry
    • The service entry will specify the host’s IP address and the mapped port on the host


With this setup, un-migrated applications that look up services that have been migrated to the cluster will resolve the IP address of the host running the container, along with the ephemeral port mapped from the container to that host.
