Using Ansible (Part I)

This is hopefully going to be a series on using Ansible to help automate your infrastructure. It will start with the basics and then move on to more complicated scenarios.

What Is Ansible?

Ansible is a configuration management and deployment tool similar to Chef, Puppet, or Salt. If you’ve ever written scripts to bootstrap your servers after they’re spun up, Ansible will probably feel very familiar.

The Ansible website has some very good documentation on installation steps and an overview of what Ansible is.

Ansible runs modules over SSH (by default); the modules are executed on the remote machine and cleaned up when completed. The bundled modules are written in Python, but modules can be written in almost any language that can run on the remote machine. Ansible uses YAML files to define a series of steps and executes them against a defined set of resources. These files are called playbooks, and a typical project will have multiple playbooks to handle specific parts of provisioning, configuration, deployment, etc. Ansible also allows for organization through roles, which help with structuring and reusing things. Playbooks can include other playbooks, tasks, or roles, allowing a more complex setup to be composed from smaller pieces.
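As a small illustration of that composition, a top-level playbook might do nothing but include others. (The file names here are hypothetical; the include syntax matches the classic Ansible style used in this series.)

```yaml
# site.yml -- a top-level playbook composed of smaller ones
# (webservers.yml and dbservers.yml are hypothetical file names)
- include: webservers.yml
- include: dbservers.yml
```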

Ansible also has the ability to run ad-hoc commands against a set of resources which can be quite useful for troubleshooting multiple servers in a distributed architecture.
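For example, with an inventory file like the one we'll build below, ad-hoc commands might look something like this (these are sketches; they assume the vagrant user and the inventory file used later in this post):

```shell
# Ping every host in the inventory over SSH
$ ansible all -i inventory -m ping -u vagrant -k

# Check free disk space on just the web group
$ ansible web -i inventory -m command -a "df -h" -u vagrant -k
```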

And since Ansible is essentially just executing a series of steps, you can use some of the modules to help with testing and other tasks that might not typically come to mind. Later in the series we’ll try to explore some of that.

Getting Started

It’s going to be assumed that you already have Ansible installed, and for the purposes of this series we’ll be rolling with a Vagrant/VirtualBox setup to play around with things. The Vagrantfile will be included as a resource, so you should just be able to clone the GitHub repository and follow along.

Resources are available on GitHub; check out the appropriate branches and follow along. To start with, check out the part-i branch.

$ git clone
$ cd ansible-series
$ git checkout part-i

That should set us up for now.

Vagrant Up

To start with, we’ll need some servers. Ansible can handle provisioning, but here we’ll defer that to Vagrant and use Ansible just for configuration. Vagrant can also run Ansible as a provisioner, but for the purposes of this series we’ll be running Ansible manually.

So to start, let’s spin up a web server and a db server.

$ vagrant up

After a moment we should have two stock Ubuntu VMs.
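The repository’s Vagrantfile defines those two machines; a minimal sketch might look something like this (the box name and private IPs are assumptions for illustration, and the actual file in the repository may differ):

```ruby
# Vagrantfile -- two stock Ubuntu VMs on a private network
# (box name and IP addresses are illustrative assumptions)
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"

  config.vm.define "web0" do |web|
    web.vm.network "private_network", ip: "192.168.33.10"
  end

  config.vm.define "db0" do |db|
    db.vm.network "private_network", ip: "192.168.33.11"
  end
end
```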

The Inventory File

Ansible uses inventory files to provide a list of resources to run the playbooks against. You may have a single inventory file or multiple, depending on how you set up your playbooks. In the inventory file we’ll be using, we’ll specify the two servers and create two additional groups to put them in: a web group and a db group. This will allow us to restrict which plays get run based on the type of machine we need to configure.

Because we’re using aliases for the hosts, we’ll need to specify an additional variable, ansible_ssh_host, for each of them to tell Ansible how to connect over SSH. We can actually add other variables for each host here, but there’s a better way to do that using host_vars that we’ll get to in another part of the series.


web0    ansible_ssh_host=
db0     ansible_ssh_host=

[web]
web0

[db]
db0
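As a preview of per-host variables, other connection settings can sit on the same line as the alias; for example (the values here are purely illustrative):

```
web0    ansible_ssh_host=192.168.33.10    ansible_ssh_user=vagrant
```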



The Playbook

This one is going to be pretty light. All of the tasks will be in one file.


- hosts: all
  sudo: yes
  tasks:
    - name: update apt cache
      apt: update_cache=yes

- hosts: web
  sudo: yes
  tasks:
    - name: install nginx
      apt: name=nginx state=present

- hosts: db
  sudo: yes
  tasks:
    - name: install mariadb
      apt: name={{ item }} state=present
      with_items:
        - mariadb-server
        - mariadb-client

There are three sections in the playbook.

The first one updates the apt cache on all of the machines in the inventory. This is useful for common tasks that need to be done on every machine. The hosts property defines which group of resources to run the tasks on. The sudo property is set since the tasks will need to run with sudo permission. It’s worth noting that ansible-playbook has a --sudo flag that would make setting sudo in each of these sections unnecessary; it’s set here just to point out the option.

The second section installs Nginx on each host defined in the web group, which for now is just one, but if there were multiple hosts it would install Nginx on all of them (in parallel). Here we’re using the apt module provided by Ansible, which handles package management through apt. Two options are provided: the name of the package and the state we expect, present. There are other states like absent or latest, but present simply makes sure the package is installed and is the default. Which state you choose will likely be determined by your deployment process, e.g. whether you keep servers around or replace them.
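For instance, swapping the state keeps the package at the newest available version instead of merely ensuring it exists (same apt module, different state; this variant isn’t part of the playbook above):

```yaml
- name: ensure nginx is at the latest version
  apt: name=nginx state=latest update_cache=yes
```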

The third section installs MariaDB. In this case we’re still using the apt module, but this time we loop over a list of packages with with_items to install more than one.

Run The Playbook

After the VMs have spun up we can run the Ansible playbook to configure them.

$ ansible-playbook -i inventory -u vagrant -k playbook.yml

Let’s look at that command for just a moment.

ansible-playbook is the command we’ll use to run the playbooks. The -i flag specifies which inventory file to use. We’ll also need to provide a username to connect to the servers with, since we’re using SSH to connect and run the modules; the -u flag sets the username from the command line, but there are other ways to set this per host or group that we’ll see later. The -k flag tells Ansible to prompt for a password. In other environments you’ll likely be using key-based authentication, but passwords are fine here. The default password for Vagrant VMs is “vagrant”, in case you’re about to go search for it. And finally we provide the playbook file to run.
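The same command also accepts a --limit flag, which restricts a run to a subset of the inventory. This isn’t needed for our two VMs, but it becomes handy as the inventory grows:

```shell
# Run the playbook only against the hosts in the web group
$ ansible-playbook -i inventory -u vagrant -k playbook.yml --limit web
```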

The first time you run the playbook you’ll see some output as Ansible works through it. If all goes well you’ll see a few OKs and 1 changed for each of the hosts in the summary at the end. An OK indicates Ansible didn’t have to do anything during execution of the module, while changed indicates it had to modify the state of the machine, in this case installing a package. If you run the playbook again with the same command, you should see all OK and no changes.
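If you’d like to see what a run would change without actually touching the machines, Ansible also supports a dry-run mode via the --check flag:

```shell
# Report what would change, but don't modify the VMs
$ ansible-playbook -i inventory -u vagrant -k playbook.yml --check
```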
