Using Ansible (Part II)
Continuing where we left off in the previous post, let’s look at how we can start to organize the project a bit more and do some things like add configuration files. To see where we end up, check out the part-ii branch of the companion resources. Since we’re probably going to be destroying and rebuilding the VMs fairly often, we’ll add an Ansible configuration file, ansible.cfg, and turn off host key checking. This is definitely a security risk, so you wouldn’t want to do this normally, but while playing around with Ansible early on it can be convenient.
$ git checkout part-ii
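As a rough sketch of the configuration described above (the branch has the actual file, which may include other settings), disabling host key checking only takes a couple of lines:

ansible.cfg

[defaults]
host_key_checking = False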
Modifying The Playbook
For very simple things you might want to just keep all of the plays (the sets of tasks to run) in a single file, but that will quickly become too large to manage and probably isn’t the way you’ll want to go. Let’s start by breaking out the sections we had previously into their own playbooks. You’ll notice we’re going to mostly follow the Ansible best practices.
Playbooks can include other playbooks, and it can be useful to break them out this way so you can run specific parts of your environment rather than the entire thing. Let’s start by moving each section into its own playbook and including them all in a “master” playbook. This will be the playbook we can run to do everything in one big run.
Each section we had before will now go into its own respective playbook; we haven’t changed anything about them yet. (This code isn’t in the branch but is a step to get us there.)
common.yml
---
- hosts: all
  sudo: yes
  tasks:
    - name: update apt cache
      apt: update_cache=yes
web.yml
---
- hosts: web
  sudo: yes
  tasks:
    - name: install nginx
      apt: name=nginx state=present
db.yml
---
- hosts: db
  sudo: yes
  tasks:
    - name: install mariadb
      apt: name={{ item }} state=present
      with_items:
        - mariadb-server
        - mariadb-client
master.yml
---
- include: common.yml
- include: web.yml
- include: db.yml
While we’re at it, let’s move the inventory file somewhere else and give it a better name. We’ll create an inventories directory and put it there as vagrant. Now if we decide to run this somewhere else, like AWS or OpenStack, we could simply swap out the inventory for another one.
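Purely as an illustration (the hostnames and addresses here are assumptions that depend on your Vagrantfile, so check the branch for the real file), inventories/vagrant could look something like this:

inventories/vagrant

[web]
web0 ansible_ssh_host=192.168.33.10

[db]
db0 ansible_ssh_host=192.168.33.20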
Now to run our environment we could run the same Ansible command as we did before but provide the new master playbook instead.
$ ansible-playbook -i inventories/vagrant -u vagrant -k master.yml
Or if we just wanted to run the database part of our environment we could provide the database playbook instead.
$ ansible-playbook -i inventories/vagrant -u vagrant -k db.yml
Roles
Ansible has the concept of roles, which let you organize things by convention and separate them out nicely for reuse and composition. How you split up your roles is up to you, but in general you want to keep each one fairly focused on a single responsibility.
You can check out Ansible Galaxy for a large variety of existing roles covering many things you might want to do. But building your own is pretty straightforward too.
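For example, installing a community role from Galaxy is a one-liner; the role name below is only a placeholder, not something this project uses:

$ ansible-galaxy install username.rolename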
Roles live under the roles directory. At a minimum you’ll need a tasks directory in the role, which is where we’ll move the tasks from each of our playbooks. Then we’ll reference the role in the playbook instead.
By convention the default YAML file Ansible will look for is main.yml. You’ll see that’s the same for the other directories under the role as well, so you’ll see a lot of main.yml files in your project tree.
So we’ll move each playbook’s tasks section into its own main.yml file under the respective role’s tasks directory.
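To stay oriented, here’s roughly what the project tree ends up looking like (only the pieces we’ve discussed so far; the branch may contain a few more files):

ansible.cfg
master.yml
common.yml
web.yml
db.yml
inventories/
    vagrant
roles/
    common/
        tasks/
            main.yml
    db/
        tasks/
            main.yml
    web/
        tasks/
            main.yml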
For the web playbook it would look like this:
web.yml
---
- hosts: web
  sudo: yes
  roles:
    - web
This tells Ansible to look in the roles directory for our web role (matching directory name), and it will automatically find the main.yml under the tasks directory. In that main.yml, all we need now are the tasks to run.
roles/web/tasks/main.yml
---
- name: install nginx
  apt: name=nginx state=present
And we can still run the master playbook and it will be equivalent to our original all-in-one playbook.
$ ansible-playbook -i inventories/vagrant -u vagrant -k master.yml
Variables
Now that we have things organized a bit better it’s not really useful to just have Ansible install packages. That’s only part of what it’s good for. So let’s configure something in one of the roles and make use of Ansible’s ability to use variables.
Variables can be defined in multiple places in Ansible. Within a role, you’ll probably use defaults/main.yml. Variables defined in defaults/main.yml can be overridden by variables defined higher up in the order of precedence, which can be useful for things that may need to change per environment. Other common places for variables are the project’s group_vars and host_vars directories. Ansible also supports vault files where you can store sensitive variables.
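For instance, a vault-encrypted variables file can be created with the ansible-vault command and decrypted at run time by passing --ask-vault-pass to ansible-playbook; the file name here is just an example:

$ ansible-vault create secrets.yml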
In the common role, let’s install the NTP service. It’s pretty straightforward. All we’ll need to do is add the following to the tasks/main.yml in the role.
- name: install ntp service
  apt: name=ntp state=present

- name: ensure ntp service is running
  service: name=ntp state=started enabled=yes
This will make sure the ntp package is installed and also that the ntp service is running. By adding the enabled option we can also ensure that the service will start if the machine is rebooted.
OK, so that installs the NTP service but what if we want to point the pool somewhere else? Let’s add in a list of the servers for the ntp pool. In the common role’s defaults/main.yml we’ll add the following:
ntp_fallback_server: ntp.ubuntu.com
ntp_pool_servers:
  - 0.us.pool.ntp.org
  - 1.us.pool.ntp.org
  - 2.us.pool.ntp.org
  - 3.us.pool.ntp.org
What that does is create two variables that we can use in our common role. These can be used in tasks or in templates, which we’ll get to next.
If you’re writing a role that can be used by multiple groups, you could use the project’s group_vars directory to create a file that matches the group name, and that variable will then be used for all of the servers in that group. (e.g. group_vars/web.yml)
If you need values to vary by host, Ansible also provides the host_vars directory; place a file there that matches a hostname in the inventory, and the tasks and templates will use that value for that specific server. (e.g. host_vars/web0.yml)
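As a hypothetical example (not something in the part-ii branch), overriding the pool list for just the web group could look like this:

group_vars/web.yml

---
ntp_pool_servers:
  - 0.north-america.pool.ntp.org
  - 1.north-america.pool.ntp.org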
Files And Templates
Ansible can manage files and templates for you to deploy as well. Any files you want to push to the machines, for things like configurations, can be placed in the role’s files directory. Use the copy module to put files somewhere on the host, or the template module for files that need customization based on configuration values you’ve defined, like our NTP servers. Template files are Jinja2 files and are placed under the role’s templates directory. In the part-ii branch you can see we’ve added a template for the ntp.conf file that will override the existing one the package creates.
In the ntp.conf.j2 file you can see we’ve used a Jinja2 loop to output the list of servers for the pool.
roles/common/templates/ntp.conf.j2
...
{% for ntp_pool_server in ntp_pool_servers %}
server {{ ntp_pool_server }}
{% endfor %}
server {{ ntp_fallback_server }}
...
We also used the fallback server variable on its own.
There are Jinja2 filters you can use to manipulate variables in various ways too.
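For example, the default filter supplies a fallback when a variable isn’t defined, and join collapses a list into a single string; these lines are just illustrations and aren’t in the template in the branch:

server {{ ntp_fallback_server | default('ntp.ubuntu.com') }}
# pool servers on one line: {{ ntp_pool_servers | join(' ') }}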
In the common tasks we’ve added the template module after we install the ntp service.
- name: configure ntp
  template: src=ntp.conf.j2 dest=/etc/ntp.conf
  notify: restart ntp service
Handlers
Lastly, Ansible has handlers that can be triggered based on changes. Handlers are placed in the role’s handlers/main.yml and look just like what gets put in tasks/main.yml, but they only get called via the notify option of a task.
roles/common/handlers/main.yml
---
- name: restart ntp service
  service: name=ntp state=restarted
If you notice, in the previous section there’s a notify parameter on the configuration task. That signals Ansible to run the corresponding handler (matching the name) at the end of the play. Even if several tasks notify the same handler, it only runs once, so making multiple configuration changes won’t keep restarting the service; that’s exactly the sort of thing handlers are useful for.
That should be quite a few of the basics to get you started.