Regarding idempotence, there are two schools of thought :
- idempotence is paramount
- we don't care much about idempotence (i.e. every playbook execution causes changes)

To run only a subset of a role's tasks, use include_role + tasks_from, or include_tasks.

Some consider include to be a goto (i.e. : ugly and cheating), and argue that block and loop can replace it and make the whole include unnecessary.

includes
in general (this is not Ansible-specific), there are different approaches :
- a file deserves being include'd only if it's include'd at least twice. Otherwise : inline it.
- include'ing during execution :

    include configureDatabase
    include addDbUsers

  is explicit enough and doesn't require opening extra files.
Compare :

include includedFile

includedFile :
    if condition
        do things

with :

if condition
    include includedFile

includedFile :
    do things

In the second version, the include is processed only if condition is true, not every time.
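In Ansible terms, a minimal sketch of the second version (the file name configureDatabase.yml and the dbIsWanted variable are assumptions for the example) :

- name: configure the database only when requested
  include_tasks: configureDatabase.yml
  when: dbIsWanted | default(false)

include_tasks being dynamic, configureDatabase.yml is not even loaded when the condition is false.

Also, don't describe tasks with a comment :

# I explain what the code below does
- moduleName :
    arg1: foo
    arg2: bar

instead of :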
- name: "Description of what's happening below"
  moduleName :
    arg1: foo
    arg2: bar

⇒ use name: , this is what it's for.
Variables declared in vars have a higher precedence than those from defaults.
Many articles describe the purpose of these directories explaining that they are :
- defaults : variables that users of the role may want to override
- vars : variables internal to the role, not meant to be overridden

IMHO, this is not completely wrong, but not totally right either. It makes sense to discriminate variables on criteria like those above, but the Ansible documentation does not enforce such rules. Regarding the vars and defaults directories, it's only about variable precedence.
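A quick illustration, assuming a role named myRole :

# roles/myRole/defaults/main.yml
myVariable: 'from defaults'

# roles/myRole/vars/main.yml
myVariable: 'from vars'

- name: show which value wins
  debug:
    var: myVariable

This displays from vars : the value coming from vars shadows the one from defaults.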
Handlers, triggered with the notify directive, implement this event-driven behavior.
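A minimal sketch (the task names and files are assumptions) :

tasks:
  - name: deploy website conf
    copy:
      src: test.conf
      dest: /etc/apache2/sites-available/
    notify: reload Apache

handlers:
  - name: reload Apache
    service:
      name: apache2
      state: reloaded

The handler runs at most once, at the end of the play, and only if the notifying task reported a change.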
This helper script creates the skeleton of a new role :

#!/usr/bin/env bash

######################################## makeAnsibleRoleSkeleton.sh ########################################
# Create the directory structure and some files for an Ansible role
############################################################################################################

rolesDirectory='/opt/ansible/roles'
newRoleName=$1

[ -z "$newRoleName" ] && {
    echo "Usage : $0 <new role name>"
    exit 1
    }

mkdir -p "$rolesDirectory/$newRoleName/"{tasks,handlers,templates,files,vars,meta}
echo '---' | tee "$rolesDirectory/$newRoleName/"{tasks,handlers,vars,meta}/main.yml > /dev/null
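For example (the role name is an assumption) :

./makeAnsibleRoleSkeleton.sh apache

creates /opt/ansible/roles/apache/ with the tasks, handlers, templates, files, vars and meta subdirectories, plus a main.yml containing only --- in tasks, handlers, vars and meta.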
As seen in the Introduction to Ansible article, Ansible can be used to perform ad-hoc tasks. But it can also execute procedures called playbooks.
Playbooks :

---
# this is my 1st playbook
- hosts: groupSlaves
tasks:
- name: test connection
ping:
groupSlaves is the group name defined in the inventory file.
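The playbook is executed with ansible-playbook (the file name below is an assumption) :

ansible-playbook myFirstPlaybook.yml

which outputs :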
PLAY [groupSlaves] *****************************************************************

GATHERING FACTS ***************************************************************
ok: [192.168.105.80]
ok: [192.168.105.114]

TASK: [test connection] *******************************************************
ok: [192.168.105.80]
ok: [192.168.105.114]

PLAY RECAP ********************************************************************
192.168.105.114            : ok=2    changed=0    unreachable=0    failed=0
192.168.105.80             : ok=2    changed=0    unreachable=0    failed=0
In the output, tasks are identified by their name lines.
---
# playbook_web.yml

- hosts: myGroup1:myGroup2
  vars:
    apacheUser: 'www-data'
    apacheGroup: 'www-data'
    documentRoot: '/var/www/test/'    # final '/' expected
    websiteLocalPath: '/root/'
    websiteConfigFile: 'test.conf'
  tasks:
    - name: install Apache
      apt: name=apache2 state=present
    - name: disable default Apache website
      shell: a2dissite default
    - name: define Apache FQDN
      shell: echo "ServerName localhost" > /etc/apache2/conf.d/fqdn
    - name: create docRoot
      file: state=directory path={{ documentRoot }} owner={{ apacheUser }} group={{ apacheGroup }}
    - name: deploy website
      copy: src={{ websiteLocalPath }}index.html dest={{ documentRoot }} owner={{ apacheUser }} group={{ apacheGroup }}
    - name: deploy website conf
      copy: src={{ websiteLocalPath }}{{ websiteConfigFile }} dest=/etc/apache2/sites-available/
    - name: enable website
      shell: a2ensite {{ websiteConfigFile }}
    - name: reload Apache
      service: name=apache2 state=reloaded enabled=yes

- hosts: 127.0.0.1
  connection: local
  tasks:
    - name: check everything is ok on webserver 'myGroup1'
      shell: wget -S -O - -Y off http://192.168.105.114/index.html
    - name: check everything is ok on webserver 'myGroup2'
      shell: wget -S -O - -Y off http://192.168.105.80/index.html
Let's say you just deployed a new web server and a web application. Wouldn't it be great to run some checks at the end of the playbook, just to make sure everything responds as expected ? To do so, run commands from the master host by using this as the last play of your playbook :
- hosts: 127.0.0.1
  connection: local
  tasks:
    - name: make sure blah is blah.
      shell: 'myCheckCommand'
Testing with myCheckCommand being true, execution of this specific play returns :
PLAY [127.0.0.1] **************************************************************

GATHERING FACTS ****************************************************************
ok: [127.0.0.1]

TASK: [make sure blah is blah.] ***********************************************
changed: [127.0.0.1]

PLAY RECAP *********************************************************************
127.0.0.1                  : ok=2    changed=1    unreachable=0    failed=0
Now with false, output becomes :
PLAY [127.0.0.1] **************************************************************

GATHERING FACTS ****************************************************************
ok: [127.0.0.1]

TASK: [make sure blah is blah.] ***********************************************
failed: [127.0.0.1] => {"changed": true, "cmd": "false", "delta": "0:00:00.015746", "end": "2014-10-16 15:11:49.153606", "rc": 1, "start": "2014-10-16 15:11:49.137860"}

FATAL: all hosts have already failed -- aborting

PLAY RECAP *********************************************************************
           to retry, use: --limit @/root/fileNameOfMyPlaybook.retry

127.0.0.1                  : ok=1    changed=0    unreachable=0    failed=1
- hosts: webservers
  roles:
    - role_X
    - role_Y
- hosts: webservers
  tasks:
    - import_role:
        name: role_X
    - include_role:
        name: role_Y
Pick import_role or include_role depending on whether a static or a dynamic import should be performed.

Variables and tags can be passed to a role in several ways. The inline form :

- hosts: webservers
  roles:
    - { role: role_X, myVariable: "42", tags: "tag1, tag2" }
- hosts: webservers
  roles:
    - role: role_X
      vars:
        myVariable: "42"
      tags:
        - tag1
        - tag2
- hosts: webservers
  tasks:
    - import_role:
        name: role_X
      vars:
        myVariable: "42"
      tags:
        - tag1
        - tag2
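As a sketch of the dynamic flavor, include_role can be gated by a condition (the deployRoleX variable is an assumption) ; the role is then only loaded when the condition holds :

- hosts: webservers
  tasks:
    - include_role:
        name: role_X
      when: deployRoleX | default(false)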
Slaves can belong to several groups (in addition to the default group all that has everybody) :

[groupSlaves]                            ← group name
slave1 ansible_host=192.168.105.114      ← details
slave2 ansible_host=192.168.105.80

[myGroup1]
slave1

[myGroup2]
slave2
all includes all slaves. There is also a default group named ungrouped. The logic behind Ansible is that all slaves must belong to at least 2 groups : all and "another one". If there is no such "other one", ungrouped will be that one.

Ansible is installed on a master host to rule them all. There's nothing to install on slaves (except SSH keys).
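Group membership can be checked from the master host with the --list-hosts flag :

ansible groupSlaves --list-hosts
ansible ungrouped --list-hosts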
Host slave1
    hostname 192.168.105.114
    user root
    IdentityFile ~/.ssh/id_rsa

Host slave2
    hostname 192.168.105.80
    user root
    IdentityFile ~/.ssh/id_rsa
192.168.105.114    # slave1
192.168.105.80     # slave2
192.168.105.114 | success >> {
    "changed": false,
    "ping": "pong"
}

192.168.105.80 | success >> {
    "changed": false,
    "ping": "pong"
}
192.168.105.80 | success >> {
    "ansible_facts": {
        "ansible_processor": [
            "Intel(R) Core(TM)2 Duo CPU E8400 @ 3.00GHz"
        ],
        "ansible_processor_cores": 1,
        "ansible_processor_count": 1,
        "ansible_processor_threads_per_core": 1,
        "ansible_processor_vcpus": 1
    },
    "changed": false
}
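The facts above are limited to the processor ones, which suggests they were filtered ; a sketch using the setup module's filter parameter :

ansible all -m setup -a 'filter=ansible_processor*'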
192.168.105.114 | success | rc=0 >>
ansibleSlave

This is for basic commands (single binary, no options).
192.168.105.114 | success | rc=0 >>
ansibleMaster
192.168.105.114 | success | rc=0 >>
$(hostname)
192.168.105.114 | success | rc=0 >>
ansibleSlave
192.168.105.80 | success | rc=0 >>
ansibleSlave2

192.168.105.114 | success | rc=0 >>
ansibleSlave
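The results above illustrate the classic quoting pitfall ; the commands that produced them were presumably along these lines (sketches, the originals aren't shown) :

ansible all -a "echo $(hostname)"            # double quotes : $(hostname) expands on the master → ansibleMaster
ansible all -a 'echo $(hostname)'            # command module, no shell on the slave → literal $(hostname)
ansible all -m shell -a 'echo $(hostname)'   # shell module : expansion happens on the slave → ansibleSlave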
ansible all -m copy -a "src=/home/test.txt dest=/home/"
Ansible can query its slaves about software packages using some dedicated modules :
Possible values : installed, latest, removed, absent, present.
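The outputs below presumably come from a command such as (the package name is an assumption) :

ansible all -m apt -a "name=apache2 state=present"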
192.168.105.114 | success >> {
    "changed": false
}

192.168.105.80 | success >> {
    "changed": false
}

If the specified package was not already installed, this will install it. The FULL command output (install procedure) will be reported by Ansible.
192.168.105.80 | success >> {
    "changed": false
}

192.168.105.114 | success >> {
    "changed": false
}
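User accounts are managed with the user module ; the creation output below likely results from a command like :

ansible all -m user -a "name=bob state=present"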
192.168.105.114 | success >> {
    "changed": true,
    "comment": "",
    "createhome": true,
    "group": 1001,
    "home": "/home/bob",
    "name": "bob",
    "shell": "/bin/sh",
    "state": "present",
    "system": false,
    "uid": 1001
}

And if I run the same command again, whereas Bob's account already exists :
192.168.105.114 | success >> {
    "append": false,
    "changed": false,
    "comment": "",
    "group": 1001,
    "home": "/home/bob",
    "move_home": false,
    "name": "bob",
    "shell": "/bin/sh",
    "state": "present",
    "uid": 1001
}
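Deleting the account, presumably via :

ansible all -m user -a "name=bob state=absent remove=yes"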
192.168.105.114 | success >> {
    "changed": true,
    "force": false,
    "name": "bob",
    "remove": true,
    "state": "absent",
    "stderr": "userdel: bob mail spool (/var/mail/bob) not found\n"
}

Running the same command again (no user account named Bob anymore) :
192.168.105.114 | success >> {
    "changed": false,
    "name": "bob",
    "state": "absent"
}

remove=yes instructs Ansible to delete the homedir as well.
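The account list below can be obtained with something like (the exact command isn't shown here) :

ansible all -m shell -a "cut -d: -f1 /etc/passwd"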
192.168.105.114 | success | rc=0 >>
root
daemon
bin
nobody
libuuid
messagebus
bob
Start a service : ansible all -m service -a "name=ssh state=started"
192.168.105.114 | success >> {
    "changed": false,
    "name": "ssh",
    "state": "started"
}

192.168.105.80 | success >> {
    "changed": false,
    "name": "ssh",
    "state": "started"
}

Accepted states : started, stopped, restarted, reloaded.