Ansible - Simple IT Automation

"New platform" cheatsheet

Since the procedure is always the same, yet I regularly forget basic steps when starting to work on a new platform with Ansible, here's my cheatsheet :
  1. configure SSH properly :
  2. try a manual connection to SSH hosts :
    • to confirm the SSH configuration is fine
    • necessary if you have not disabled the host key checking
  3. build the inventory file
    • you can choose between INI and YAML formats
    • AFAIK, both work fine. INI is more readable IMHO.
    • they differ in the way they interpret values assigned to variables (details)
  4. test the connection to the slaves with Ansible :
    ansible --inventory=myInventoryFile all -m ping
    slave1 | SUCCESS => {
        "changed": false,
        "ping": "pong"
    }
    slave2 | SUCCESS => {
        "changed": false,
        "ping": "pong"
    }
    slave3 | SUCCESS => {
        "changed": false,
        "ping": "pong"
    }
  5. check you can connect to slaves and gather facts :
    ansible --inventory=myInventoryFile all -m setup
  6. launch a playbook carefully :
    ansible-playbook --inventory=myInventoryFile --limit=mysql myPlaybook.yml -u ansible -DC
  7. to be continued...
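As an illustration of step 3, a minimal INI inventory could look like this (hostnames and group names below are made up) :

```ini
# myInventoryFile
[webservers]
web1.example.com
web2.example.com

[mysql]
db1.example.com

# variables applying to a whole group
[mysql:vars]
ansible_user=ansible
```

ansible --inventory=myInventoryFile mysql -m ping would then target db1.example.com only.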

How to interrupt a playbook (i.e. something like --stop-at-task ?)

Situation :

Solution :

This will do the job perfectly :
- meta: end_play
The only drawback of this method is that it interrupts the playbook so nicely (no trace in the execution log) that people not aware of it (teammates or the "future you") will not realize the playbook was stopped on purpose before the end.

Alternate solution :

You may be interested in this :
- fail:
    msg: "Playbook stopped on purpose for whatever reason"
Unlike the previous solution, this will display explicit red error messages you can't ignore.
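Both methods can be made conditional with when, which gives a rough --stop-at-task equivalent (the variable name below is made up) :

```yaml
- fail:
    msg: "Playbook stopped on purpose before the deployment tasks"
  when: stop_before_deploy | default(false)
```

ansible-playbook myPlaybook.yml -e stop_before_deploy=true would then stop the playbook at that exact point.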

static vs dynamic

static :
  • made with import* directives (import_tasks, import_playbook, ...)
  • pre-processed during playbook parsing time
  • the tags and when directives of a task are copied to all its children
dynamic :
  • made with include* directives (include_tasks, include_role, ...)
  • processed during runtime, at the point where the task is encountered
  • the tags and when directives of a task apply to that task only and are not copied to its children
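A minimal sketch of the difference (file and tag names are made up) : with import_tasks the tag is copied to every task of common.yml, with include_tasks it applies to the include line only :

```yaml
- import_tasks: common.yml	# static : parsed before execution, tags / when are inherited
  tags: web

- include_tasks: common.yml	# dynamic : evaluated at runtime, the tag applies to this line only
  tags: web
```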

shell and command

Their differences :

  • shell is executed by /bin/sh on the remote node
  • command is NOT executed in a shell on the remote node, so shell-specific variables (like $HOME) and commands (>, |, &, ...) are NOT supported.
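A quick illustration of the difference (both tasks are sketches) :

```yaml
- command: echo $HOME		# '$HOME' is passed literally : no shell expands it
- shell: echo $HOME > /tmp/out	# /bin/sh expands $HOME and performs the redirection
```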

Subtleties they share :

Since both behave mostly the same way and accept the same hacks, I won't be repeating "shell or command" hereafter.

changed is always True
This is because Ansible has no mechanism for understanding whether the command run by shell actually changed anything (source).
To workaround this, you can instruct Ansible what to consider as a change with changed_when :
- name: check whether ...
  shell: someCommand
  register: myVariable
  changed_when: false			# never report this task as changed
  ignore_errors: true
- shell: someCommand
  register: myVariable
  changed_when: "myVariable.rc != 2"	# changed status depends on the return code
If you're running a shell command to decide whether or not to run a subsequent command (i.e. to guarantee idempotence), you _may_ have to use ignore_errors: true. This is required because such a check command reports its result via a success / failure return code, and hosts with failed tasks are removed from the targets of the playbook : Ansible must be instructed to ignore these "errors" and keep working on those hosts.
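The guard pattern described above can be sketched like this (the check command, marker file and install script are made up) :

```yaml
- name: check whether the app is already installed
  shell: test -e /opt/myApp/installed.flag
  register: appInstalled
  changed_when: false		# a read-only check never changes anything
  ignore_errors: true		# a non-zero return code must not remove the host from the play

- name: install the app
  shell: /opt/myApp/install.sh
  when: appInstalled.rc != 0
```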
shell warning : [WARNING]: Consider using yum module rather than running yum (source)
Ansible may warn when shell is used to perform an action that ought to be done via one of the numerous Ansible modules. This is correct most of the time : using built-in modules is cleaner and is the best way to achieve idempotence. But the warning is not always appropriate, since the suggested solution sometimes doesn't work (or doesn't exist (yet)). Regarding this yum example, the warning :
  • is perfectly legitimate when trying to install / remove / update packages : use yum instead of shell
  • falls flat if we're trying to run a command that is not (yet) supported by the yum module, such as listing repositories :
    shell: yum repolist enabled | grep "{{ redhat_repository_optional }}"
In the latter situation, there is no alternative to using shell, but we'd like to hide this warning anyway. To do so :
  • Ansible actually sends warnings based on a list of keywords following shell:, so let's fool it with which and the $() construct :
    shell: $(which yum) repolist enabled | grep "{{ redhat_repository_optional }}"
  • other solution :
    shell: yum repolist enabled | grep "{{ redhat_repository_optional }}" warn=no
    Does the job, but with poor readability
  • Better :
    shell: yum repolist enabled | grep "{{ redhat_repository_optional }}"
    args:
      warn: no

ansible and ansible-playbook CLI flags

These flags are common to the ansible and ansible-playbook commands.

Flag Usage
-a 'arguments' --args='arguments' Pass arguments to the module specified with -m. Syntax : -a 'arg1Name=arg1Value arg2Name=arg2Value'
-b --become Run operations with become
Does not imply password prompting, use -K.
If I get this correctly, -K _may_ not be necessary if commands requiring "sudo" privileges are configured with NOPASSWD. But that wouldn't be very safe (and I've not been able to make this work so far...). So use -K whenever -b is there.
-C --check Do not make any changes on the remote system, but test resources to see what might have changed.
This can not scan all possible resource types and is only a simulation.
-D --diff
  • When changing any templated files : show the unified diffs of how they changed.
  • When used with --check (i.e. when simulating) : show how the files would have changed.
-e --extra-vars Specify additional variables in key=value format or from a YAML / JSON file, with name prepended with @
-f n --forks=n Launch up to n parallel processes (forks)
-i inventory --inventory=inventory inventory can be :
  • a single host
  • a comma-separated list of hosts
  • the path to the inventory file (defaults to /etc/ansible/hosts)
-k --ask-pass Prompt for the connection password, if it is needed for the transport used (e.g. SSH). details
-K --ask-become-pass Ask for privilege escalation password (i.e. sudo password). details
-l pattern --limit=pattern limit the playbook execution to slaves matching pattern.
Let's imagine you have several kinds of slaves and want to alter the web servers only. pattern could be specified :
  • as a glob : web*
  • as a regular expression with a leading ~ : ~web[12]\.example\.com
-m moduleName --module-name=moduleName Execute module moduleName (module index)
-t tags --tags tags Only run plays and tasks tagged with these tags. See also : Using tags to slice playbooks
-u remoteUser --user remoteUser connect as remoteUser


As stated by its name, group_vars is for variables applying to one or more group of hosts.
Defining in group_vars variables that don't apply to groups is extremely misleading (although it _may_ work) and is discouraged as bad practice.

group_vars can either be :

a regular file
variables for all groups will be defined here.
This is fine unless this file gets really long, complex and barely readable.
a directory (example)
when there are many groups / variables / both, it becomes easier to have a structure like :
[root] playbook root directory
group_vars (directory)
group1.yml variables for the members of the group1 group
group2.yml variables for the members of the group2 group

i.e. : group_vars is a directory with :
  • 1 file per hostgroup
  • each file has variables for the corresponding group
  • each file is named after the group it applies to
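For instance, group_vars/group1.yml could contain (variable names below are made up) :

```yaml
---
# variables applying to every member of the 'group1' hostgroup
ntpServer: ntp.example.com
logLevel: info
```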

How to override /etc/ansible/ansible.cfg settings with personal values ?

Create ~/.ansible.cfg and replicate + override the required section / values :
[defaults]
host_key_checking	= False
inventory		= /home/stuart/ansible/hosts
roles_path		= /home/stuart/ansible/roles
vault_password_file	= /var/lib/ansible/.vault_password

Run commands on many hosts with the ansible CLI (aka Ad-Hoc Commands)

How it works :

To know more about the Red Hat registration status of the server1 and server2 hosts, it's always possible to run one-liners such as :
for server in server1 server2; do ssh -t "$server" 'sudo subscription-manager status'; done
But Ansible can do it, too :
  • ansible server1,server2 -m command -a 'subscription-manager status' -u root -k
  • ansible myGroupOfHosts -m service -a 'name=network state=started' -u root -k
Some restrictions apply, read "in-line" inventory below.

How to target hosts :

Examples below specify no inventory file, which is ok as long as its name can be determined implicitly (default name, default location, specified in a personal settings file, ...). Otherwise, it must be explicitly specified with -i :
Target → Syntax :
  • all hosts known to Ansible : ansible all
  • all hosts of a single group : ansible groupName
  • all hosts of several groups : ansible group1:group2:group3
    The colon : actually means a logical OR : the command above applies to any host belonging either to group1 or to group2 or to group3.
    When it comes to complex rules with intersections and exclusions (see examples), it may not be a REAL "logical OR"
  • a list of hosts : ansible host1,host2,host3
  • all hosts except those matching an expression :
    • ansible 'all:!expression*'
    • ansible 'groupName:!badHost'
  • all hosts of a group except several of them : ansible 'groupName:!~(host1|host2)'
    • ~ is used to introduce a regular expression
    • you get the idea to exclude as many hosts as necessary

"in-line" inventory (source) :

When running commands like :
  • ansible host1,host2
  • ansible myGroupOfHosts
we're not actually asking Ansible to do stuff on a list / a group of hosts. We're asking Ansible to work on its whole known inventory, restricted to whatever matches "host1,host2" or "myGroupOfHosts". This is like targeting hosts with :
ansible-playbook -i myInventoryFile -l pattern ...
Depending on :
  • whether myInventoryFile actually exists
  • where you are in your file tree (inventory file in the current directory, ...)
  • how pattern is built
chances are it'll fail on :
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
[WARNING]: Could not match supplied host pattern, ignoring: host1
[WARNING]: Could not match supplied host pattern, ignoring: host2
To specify hosts on-the-fly :
  • ansible all -i host1, -m setup
  • ansible all -i 'host1, host2' -m command -a 'df -h /var/lib/mysql'

Using modules :

How to use with_items ?

Because I can never remember how to use with_items, here's an example :
- name: unmount volume groups
  mount:
    src: "{{ item.device }}"
    name: "{{ item.mountPoint }}"	# will be 'path' in Ansible 2.3+
    state: unmounted
  with_items:
    - { device: '/dev/mapper/vg_data-data', mountPoint: '/var/lib/docker/devicemapper' }
    - { device: '/dev/mapper/vg_data-data', mountPoint: '/var/lib/docker' }

Make Ansible role file tree easily :

#!/usr/bin/env bash
###########################################################
# Create the directory structure and some files for an Ansible role
###########################################################

rolesDirectory='roles'	# adjust to your playbook layout
newRoleName="$1"

[ -z "$newRoleName" ] && {
	echo "Usage : $0 <new role name>"
	exit 1
	}

mkdir -p "$rolesDirectory/$newRoleName/"{tasks,handlers,templates,files,vars,meta}
echo '---' | tee "$rolesDirectory/$newRoleName/"{tasks,handlers,vars,meta}/main.yml > /dev/null

  1. enter the "new role" directory
  2. for subDir in files handlers meta tasks templates vars; do mkdir -p "$subDir"; newFile="$subDir/main.yml"; echo '---' > "$newFile"; git add "$newFile"; done
  3. don't forget to commit


Regexp-search + replace line only when the regexp matches (source)

  • use backrefs: yes
    - name: do something
      lineinfile:
        dest: /path/to/file
        regexp: '^what we are looking for$'
        line: 'the new line that will replace the whole line matched by the regexp above'
        backrefs: yes
  • OR update the regexp so that it matches both the original AND changed lines

unsupported parameter for module: path

Before Ansible 2.3, path was named dest, which explains those frequent error messages. This is actually in the manual, but VERY easy to miss if going too fast.

Using tags to slice playbooks

Usage :

Let's consider myPlaybook.yml :
- hosts: all
  roles:
    - roleA
    - roleB
    - roleC

- hosts: sql
  roles:
    - roleD
    - roleE
  tags:
    - sqlOnly
It is possible to play roles roleD and roleE on members of the sql host group with :
ansible-playbook -i myInventoryFile --diff --check -t sqlOnly myPlaybook.yml
To specify several tags :
ansible-playbook -t 'tag1,tag2'
tags may also be applied to tasks, blocks, etc.
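Tagging a single task works the same way (task name chosen to match the example above) :

```yaml
- name: restart the SQL service
  service:
    name: mysql
    state: restarted
  tags:
    - sqlOnly
```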

Special tags (source) :

  • always : will always run a task, unless explicitly skipped with --skip-tags always
  • never : will prevent a task from running unless a tag is explicitly requested (i.e. never must be associated with another tag)
  • tagged : will run tasks that have at least 1 tag
  • untagged : will run tasks that have no tag
  • all : will run all tasks
By default, Ansible runs as if --tags all had been specified.

Tags inheritance (source) :

Tags added to :
  • a play
  • or to statically imported tasks and roles (i.e. when using an import_... directive)
adds those tags to all of the contained tasks.
This is referred to as tag inheritance.
Tag inheritance is not applicable to dynamic inclusions such as include_role and include_tasks.
When tags is applied to... it affects the object having the tag ...and its children too
a play Yes Yes
anything that is import_*ed Yes Yes
anything that is include_*ed Yes No

Related directives :

Not only do tags allow running specific parts of a playbook, they also allow skipping parts :
  • to skip a single tag :
    ansible-playbook myPlaybook.yml [options] --skip-tags tagToSkip
  • to skip several tags :
    ansible-playbook myPlaybook.yml [options] --skip-tags 'tag1,tag2'
See also : execute subparts of a playbook (without relying on tags)

ansible-playbook : prompt for passwords with -k and -K

Everything below also applies to ad-hoc commands launched with ansible.


  • kevin is sitting at ansibleMaster, where Ansible is installed
  • kevin wants to perform some actions, with Ansible, on ansibleSlave

Running a playbook

kevin@ansibleMaster$ansible-playbook [some work ...] ansibleSlave
  • will connect to ansibleSlave as kevin via SSH and will do [some work ...]
  • requires kevin to be able to "ssh kevin@ansibleSlave"
  • works the same with or without SSH keys


If, on ansibleMaster, /home/kevin/.ssh/config looks like :
Host ansibleSlave
	User stuart
	IdentityFile ~/.ssh/id_rsa
  • Then :

    kevin@ansibleMaster$ansible-playbook [some work ...] ansibleSlave

    will do [some work ...] on ansibleSlave as stuart, still via SSH, and using key authentication (so no password required).
  • Otherwise (no key authentication configured), Ansible would need to be instructed to prompt for stuart's password on ansibleSlave with -k.

When escalated privileges (i.e. sudo) are necessary

You'll have to include into the ansible / ansible-playbook command line :
  • for the SSH part (in all cases) :
    • make sure you can ssh myself@ansibleSlave
    • without SSH keys : -k
    • with SSH keys : nothing special
  • for the sudo part :
    • playbook using become : -K
    • ad-hoc command or playbook not using become : -bK
Commands that require escalated privileges need not specify sudo : this would be redundant with -b :

ansible all -i server, -bK -m command -a 'whoami'
server | SUCCESS | rc=0 >>
root
Without -b, with sudo :
ansible all -i server, -K -m command -a 'sudo whoami'
server | FAILED | rc=1 >>
sudo: no tty present and no askpass program specified

When "ansible -bkK" keeps failing

If you can successfully run commands manually (ssh myself@ansibleSlave + sudo command) while ansible(-playbook)? -bkK fails :
TASK [setup] *******************************************************************
fatal: [ansibleSlave]: UNREACHABLE! => {"changed": false, "msg": "Authentication failure.", "unreachable": true}
check the points below.
Make sure your local SSH configuration (~/.ssh/config) doesn't interfere
A User directive can send a different login name : grep -i user ~/.ssh/config
Make sure the remote SSH configuration (/etc/ssh/sshd_config) is still appropriate
It _may_ not be up-to-date on a given host because the playbook managing it has not been run for a long time, hence missing / colliding options
Make sure the SSH connection is opened the way you mean it to be

For instance, you may have typed :

ansible-playbook myPlaybook.yml -l 'ansibleSlave' -u $USER -bkK -D
expecting the SSH connection to be opened as $USER (i.e. ssh $USER@ansibleSlave), then sudo to run the playbook tasks.

Check it by making Ansible verbose :

ansible-playbook -vvv myPlaybook.yml -l 'ansibleSlave' -u $USER -bkK -D

The verbose output shows that, despite my specification, the connection is still opened as root.

Turned out that myPlaybook.yml looks like :

- hosts: all
  remote_user: root		# GOTCHA!!!
... which explains _WHY_.

This is a BAD practice which effectively forces Ansible to "ssh root@ansibleSlave" (this should _NOT_ be possible).

I can see only ONE _very specific_ usage for remote_user: root : for tasks to run on virtual machines that have just been spawned : they have no local user accounts, no sudo / domain / LDAP configured. But I guess there may be cleaner ways than hardcoding this...

To workaround this :

ansible-playbook myPlaybook.yml -l 'ansibleSlave' --extra-vars "ansible_user=$USER" -kK -D


Usage :

Run an Ansible playbook. Typical usage :

ansible-playbook -i inventory --diff --check myPlaybook.yml

Flags :

CLI flags are common to several Ansible commands / tools. See this dedicated article.

How to specify the target host(s) of a playbook

Single target host :
- hosts: hostname
Several target hosts, as a list :
- hosts: host1, host2, host3
Several target hosts (server01.mycompany.tld, server02.mycompany.tld) as a regular expression :
- hosts: ~server0[12]\.mycompany\.tld
When using this syntax, make sure you're not limiting the effective target with the ansible-playbook -l flag.

The inventory file

The inventory file :
Default groups (source)
  • The group all includes all slaves.
  • There is also another group named ungrouped. The logic behind Ansible is that all slaves must belong to at least 2 groups : all and "another one". If there is no such "other one", ungrouped plays that role.
Both groups will always exist and don't need to be explicitly declared.
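A sketch (hostnames are made up) : with the inventory below, lonely.example.com belongs to all and ungrouped, while web1.example.com belongs to all and webservers :

```ini
lonely.example.com

[webservers]
web1.example.com
```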

Typical Ansible playbook file tree

[root] ...well, this is the playbook root directory
	myPlaybook.yml	the main playbook file. This often "includes" the apache.yml, mysql.yml, ...
	apache.yml	playbook-level stuff for Apache (if any)
	mysql.yml	playbook-level stuff for MySQL (if any)
	inventory1	inventory file
	inventory2	inventory file
	inventory3	inventory file
	roles
		role1
			defaults
				main.yml	variables of role1 (read more about variable precedence)
			files	files that will be copied to the slaves (is it supposed to mimic the destination file tree ?)
			handlers	Changing a configuration file may involve restarting the corresponding daemon. However, multiple changes to the same file must restart the daemon only once. Handlers, triggered by the notify directive, implement this event-driven behavior (details 1, 2).
			meta	this is where role dependencies are described. More about Conditional role dependencies.
				main.yml	list of roles (and related parameters) to play before playing role1
			tasks
				main.yml	tasks of role1
			templates	Files or snippets that will be used to generate files on the slaves via the templating engine (is it supposed to mimic the destination file tree ?)
			vars	additional variables ? Why not in '../defaults/main.yml' ? (read more about variable precedence)
				foo.yml	?
				bar.yml	?
		role2	another role, with a similar structure

Ansible playbooks

Definitions :

As seen in the Introduction to Ansible article, Ansible can be used to perform ad-hoc tasks. But it can also execute procedures called playbooks (examples).

Playbooks :
  • are written in YAML.
  • are composed of one or more plays. A play is a list of hosts with associated roles and tasks. A task is, basically speaking, calling an Ansible module, as seen in the Introduction to Ansible article.
Important things about playbooks (source) :
  • When running the playbook, which runs top to bottom, hosts with failed tasks are taken out of the rotation for the entire playbook. If things fail, simply correct the playbook file and rerun.
  • Modules (hence playbooks) are idempotent : if you run them again, they will make only the changes they must in order to bring the system to the desired state. Ansible relies on facts before taking any action; and playbooks must be designed :
    • NOT as a list of actions to do
    • but as the description of a desired state
    For example, if you instruct Ansible to install a package, it will first detect whether this package is already installed or not (during the facts gathering preliminary step), then act accordingly. This makes it very safe to rerun the same playbook multiple times : it won’t change things unless it has to do so.
    except for shell and command modules, where idempotence cannot be guaranteed automatically (details).
  • Each task has a name parameter which is included in the output from running the playbook. This is for humans only and should be as descriptive as possible.
  • It is usually wise to track playbooks in SCM tools such as Git.
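As a sketch of the "desired state" philosophy (the package name is only an example) : run twice, the task below installs ntp the first time and merely reports ok the second time :

```yaml
- name: make sure ntp is installed
  apt:
    name: ntp
    state: present
```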

My first playbook :

  1. Save this as playbook.yml :
    # this is my 1st playbook
    - hosts: slaves
      tasks:
        - name: test connection
          ping:
    slaves is the group name defined in the inventory file.
  2. Launch it : ansible-playbook playbook.yml
  3. It may return :
    PLAY [slaves] *****************************************************************
    GATHERING FACTS ***************************************************************
    ok: []
    ok: []
    TASK: [test connection] *******************************************************
    ok: []
    ok: []
    PLAY RECAP ********************************************************************
    	: ok=2	changed=0	unreachable=0	failed=0
    	: ok=2	changed=0	unreachable=0	failed=0

A full playbook :

To know what's performed by this playbook, just read the name lines.
# playbook_web.yml

- hosts: one:two

  vars:
    apacheUser:        'www-data'
    apacheGroup:       'www-data'
    documentRoot:      '/var/www/test/' # final '/' expected
    websiteLocalPath:  '/root/'
    websiteConfigFile: 'test.conf'

  tasks:
  - name: install Apache
    apt: name=apache2 state=present

  - name: disable default Apache website
    shell: a2dissite default

  - name: define Apache FQDN
    shell: echo "ServerName localhost" > /etc/apache2/conf.d/fqdn

  - name: create docRoot
    file: state=directory path={{ documentRoot }} owner={{ apacheUser }} group={{ apacheGroup }}

  - name: deploy website
    copy: src={{ websiteLocalPath }}index.html dest={{ documentRoot }} owner={{ apacheUser }} group={{ apacheGroup }}

  - name: deploy website conf
    copy: src={{ websiteLocalPath }}{{ websiteConfigFile }} dest=/etc/apache2/sites-available/

  - name: enable website
    shell: a2ensite {{ websiteConfigFile }}

  - name: reload Apache
    service: name=apache2 state=reloaded enabled=yes

- hosts:
  connection: local
  tasks:
  - name: check everything is ok on webserver 'one'
    shell: wget -S -O - -Y off

  - name: check everything is ok on webserver 'two'
    shell: wget -S -O - -Y off

How to run actions on the master in a playbook (source) :

Let's say you just deployed a new web server + a web application. Wouldn't it be great if you could run some checks at the end of the playbook, just to make sure everything's responding as expected ? To do so, you'd have to run some commands from the master host : use this code as the last play of your playbook :

- hosts:
  connection: local
  tasks:
  - name: make sure blah is blah.
    shell: 'myCheckCommand'
If myCheckCommand returns a Unix success :

Testing with myCheckCommand being true, execution of this specific play returns :

PLAY [] **************************************************************

GATHERING FACTS ****************************************************************
ok: []

TASK: [make sure blah is blah.] ***********************************************
changed: []

PLAY RECAP *********************************************************************
	: ok=2	changed=1	unreachable=0	failed=0
If myCheckCommand returns a Unix failure :

Now with false, output becomes :

PLAY [] **************************************************************

GATHERING FACTS ****************************************************************
ok: []

TASK: [make sure blah is blah.] ***********************************************
failed: [] => {"changed": true, "cmd": "false", "delta": "0:00:00.015746",
	"end": "2014-10-16 15:11:49.153606", "rc": 1, "start": "2014-10-16 15:11:49.137860"}

FATAL: all hosts have already failed -- aborting

PLAY RECAP *********************************************************************
	to retry, use: --limit @/root/fileNameOfMyPlaybook.retry
	: ok=1	changed=0	unreachable=0	failed=1

Playbooks with roles (source, role file tree) :

Initial Ansible syntax (early versions) :
- hosts: webservers
  roles:
    - role_X
    - role_Y
These are processed as static imports.
Updated syntax (for Ansible 2.4+) :
- hosts: webservers
  tasks:
    - import_role:
        name: role_X
    - include_role:
        name: role_Y
You may choose between import_role and include_role considering the static or dynamic import that will be performed.
Old vs new syntax :
"Old" syntax (compact mode) :
- hosts: webservers
  roles:
  - { role: role_X, myVariable: "42", tags: "tag1, tag2" }
"Old" syntax (verbose mode) :
- hosts: webservers
  roles:
    - role: role_X
      myVariable: "42"
      tags:
        - tag1
        - tag2
"New" syntax :
- hosts: webservers
  tasks:
    - import_role:
        name: role_X
      vars:
        myVariable: "42"
      tags:
        - tag1
        - tag2

Introduction to Ansible

Usage :

Setup :

Ansible is installed on a master host to rule them all. There's nothing to install on slaves (except SSH keys).

Ansible master on a Debian 7.6 (source) :
  1. apt-get install python-dev
    • Ansible uses Python 2.7 and has not yet been ported to version 3.x (See "notes" on installation page).
    • Newer versions of Debian and of Ansible (2.2+) are introducing support of Python 3 as a technology preview (Python 3 support by Ansible)
  2. easy_install pip
  3. (not necessary) /usr/local/bin/pip install paramiko PyYAML jinja2 httplib2
  4. /usr/local/bin/pip install ansible
  5. Enjoy !

Ansible master on a Debian Buster (inspired by) :
  1. as root :
    apt install python3-pip
  2. as a non-root user : setup + activate a Python virtual environment
  3. still as a non-root user, and from within the virtual environment (if present) :
    pip3 install -U ansible
Setup SSH on the master (source) :
  1. Create a new key : ssh-keygen -t rsa will generate the 2048-bit /root/.ssh/id_rsa RSA private key.
  2. Deploy it to the slave(s)
  3. Configure SSH accordingly (/root/.ssh/config) :
    Host slave1
    	user		root
    	IdentityFile	~/.ssh/id_rsa
    Host slave2
    	user		root
    	IdentityFile	~/.ssh/id_rsa
  4. List slave(s) into the inventory file :
    	# slave1
    	# slave2
  5. Check communication between master and slave(s) :
    ansible all -m ping -u root
     | success >> {
    	"changed": false,
    	"ping": "pong"
    }
     | success >> {
    	"changed": false,
    	"ping": "pong"
    }
  6. It works !!!
  7. Define groups of hosts in the inventory file

Flags :

CLI flags are common to several Ansible commands / tools. See this dedicated article.

Example :

Get information about slaves (source) :

ansible all -m setup
This will output a VERY long list of inventory information (aka facts) about the target(s). To get detailed information on a specific topic, you can apply a filter :
ansible two -m setup -a 'filter=ansible_processor*'
 | success >> {
	"ansible_facts": {
		"ansible_processor": [
			"Intel(R) Core(TM)2 Duo CPU	 E8400 @ 3.00GHz"
		],
		"ansible_processor_cores": 1,
		"ansible_processor_count": 1,
		"ansible_processor_threads_per_core": 1,
		"ansible_processor_vcpus": 1
	},
	"changed": false
}

Run shell commands on slaves (source) :

ansible all -a "hostname"
run a basic command on all slaves :
 | success | rc=0 >>
This is for basic commands (single binary, no options).
ansible one -a "echo $(hostname)"
This is executed on the master because double quotes are interpreted locally :
 | success | rc=0 >>
ansible one -a 'echo $(hostname)'
This is sent to the right slave but not executed, because shell/subshell commands are not interpreted :
 | success | rc=0 >>
ansible one -m shell -a 'echo $(hostname)'
Thanks to the shell module, this command is executed as expected :
 | success | rc=0 >>
ansible all -m shell -a 'echo $(hostname) | grep -e "[a-z]"'
It's possible to run "complex" shell commands now :
 | success | rc=0 >>
ansibleSlave2 | success | rc=0 >>

File transfer (source) :

Ansible can scp files from the master to its slaves :

ansible all -m copy -a "src=/home/test.txt dest=/home/"

It's possible to rename the file during the copy by specifying a different destination name : ... "src=/home/test.txt dest=/home/otherName"

Manage packages (source) :

Ansible can manage software on its slaves using dedicated modules :

  • apt for Debianoids. This module is part of the default install.
  • yum for Red Hatoids.

Possible values : installed, latest, removed, absent, present.

Make sure the package openssh-server is installed :
ansible all -m apt -a "name=openssh-server state=installed"
 | success >> {
	"changed": false
}
 | success >> {
	"changed": false
}
If the specified package was not already installed, this will install it. The FULL command output (install procedure) will be reported by Ansible.
Make sure the package apache2 is absent :
ansible all -m apt -a "name=apache2 state=absent"
 | success >> {
	"changed": false
}
 | success >> {
	"changed": false
}

Users and groups (source, user module) :

Create a user account for Bob :
ansible one -m user -a "name=bob state=present"
 | success >> {
	"changed": true,
	"comment": "",
	"createhome": true,
	"group": 1001,
	"home": "/home/bob",
	"name": "bob",
	"shell": "/bin/sh",
	"state": "present",
	"system": false,
	"uid": 1001
}
And if I run the same command again, whereas Bob's account already exists :
 | success >> {
	"append": false,
	"changed": false,
	"comment": "",
	"group": 1001,
	"home": "/home/bob",
	"move_home": false,
	"name": "bob",
	"shell": "/bin/sh",
	"state": "present",
	"uid": 1001
}
Delete Bob's user account :
ansible one -m user -a "name=bob state=absent remove=yes"
 | success >> {
	"changed": true,
	"force": false,
	"name": "bob",
	"remove": true,
	"state": "absent",
	"stderr": "userdel: bob mail spool (/var/mail/bob) not found\n"
}
Running the same command again (no user account named Bob anymore) :
 | success >> {
	"changed": false,
	"name": "bob",
	"state": "absent"
}
remove=yes instructs Ansible to delete the homedir as well.
remove=no is equivalent to not using the "remove" option at all (defaults to no), and leaves the homedir untouched.
List existing user accounts :
  • ansible one -m shell -a 'less /etc/passwd | cut -d ":" -f 1'
  • ansible one -m shell -a 'sed -r "s/^([^:]+).*/\1/" /etc/passwd'
  • ansible one -m shell -a 'awk -F ":" "{print \$1}" /etc/passwd'
This gets complex because of escaping quotes and some special characters :
 | success | rc=0 >>


Deploying from Git (source) :

ansible webservers -m git -a "repo=git:// dest=/srv/myapp version=HEAD"

Try this with REAL stuff to deploy.

Manage services (sources 1, 2) :

Start a service :
ansible all -m service -a "name=ssh state=started"
 | success >> {
	"changed": false,
	"name": "ssh",
	"state": "started"
}
 | success >> {
	"changed": false,
	"name": "ssh",
	"state": "started"
}
Accepted states :
  • started : start service if not running
  • stopped : stop service if running
  • restarted : always restart
  • reloaded : always reload
  • running : ?
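The reloaded / restarted states are typically driven by handlers (see the role file tree section) : a minimal sketch, with made-up file names :

```yaml
tasks:
  - name: deploy the sshd configuration
    copy:
      src: sshd_config
      dest: /etc/ssh/sshd_config
    notify: restart ssh		# fires only if the file actually changed

handlers:
  - name: restart ssh
    service:
      name: ssh
      state: restarted
```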