Free-form parameters (shell, command, script, …) accept the args keyword to provide options (snippet below taken from the shell documentation) :

- name: This command will change the working directory to someDir/ and will only run when someDir/someLog.txt doesn't exist
  ansible.builtin.shell: someScript.sh >> someLog.txt
  args:
    chdir: someDir/
    creates: someLog.txt

Whether you may / may not use args mostly depends on how you format / indent your code. Compare :
---
- hosts: 127.0.0.1
  connection: local
  gather_facts: no
  tasks:
    - name: "Play with the 'shell' module"
      shell: touch /tmp/test
        executable: /bin/bash
returns :

ERROR! Syntax Error while loading YAML.
  mapping values are not allowed in this context
The error appears to be in '/playbook.yml': line n, ...    (line n refers to the executable: line)

whereas :
---
- hosts: 127.0.0.1
  connection: local
  gather_facts: no
  tasks:
    - name: "Play with the 'shell' module"
      shell: touch /tmp/test
      args:
        executable: /bin/bash
works fine (except for a warning suggesting to use the file module with state=touch rather than running touch within shell, which can easily be silenced with warn: no).
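For instance (a minimal sketch; warn is a module option of shell / command — note it has been deprecated, then removed, in recent ansible-core versions) :

- name: "Play with the 'shell' module"
  shell: touch /tmp/test
  args:
    executable: /bin/bash
    warn: no    # silences the "Consider using the file module with state=touch" warning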
The reason why you may / may not need args is not related to Ansible but to the YAML syntax itself. Indeed, code formatted like :

foo:
    bar
    baz: foobar

suggests that :
- foo: followed by indented lines : foo is a dictionary
- baz: foobar is a key/value pair of that dictionary
- bar... well, we don't really know, but it makes the YAML syntax checker unhappy

This is exactly what happens with :

shell: touch /tmp/test
    executable: /bin/bash
And args to the rescue :

shell: touch /tmp/test      # this is a string variable
args:                       # this is a dict
    executable: /bin/bash   # this is a key/value of that dict
And now, everybody's happy!

No args here, but this works fine too :
---
- hosts: 127.0.0.1
  connection: local
  gather_facts: no
  tasks:
    - name: "Play with the 'shell' module"
      shell:                    # nothing after the ':' : shell is now a dict
        cmd: touch /tmp/test    # regular key/value of that dict, introducing the cmd keyword
        executable: /bin/bash
---
# ANSIBLE_LOCALHOST_WARNING=false ansible-playbook test.yml
- hosts: 127.0.0.1
  connection: local
  gather_facts: no
  tasks:
    - debug:
        msg: "hello world!"
| Flag | Usage |
|---|---|
| [action] | one of : … |
| --vault-pass-file passwordFile, --vault-password-file passwordFile | read the Vault password from passwordFile instead of prompting for it. This can also be set once and for all in ansible.cfg : [defaults] vault_password_file = path/to/.vault_password |

The create / edit actions open the file with the editor defined by the EDITOR environment variable.

Encrypted variable files can be loaded like regular ones :

- hosts: myServer
  vars_files:
    - path/to/encryptedFile.yml
- name: "Load vault data"
include_vars:
file: path/to/encryptedFile.yml
name: vaultData
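The loaded values can then be used like any other variable, and the Vault password provided at run time (a sketch; someKey is a hypothetical key of encryptedFile.yml) :

- debug:
    msg: "{{ vaultData.someKey }}"    # someKey : hypothetical key of the encrypted file

ansible-playbook myPlaybook.yml --vault-password-file path/to/.vault_password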
Using path/to/ansible.cfg as config file
[defaults]
# paths
inventory = path/to/inventoryFile
log_path = ./ansible.log
lookup_plugins = path/to/lookup_plugins
#remote_tmp = /tmp/${USER}/ansible    # deprecated ?, details
roles_path = path/to/roles
vault_password_file = /var/lib/ansible/.vault_password

# ansible internal stuff
deprecation_warnings = True
forks = 20    # details
gathering = smart
host_key_checking = False
internal_poll_interval = 0.05
interpreter_python = auto_silent
retry_files_enabled = False

[ssh_connection]
pipelining = True    # conflicts with privilege escalation (become + sudo). For details, please see the docs
retries = 5
ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o PreferredAuthentications=password,publickey
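One way to make Ansible use this specific file is the ANSIBLE_CONFIG environment variable (the path is an assumption, adjust to your setup) :

ANSIBLE_CONFIG=path/to/ansible.cfg ansible-playbook myPlaybook.yml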
By default, the SSH connection to a slave stays open until the task completes, which is fine for short tasks but causes problems when the task duration exceeds the SSH timeout.

Use cases :
- asynchronous tasks are run "detached" from the SSH connection, but still "blocking" the play
- the play continues while a background task is being executed

About async and poll :
- if a task's register directive is not executed, anything based on the registered value will fail because the expected variable does not exist
- async + poll: n (n > 0) = avoid connection timeout (source)
- async + poll: 0 = background task (source), aka "fire and forget" mode
---
- hosts: 127.0.0.1
  connection: local
  gather_facts: no
  tasks:
    - name: Job A
      shell: for i in 1 2 3 4 5_end; do sleep 1; echo "job A\t$i\t$(date)" >> myTempFile; done
      async: 5
      poll: 0
    - name: Job B
      shell: for i in 1 2 3_end; do sleep 1; echo "job B\t$i\t$(date)" >> myTempFile; done
    - debug:
        msg: "{{ lookup('file', 'myTempFile') }}"
ANSIBLE_LOCALHOST_WARNING=false ansible-playbook myPlaybook.yml; sleep 3; cat myTempFile; rm myTempFile
PLAY [127.0.0.1] ************************************************************************************
TASK [Job A] ****************************************************************************************
changed: [127.0.0.1]
TASK [Job B] ****************************************************************************************
changed: [127.0.0.1]
TASK [debug] ****************************************************************************************
ok: [127.0.0.1] => {
"msg": "
job A 1 Wed 27 May 2020 12:13:31 PM CEST    <-- file contents at the end of the playbook
job B 1 Wed 27 May 2020 12:13:31 PM CEST
job A 2 Wed 27 May 2020 12:13:32 PM CEST
job B 2 Wed 27 May 2020 12:13:32 PM CEST
job A 3 Wed 27 May 2020 12:13:33 PM CEST
job B 3_end Wed 27 May 2020 12:13:33 PM CEST
"}
PLAY RECAP ******************************************************************************************
127.0.0.1 : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
job A 1 Wed 27 May 2020 12:13:31 PM CEST    <-- file contents after the end of the playbook + pause
job B 1 Wed 27 May 2020 12:13:31 PM CEST
job A 2 Wed 27 May 2020 12:13:32 PM CEST
job B 2 Wed 27 May 2020 12:13:32 PM CEST
job A 3 Wed 27 May 2020 12:13:33 PM CEST
job B 3_end Wed 27 May 2020 12:13:33 PM CEST
job A 4 Wed 27 May 2020 12:13:34 PM CEST
job A 5_end Wed 27 May 2020 12:13:35 PM CEST
---
- hosts: 127.0.0.1
  connection: local
  gather_facts: no
  tasks:
    - name: Init. things
      shell: "[ -f myTempFile ] && rm myTempFile || :"
    - name: Job A
      shell: for i in 1 2 3 4 5_end; do sleep 1; echo "job A\t$i\t$(date)" >> myTempFile; done; echo 'job A OK'
      async: 20
      poll: 0
      register: jobA
    - name: Job B
      shell: for i in 1 2 3_end; do sleep 1; echo "job B\t$i\t$(date)" >> myTempFile; done; echo 'job B OK'
    - name: Wait for 'Job A' to end
      async_status:
        jid: '{{ jobA.ansible_job_id }}'
      register: jobAStatus
      until: jobAStatus.finished
      retries: 10    # number of attempts
      delay: 1       # seconds between attempts
    - debug:         # when reaching this point, the task named "Job A" is finished
        var: jobA
    - debug:
        var: jobAStatus
    - debug:
        msg: "{{ lookup('file', jobA.results_file) }}"
ANSIBLE_LOCALHOST_WARNING=false ansible-playbook myPlaybook.yml
PLAY [127.0.0.1] ************************************************************************************
TASK [Init. things] *********************************************************************************
changed: [127.0.0.1]
TASK [Job A] ****************************************************************************************
changed: [127.0.0.1]
TASK [Job B] ****************************************************************************************
changed: [127.0.0.1]
TASK [Wait for 'Job A' to end] **********************************************************************
FAILED - RETRYING: Wait for 'Job A' to end (10 retries left).
FAILED - RETRYING: Wait for 'Job A' to end (9 retries left).
changed: [127.0.0.1]
TASK [debug] ****************************************************************************************
ok: [127.0.0.1] => {
"jobA": { the variable I registered to use async_status
"ansible_job_id": "648023500521.29516",
"changed": true,
"failed": false, the background task exit status
"finished": 0, I expected this to be 1 ()
"results_file": "/home/bob/.ansible_async/648023500521.29516",
"started": 1
}
}
TASK [debug] ****************************************************************************************
ok: [127.0.0.1] => {
"jobAStatus": { the variable I registered to use until with async_status
"ansible_job_id": "648023500521.29516",
"attempts": 3,
"changed": true,
"cmd": "for i in 1 2 3 4 5_end; do sleep 1; echo \"job A\\t$i\\t$(date)\" >> myTempFile; done; echo 'job A OK'",
"delta": "0:00:05.016975",
"end": "2020-05-27 13:41:46.752286",
"failed": false,
"finished": 1,
"rc": 0,
"start": "2020-05-27 13:41:41.735311",
"stderr": "",
"stderr_lines": [],
"stdout": "job A OK",
"stdout_lines": [
"job A OK"
]
}
}
TASK [debug] ****************************************************************************************
ok: [127.0.0.1] => {
"msg": { contents of the job's result_file
"changed": true,
"cmd": "for i in 1 2 3 4 5_end; do sleep 1; echo \"job A\\t$i\\t$(date)\" >> myTempFile; done; echo 'job A OK'",
"delta": "0:00:05.016975",
"end": "2020-05-27 13:41:46.752286",
"invocation": {
"module_args": {
"_raw_params": "for i in 1 2 3 4 5_end; do sleep 1; echo \"job A\\t$i\\t$(date)\" >> myTempFile; done; echo 'job A OK'",
"_uses_shell": true,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true,
"warn": true
}
},
"rc": 0,
"start": "2020-05-27 13:41:41.735311",
"stderr": "",
"stdout": "job A OK"
}
}
PLAY RECAP ******************************************************************************************
127.0.0.1 : ok=7 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
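For reference, the "pong" replies below come from the ping module; a sketch of the ad-hoc command producing them (slave1..slave3 being hosts of the inventory) :

ansible all -i inventory -m ping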
slave1 | SUCCESS => {
"changed": false,
"ping": "pong"
}
slave2 | SUCCESS => {
"changed": false,
"ping": "pong"
}
slave3 | SUCCESS => {
"changed": false,
"ping": "pong"
}
- import* directives (import_tasks, import_playbook, ...)
- include* directives (include_tasks, include_role, ...)

These flags are common to the ansible and ansible-playbook commands :
| Flag | Usage |
|---|---|
| -a 'arguments', --args='arguments' | Pass arguments to the module specified with -m : -a 'arg1Name=arg1Value arg2Name=arg2Value' |
| -b, --become | Run operations with become |
| -C, --check | Do not make any changes on the remote system, but test resources to see what might have changed. This can not scan all possible resource types and is only a simulation. |
| -D, --diff | When changing files and templates, show the differences in those files. Works well combined with -C |
| -e, --extra-vars | Specify additional variables : as key=value pairs, as a quoted JSON string, or as @someFile.json / @someFile.yml |
| -f n, --forks=n | Launch up to n parallel processes (forks) |
| -i inventory, --inventory=inventory | inventory can be : a path to an inventory file or directory, or a comma-separated list of hosts |
| -k, --ask-pass | Prompt for the connection password, if it is needed for the transport used (e.g. SSH). details |
| -K, --ask-become-pass | Ask for privilege escalation password (i.e. sudo password). details |
| -l pattern, --limit=pattern | Limit the playbook execution to slaves matching pattern. Let's imagine you have web1.example.com, web2.example.com and sql.example.com slaves and want to alter the web servers only : pattern could be specified as a list (web1.example.com,web2.example.com) or with a wildcard (web*) |
| -m moduleName, --module-name=moduleName | Execute module moduleName (module index) |
| -t tags, --tags tags | Only run plays and tasks tagged with these tags. See also : Using tags to slice playbooks |
| -u remoteUser, --user remoteUser | Connect to ansibleSlave as remoteUser |
| -v, --verbose | Verbose mode, -vvv to increase verbosity, -vvvv to enable connection debugging |
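For instance (a sketch combining several of these flags; the pattern, inventory and user are assumptions) :

ansible 'web*' -i inventory -m command -a 'whoami' -u stuart -b -K -v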
- hosts: all
  roles:
    - roleA
    - roleB
    - roleC

- hosts: sql
  roles:
    - roleD
    - roleE
  tags:
    - sqlOnly
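With such a playbook (assuming it is saved as myPlaybook.yml), the SQL-specific play can be selected at run time :

ansible-playbook myPlaybook.yml --tags sqlOnly    # runs only the play tagged 'sqlOnly'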
- name: Debug
  ansible.builtin.debug:
    msg: '{{ myVariable }}'
  tags: [ never, debug ]
This task runs when you specifically request the never or debug tag (i.e. --tags never or --tags debug)
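For example (myVariable being a hypothetical variable to inspect) :

ansible-playbook myPlaybook.yml --tags debug -e myVariable=42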
| When tags is applied to... | it affects the object having the tag | ...and its children too |
|---|---|---|
| a play | Yes | Yes |
| anything that is import_*ed | Yes | Yes |
| anything that is include_*ed | Yes | No |
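A sketch illustrating the last two rows (imported.yml and included.yml are hypothetical task files) :

- import_tasks: imported.yml    # static import : every task of imported.yml inherits the 'web' tag
  tags: [ web ]
- include_tasks: included.yml   # dynamic include : only the include statement itself is tagged
  tags: [ sql ]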
Everything below also applies to ad-hoc commands launched with ansible.
Host ansibleSlave
    User stuart
    IdentityFile ~/.ssh/id_rsa
| | for the SSH part | for the sudo part |
|---|---|---|
| playbook using become | … | … |
| ad-hoc command or playbook not using become | … | … |

Commands that require escalated privileges need no explicit sudo : it would be redundant with -b :
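A sketch of the ad-hoc commands producing the outputs below (the first one with -K to provide the sudo password, the second one without it) :

ansible ansibleSlave -b -K -a whoami
ansible ansibleSlave -b -a whoami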
ansibleSlave | SUCCESS | rc=0 >> root
ansibleSlave | FAILED | rc=1 >> sudo: no tty present and no askpass program specified
TASK [setup] *******************************************************************
fatal: [ansibleSlave]: UNREACHABLE! => {"changed": false, "msg": "Authentication failure.", "unreachable": true}
When this happens, check the points below.

The User directive of ~/.ssh/config can send a different login name than the one you expect; check it with : grep -i user ~/.ssh/config

For instance, you may have run the playbook expecting the SSH connection to be opened as $USER (i.e. ssh $USER@ansibleSlave), then sudo to run the playbook tasks. Check it by making Ansible verbose :
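(a sketch; any -vvv-level verbosity reveals the connection details :)

ansible-playbook -vvv -i inventory myPlaybook.yml | grep 'ESTABLISH SSH CONNECTION'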
<10.27.25.1> ESTABLISH SSH CONNECTION FOR USER: root
Turned out that myPlaybook.yml looks like :

- hosts: all
  remote_user: root    # GOTCHA !!!
  roles:
    ...

... which explains _WHY_. This is a BAD^(10^100) practice which effectively forces Ansible to ssh root@ansibleSlave (this should _NOT_ be possible).

There may be a use case for remote_user: root : tasks to run on virtual machines that have just been spawned, with no local user accounts and no sudo / domain / LDAP configured yet. But I guess there may be cleaner ways than hardcoding this...

To work around this :
ansible-playbook -i inventory --diff --check myPlaybook.yml
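If the hardcoded remote_user is the culprit, a possible fix (a sketch; the ansible_user variable takes precedence over the remote_user play keyword) :

ansible-playbook -i inventory -e ansible_user=$USER --diff --check myPlaybook.yml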