ansible list files in a directory - list

Can someone explain to me why this doesn't work? I want to get a list of files within a directory and use it as the input for a loop.
---
tasks:
  - set_fact:
      capabilities: []
  - name: find CE_Base capabilities
    find:
      paths: /opt/netsec/ansible/orchestration/capabilities/CE_BASE
      patterns: '*.yml'
    register: CE_BASE_capabilities
  - name: debug_files
    debug:
      msg: "{{ item.path }}"
    with_items: "{{ CE_BASE_capabilities.files }}"
  - set_fact:
      thispath: "{{ item.path }}"
      capabilities: "{{ capabilities + [ thispath ] }}"
    with_items: "{{ CE_BASE_capabilities.files }}"
  - name: Include CE_BASE
    include_tasks: /opt/netsec/ansible/orchestration/process_capabilities_CE_BASE.yml
    loop: "{{ capabilities }}"
Edit:
This code is attempting to create a list called capabilities, which contains the files in a particular directory.
When I ran this code without trying to get the files automatically, it looked like this:
- hosts: localhost
  vars:
    CE_BASE_capabilities:
      - '/opt/netsec/ansible/orchestration/capabilities/CE_BASE/CE_BASE_1.yml'
      - '/opt/netsec/ansible/orchestration/capabilities/CE_BASE/CE_BASE_2.yml'
  tasks:
    - name: Include CE_BASE
      include_tasks: /opt/netsec/ansible/orchestration/process_capabilities_CE_BASE.yml
      loop: "{{ CE_BASE_capabilities }}"

Don't define thispath as a fact but as a local var on the set_fact task. Besides that, you don't need to initialise capabilities if you use the default filter.
- vars:
    thispath: "{{ item.path }}"
  set_fact:
    capabilities: "{{ capabilities | default([]) + [ thispath ] }}"
  with_items: "{{ CE_BASE_capabilities.files }}"
Moreover, you don't even need to loop. You can extract the info directly from the existing result:
- set_fact:
    capabilities: "{{ CE_BASE_capabilities.files | map(attribute='path') | list }}"
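If the list is only needed for the include loop, the intermediate fact can be skipped entirely; a minimal sketch using the same paths as in the question:
- name: Include CE_BASE
  include_tasks: /opt/netsec/ansible/orchestration/process_capabilities_CE_BASE.yml
  loop: "{{ CE_BASE_capabilities.files | map(attribute='path') | list }}"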

Related

Using regex_search to extract a string between a string and a whitespace

I would like to extract the version number 1.0.1 from the following string in Ansible. I tried using regex_search to extract the string between ide- and a whitespace, but this is what I end up getting: ide-1.0.1 ; argv[]=/home/bin/ide-1.0.1 start. I tried \s instead of \\s and it doesn't work. How should I fix the regex pattern? Any help would be appreciated!
ExecStart={ path=/home/bin/ide-1.0.1 ; argv[]=/home/bin/ide-1.0.1 start }
- name: Check if ide is active
  command: systemctl show ide.service --property=ExecStart
  register: version_check
  ignore_errors: yes
- name: Set fact
  set_fact:
    version: "{{ version_check.stdout | regex_search('ide-(.*)\\s') }}"
- name: Debug version
  debug:
    msg: "{{ version }}"
Use regex_replace, e.g.
- set_fact:
    version: "{{ _path.split('-').1 }}"
  vars:
    _regex: '^(.*)=(.*)=(.*);(.*)$'
    _replace: '\3'
    _path: "{{ version_check.stdout|regex_replace(_regex, _replace)|trim }}"
gives
version: 1.0.1
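If you prefer to stay with regex_search, one option (a sketch, assuming the version consists only of digits and dots) is to match just the ide-<version> token and strip the prefix afterwards, which avoids the greedy .*\s problem:
- name: Set fact
  set_fact:
    # 'ide-[0-9.]+' matches 'ide-1.0.1'; splitting on '-' and taking the last part yields '1.0.1'
    version: "{{ (version_check.stdout | regex_search('ide-[0-9.]+')).split('-') | last }}"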
Parsing semi-structured text with ansible.utils.cli_parse is also an option, e.g.
- ansible.utils.cli_parse:
    text: "{{ version_check.stdout }}"
    parser:
      name: ansible.netcommon.native
      template_path: templates/property.yaml
    set_fact: property
and the template
shell> cat templates/property.yaml
---
- example: 'ExecStart={ path=/home/bin/ide-1.0.1 ; argv[]=/home/bin/ide-1.0.1 start }'
  getval: '(?P<property>\S+)={\s*(?P<key1>\S+)=(?P<val1>\S+)\s*;\s*(?P<key2>\S+)=(?P<val2>\S+)\s*(?P<state>\S+)\s*}'
  result:
    "{{ property }}":
      "{{ key1 }}": "{{ val1 }}"
      "{{ key2 }}": "{{ val2 }}"
      state: "{{ state }}"
gives
property:
  ExecStart:
    argv[]: /home/bin/ide-1.0.1
    path: /home/bin/ide-1.0.1
    state: start
Then parse the version, e.g.
- set_fact:
    version: "{{ (property.ExecStart.path|basename).split('-').1 }}"
gives
version: 1.0.1

Ansible merge list with dict

I have the playbook below, in which I search for all vars starting with static_routes__ and then merge them.
---
- hosts: localhost
  gather_facts: no
  vars:
    static_routes__host:
      management:
        - address: '0.0.0.0/0'
          next_hop: '192.168.0.1'
    static_routes__lab:
      management:
        - address: '1.1.1.1/32'
          next_hop: '192.168.0.1'
    static_routes__test:
      test:
        - address: '8.8.8.8/32'
          next_hop: '192.168.2.1'
  tasks:
    - set_fact:
        static_routes: "{{ static_routes | default({}) | combine(lookup('vars', item, default={}), recursive=True, list_merge='append') }}"
      loop: "{{ query('varnames', 'static_routes__') }}"
    - name: Output static_routes
      debug:
        var: static_routes
The above will result in:
TASK [Output static_routes] ****************************************************
ok: [localhost] => {
    "static_routes": {
        "management": [
            {
                "address": "0.0.0.0/0",
                "next_hop": "192.168.0.1"
            },
            {
                "address": "1.1.1.1/32",
                "next_hop": "192.168.0.1"
            }
        ],
        "test": [
            {
                "address": "8.8.8.8/32",
                "next_hop": "192.168.2.1"
            }
        ]
    }
}
However, the list_merge option is only available in Ansible versions newer than 2.9, which are currently not available to me due to company policies. I'm looking for a way to replicate the above output in Ansible <= 2.9.
With the task below I'm able to sort of reproduce it, but it only allows one list item.
- set_fact:
    static_routes: "{{ static_routes | default({}) | combine({vrf: route | default([]) }) }}"
  loop: "{{ query('varnames', 'static_routes__') }}"
  vars:
    vrf: "{{ lookup('dict', lookup('vars', item)).key }}"
    route: "{{ lookup('dict', lookup('vars', item)).value }}"
    subnet: "{{ lookup('dict', lookup('vars', item)).value.0.address }}"
    next_hop: "{{ lookup('dict', lookup('vars', item)).value.0.next_hop }}"
- name: Output static_routes
  debug:
    var: static_routes
Found the solution:
- set_fact:
    static_routes_list: "{{ static_routes_list | default({}) | combine({item: lookup('vars', item)}) }}"
  loop: "{{ query('varnames', 'static_routes__') }}"
- set_fact:
    static_routes: "{{ static_routes|
                       default({})|
                       combine({item.0: item.1|json_query('[].value')|flatten|unique})
                    }}"
  loop: "{{ static_routes_list|
            dict2items|
            json_query('[*].value')|
            map('dict2items')|list|flatten|
            groupby('key')
         }}"
This one seems simpler
- set_fact:
    _list: "{{ _list|default([]) + [lookup('vars', item)] }}"
  loop: "{{ query('varnames', 'static_routes__') }}"
- set_fact:
    static_routes: "{{ static_routes|default({})|
                       combine({item.0: item.1|json_query('[].value')|flatten}) }}"
  loop: "{{ _list|map('dict2items')|map('first')|groupby('key') }}"
gives
static_routes:
  management:
    - address: 0.0.0.0/0
      next_hop: 192.168.0.1
    - address: 1.1.1.1/32
      next_hop: 192.168.0.1
  test:
    - address: 8.8.8.8/32
      next_hop: 192.168.2.1
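If json_query (and its jmespath dependency) is not available either, the same projection can be done with plain Jinja2 filters; a sketch of the second task under that assumption:
- set_fact:
    # item.1 is the list of {key, value} pairs produced by groupby('key');
    # map(attribute='value') pulls the route lists out, flatten merges them
    static_routes: "{{ static_routes | default({}) |
                       combine({item.0: item.1 | map(attribute='value') | list | flatten}) }}"
  loop: "{{ _list | map('dict2items') | map('first') | groupby('key') }}"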

Is it possible to loop into two different lists in the same playbook (Ansible)?

I'm writing a Playbook Ansible and I want to loop into two different lists.
I know that I can use with_items to loop over a list, but can I use with_items twice in the same playbook?
Here is what I want to do:
- name: Deploy the network in fabric 1 and fabric 2
  tags: [merged]
  role_network:
    config:
      - net_name: "{{ networkName }}"
        vrf_name: "{{ vrf }}"
        net_id: 30010
        net_template: "{{ networkTemplate }}"
        net_extension_template: "{{ networkExtensionTemplate }}"
        vlan_id: "{{ vlan }}"
        gw_ip_subnet: "{{ gw }}"
        attach: "{{ item }}"
        deploy: false
    fabric: "{{ item }}"
    state: merged
  with_items:
    - "{{ attachs }}"
    - "{{ fabric }}"
  register: networks
So for the first call, I want to use the playbook with fabric[0] and attachs[0].
For the second call, I want to use the playbook with fabric[1] and attachs[1].
And so on...
Can someone help me please?
What you are looking to achieve is what with_together used to do, and that is now recommended to be achieved with the zip filter.
So: loop: "{{ attachs | zip(fabric) | list }}".
Where the element of the first list (attachs) would be item.0 and the element of the second list (fabric) would be item.1.
- name: Deploy the network in fabric 1 and fabric 2
  tags: [merged]
  role_network:
    config:
      - net_name: "{{ networkName }}"
        vrf_name: "{{ vrf }}"
        net_id: 30010
        net_template: "{{ networkTemplate }}"
        net_extension_template: "{{ networkExtensionTemplate }}"
        vlan_id: "{{ vlan }}"
        gw_ip_subnet: "{{ gw }}"
        attach: "{{ item.0 }}"
        deploy: false
    fabric: "{{ item.1 }}"
    state: merged
  loop: "{{ attachs | zip(fabric) | list }}"
  register: networks
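For reference, the legacy with_together form mentioned above pairs the two lists the same way; a minimal sketch that just prints the pairing:
- debug:
    msg: "attach={{ item.0 }} fabric={{ item.1 }}"
  with_together:
    - "{{ attachs }}"
    - "{{ fabric }}"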

set_facts with dict as argument of a loop

I'd like to obtain the list of bridged interfaces grouped by master like this:
brv100:
  - vnet0
  - eth0
brv101:
  - vnet1
  - eth1
I want to use native json output from the shell commands.
The only thing I managed to do is to get a predefined number of interfaces like this:
- hosts: localhost
  gather_facts: no
  tasks:
    - shell:
        cmd: ip -details -pretty -json link show type bridge
      register: list_bridges
    - set_fact:
        bridges: "{{ list_bridges.stdout }}"
    - debug:
        msg: "{{ bridges | map(attribute='ifname') | list }}"
    - name: get json
      shell:
        cmd: ip -details -pretty -json link show master "{{ ifname }}"
      with_items: "{{ bridges | map(attribute='ifname') | list }}"
      loop_control:
        loop_var: ifname
      register: list_interfaces
    - set_fact:
        interfaces: "{{ list_interfaces.results | map(attribute='stdout') | list }}"
    - set_fact:
        toto: "{{ interfaces.1 }} + {{ interfaces.2 }}"
    - debug:
        msg: "{{ toto | map(attribute='ifname') | list }}"
Now if I want to do the same with a loop:
- set_fact:
    toto: " {{ item | default([]) }}+ {{ item | default([]) }}.{{ idx }} "
  loop: "{{ interfaces }}"
  loop_control:
    label: "{{ item }}"
    index_var: idx
- debug: var=toto
The result doesn't seem to be a list of lists but a list of strings, and I can't extract the 'ifname' values with a simple debug:
- debug:
    msg: "{{ toto | map(attribute='ifname') | list }}"
What am I supposed to do to get the benefit of the native JSON output and get a simple list of bridged interfaces (like brctl show used to provide)?
The lists of bridged interfaces grouped by the master are available in ansible_facts. The task below sets the dictionary of the bridges and bridged interfaces
- set_fact:
    bridges: "{{ dict(ansible_facts|
                      dict2items|
                      json_query('[?value.type == `bridge`].[key, value.interfaces]')) }}"
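For the two bridges from the question, the resulting dictionary would look roughly like this (illustrative output, assuming facts have been gathered for the host):
bridges:
  brv100:
    - vnet0
    - eth0
  brv101:
    - vnet1
    - eth1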
Q: "Manage to get the same result manipulating JSON data."
A: The output of the ip -json ... command is a JSON-formatted string, which must be converted to a dictionary in Ansible by the from_yaml filter (JSON is a subset of YAML). For example, the tasks below give the same result:
vars:
  my_debug: false
tasks:
  - name: Get bridges names
    command: "ip -details -json link show type bridge"
    register: list_bridges
  - set_fact:
      bridges: "{{ list_bridges.stdout|
                   from_yaml|
                   map(attribute='ifname')|
                   list }}"
  - debug:
      var: bridges
    when: my_debug
  - name: Get bridges interfaces
    command: "ip -details -json link show master {{ item }}"
    loop: "{{ bridges }}"
    register: list_interfaces
  - set_fact:
      bridges_interfaces: "{{ list_interfaces.results|
                              json_query('[].stdout')|
                              map('from_yaml')|
                              list }}"
  - debug:
      msg: "{{ msg.split('\n') }}"
    vars:
      msg: "{{ item|to_nice_yaml }}"
    loop: "{{ bridges_interfaces }}"
    loop_control:
      label: "{{ item|json_query('[].ifname') }}"
    when: my_debug
  - name: Set dictionary of bridges
    set_fact:
      bridges_dict: "{{ bridges_dict|
                        default({})|
                        combine({item.0: item.1|json_query('[].ifname')}) }}"
    loop: "{{ bridges|zip(bridges_interfaces)|list }}"
    loop_control:
      label: "{{ item.1|json_query('[].ifname') }}"
  - debug:
      var: bridges_dict
Template to write the bridges to a file
{% for k,v in bridges_dict.items() %}
{{ k }}:
{% if v is iterable %}
{% for i in v %}
- {{ i }}
{% endfor %}
{% endif %}
{% endfor %}
- name: Write the bridges to file
  template:
    src: bridges.txt.j2
    dest: bridges.txt
The file bridges.txt will be created on the remote host running the task.
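As an alternative to the template, the same dictionary could be written with the copy module and the to_nice_yaml filter (a sketch; the exact formatting of the output differs slightly from the template above):
- name: Write the bridges to file
  copy:
    # serialises bridges_dict directly as YAML
    content: "{{ bridges_dict | to_nice_yaml }}"
    dest: bridges.txt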

Ansible running one command out of many locally in a loop

Background,
I'm trying to create a loop that iterates over a hash read from the qa.yml file. For every user in the list it should find a file (public key) on the local server; once the file is found it should create the user on the remote machine and copy the public key to authorized_keys on the remote machine.
I'm trying to implement it as an iteration, so that in order to update keys or add more users' keys I only need to change the .yml list and place the public key file in the proper place. However, I can't get the local_action + find combination working.
---
- hosts: tag_Ansible_CLOUD_QA
  vars_files:
    - ../users/qa.yml
    - ../users/groups.yml
  remote_user: ec2-user
  sudo: yes
  tasks:
    - name: Create groups
      group: name="{{ item.key }}" state=present
      with_dict: "{{ user_groups }}"
    - name: Create remote users QA
      user: name="{{ item.key }}" comment="user" group=users groups="qa"
      with_dict: "{{ qa_users }}"
    - name: Erase previous authorized keys QA
      shell: rm -rf /home/"{{ item.key }}"/.ssh/authorized_keys
      with_dict: "{{ qa_users }}"
    - name: Add public keys to remote users QA
      local_action:
        find: paths="{{ '/opt/pubkeys/2016/q2/' }}" patterns="{{ item.key }}"
      register: result
      authorized_key: user="{{ item.key }}" key="{{ lookup('file', result) }}"
      with_dict: "{{ qa_users }}"
Hash:
qa_users:
  user1:
    name: User 1
  user2:
    name: User 2
You're cramming two tasks into a single task item in that final task so Ansible isn't going to like that.
Splitting the task properly should work:
- name: Find keys
  local_action: find paths="{{ '/opt/pubkeys/2016/q2/' }}" patterns="{{ item.key }}"
  register: result
  with_dict: "{{ qa_users }}"
- name: Add public keys to remote users QA
  authorized_key: user="{{ item.0.key }}" key="{{ lookup('file', item.1.stdout) }}"
  with_together:
    - "{{ qa_users }}"
    - result
The second task then loops over the dictionary and the result from the previous task using a with_together loop which advances through the two data structures in step.
However, this looks like a less than ideal way to solve your problem.
If you look at what your tasks here are trying to do you could replace it more simply with something like this:
- name: Add public keys to remote users QA
  authorized_key: user="{{ item.key }}" key="{{ lookup('file', '/opt/pubkeys/2016/q2/' + item.key ) }}"
  with_dict:
    - "{{ qa_users }}"
You can also remove the third task, where you cleared down the user's previous keys, by simply using the exclusive parameter of the authorized_key module:
- name: Add public keys to remote users QA
  authorized_key: user="{{ item.key }}" key="{{ lookup('file', '/opt/pubkeys/2016/q2/' + item.key ) }}" exclusive=yes
  with_dict:
    - "{{ qa_users }}"
Also, it might be a case of you trying to simplify things in an odd way for the question, but the data structures you are using are less than ideal right now, so I'd take a look at them if that's really what they look like.
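For illustration only (this is an assumption about what a flatter layout might look like, not something from the posts), a list of user entries tends to be easier to loop over than a dict keyed by username:
# hypothetical qa.yml layout, for illustration
qa_users:
  - name: user1
    comment: User 1
  - name: user2
    comment: User 2
Tasks would then use loop: "{{ qa_users }}" and reference item.name / item.comment instead of item.key.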
Thank you @ydaetskcoR for sharing the right approach. The following solution did the dynamic public key distribution for me, with the key files residing on the local machine and provisioned to the remote EC2 machines:
---
- hosts: tag_Ansible_CLOUD_QA
  vars_files:
    - ../users/groups.yml
    - ../users/qa.yml
  remote_user: ec2-user
  become: yes
  become_method: sudo
  tasks:
    - name: Find user matching key files
      become_user: jenkins
      local_action: find paths="{{ '/opt/pubkeys/2016/q1/' }}" patterns="{{ '*' + item.key + '*' }}"
      register: pub_key_files
      with_dict: "{{ qa_users }}"
    - name: Create groups
      group: name="{{ item.key }}" state=present
      with_dict: "{{ user_groups }}"
    - name: Allow test users to have passwordless sudo
      lineinfile: "dest=/etc/sudoers state=present regexp='^%{{ item.key }} ALL=.*ALL.* NOPASSWD: ALL' line='%{{ item.key }} ALL=(ALL) NOPASSWD: ALL'"
      with_dict: "{{ user_groups }}"
    - name: Create remote users qa
      user: name="{{ item.key }}" comment="user" group=users groups="qa"
      with_dict: "{{ qa_users }}"
    - name: Add public keys to remote users qa
      # debug: "msg={{ 'User:' + item.item.key + ' KeyFile:' + item.files.0.path }}"
      authorized_key: user="{{ item.item.key }}" key="{{ lookup('file', item.files.0.path) }}" exclusive=yes
      with_items: "{{ pub_key_files.results }}"
This is the command line to get dynamic inventory based on EC2 Tags:
ansible-playbook -i inventory/ec2.py --private-key <path to your key file> --extra-vars '{"QUATER":"q1"}' credentials/distribute-public-key.yml
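Presumably the QUATER extra-var is meant to select the key directory; a hypothetical way to wire it into the find task (an assumption on my part, since the playbook above hard-codes q1):
- name: Find user matching key files
  become_user: jenkins
  # QUATER comes from --extra-vars '{"QUATER":"q1"}' on the command line
  local_action: find paths="{{ '/opt/pubkeys/2016/' + QUATER + '/' }}" patterns="{{ '*' + item.key + '*' }}"
  register: pub_key_files
  with_dict: "{{ qa_users }}"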