Ansible trim extension name - list

I want to get a list of file names without the full path, and a second list without the extension, into two different variables. I succeeded with the first, but not with the second.
I have a variable found_files which contains the list of files in a folder:
- name: Get only file names
  set_fact:
    file_names: "{{ found_files['files'] | map(attribute='path') | map('basename') | list }}"

- name: Get name without extension
  set_fact:
    file_names_without_extension: "{{ item | splitext | first | splitext | first }}"
  with_list: "{{ file_names }}"
With the first task I get the correct file names without the path, but with the second task I only get a single file name without the extension; I don't manage to get the whole list.

set_fact combined with a loop overwrites the variable on each iteration, so only one item survives. Simply map splitext and first over the whole list instead
- set_fact:
    file_names_without_extension: "{{ file_names |
                                      map('splitext') |
                                      map('first') |
                                      list }}"
  vars:
    file_names:
      - test1.txt
      - test2.txt
      - test3.txt
gives
file_names_without_extension:
  - test1
  - test2
  - test3
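The splitext filter wraps Python's os.path.splitext, so the mapped pipeline behaves like this minimal Python sketch of the same transformation:

```python
import os.path

file_names = ["test1.txt", "test2.txt", "test3.txt"]

# Equivalent of: file_names | map('splitext') | map('first') | list
file_names_without_extension = [os.path.splitext(n)[0] for n in file_names]
print(file_names_without_extension)  # ['test1', 'test2', 'test3']
```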

Looking to see if a key from one set of imported variables matches another so its value can be procured

I have roughly formatted yml files with key/value pairs in them. I then imported the values of both of these files successfully into a running playbook using the include_vars module.
Now, I want to be able to compare the value of the key/value pair from file/list 1, to all of the keys of file/list 2. Then finally when there is a match, print and preferably save/register the value of the matching key from file/list 2.
Essentially I am comparing a machine name to an IP list to try to grab the IP the machine needs out of that list. The name is "dynamic" and is different each time the playbook is run, as file/list 1 is always dynamically populated on each run.
Examples:
file/list 1 contents
machine_serial: m60
s_iteration: a
site_name: dud
t_number: '001'
file/list 2 contents
m51: 10.2.5.201
m52: 10.2.5.202
m53: 10.2.5.203
m54: 10.2.5.204
m55: 10.2.5.205
m56: 10.2.5.206
m57: 10.2.5.207
m58: 10.2.5.208
m59: 10.2.5.209
m60: 10.2.5.210
m61: 10.2.5.211
In a nutshell, I want the file/list 1 machine_serial key, whose value is currently m60, to find its key match in file/list 2, and then print and/or preferably register its value of 10.2.5.210.
What I've tried so far:
Playbook:
- name: IP gleaning comparison.
  hosts: localhost
  remote_user: ansible
  become: yes
  become_method: sudo
  vars:
    ansible_ssh_pipelining: yes
  tasks:
    - name: Try to do a variable import of the file1 file.
      include_vars:
        file: ~/active_ct-scanner-vars.yml
        name: ctfile1_vars
      become: no
    - name: Try to do an import of file2 file for lookup comparison to get an IP match.
      include_vars:
        file: ~/machine-ip-conversion.yml
        name: ip_vars
      become: no
    - name: Best, but failing attempt to get the value of the match-up IP.
      debug:
        msg: "{{ item }}"
      when: ctfile1_vars.machine_serial == ip_vars
      with_items:
        - "{{ ip_vars }}"
Every task except the final one works perfectly.
My failed output final task:
TASK [Best, but failing attempt to get the value of the match-up IP.] ***********************************************************************************
skipping: [localhost] => (item={'m51': '10.200.5.201', 'm52': '10.200.5.202', 'm53': '10.200.5.203', 'm54': '10.200.5.204', 'm55': '10.200.5.205', 'm56': '10.200.5.206', 'm57': '10.200.5.207', 'm58': '10.200.5.208', 'm59': '10.200.5.209', 'm60': '10.200.5.210', 'm61': '10.200.5.211'})
skipping: [localhost]
What I hoped for hasn't happened, it simply skips the task, and doesn't iterate over the list like I was hoping, so there must be a problem somewhere. Hopefully there is an easy solution to this I just missed. What could be the correct answer?
Given the files
shell> cat active_ct-scanner-vars.yml
machine_serial: m60
s_iteration: a
site_name: dud
t_number: '001'
shell> cat machine-ip-conversion.yml
m58: 10.2.5.208
m59: 10.2.5.209
m60: 10.2.5.210
m61: 10.2.5.211
Read the files
- include_vars:
    file: active_ct-scanner-vars.yml
    name: ctfile1_vars

- include_vars:
    file: machine-ip-conversion.yml
    name: ip_vars
Q: "Compare the machine name to an IP list and grab the IP."
A: Both variables ip_vars and ctfile1_vars are dictionaries. Use ctfile1_vars.machine_serial as the index into ip_vars:
match_up_IP: "{{ ip_vars[ctfile1_vars.machine_serial] }}"
gives
match_up_IP: 10.2.5.210
Example of a complete playbook for testing
- hosts: localhost
  gather_facts: false
  vars:
    match_up_IP: "{{ ip_vars[ctfile1_vars.machine_serial] }}"
  tasks:
    - include_vars:
        file: active_ct-scanner-vars.yml
        name: ctfile1_vars
    - include_vars:
        file: machine-ip-conversion.yml
        name: ip_vars
    - debug:
        var: match_up_IP
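The answer's lookup is plain dictionary indexing, which is also why the original with_items/when attempt skipped: it compared the serial string against the whole ip_vars dictionary, which can never be equal. In Python terms (hypothetical dicts abbreviated from the two var files):

```python
# Hypothetical dicts mirroring the two included var files
ctfile1_vars = {"machine_serial": "m60", "s_iteration": "a",
                "site_name": "dud", "t_number": "001"}
ip_vars = {"m59": "10.2.5.209", "m60": "10.2.5.210", "m61": "10.2.5.211"}

# Equivalent of: "{{ ip_vars[ctfile1_vars.machine_serial] }}"
match_up_ip = ip_vars[ctfile1_vars["machine_serial"]]
print(match_up_ip)  # 10.2.5.210
```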

Remove a given hostname+URL from a line containing 3, separated by commas, in any position, using Ansible playbook

Scenario: I have a configuration file for etcd, and one of the nodes in the cluster has failed. I know the name of the failed node, but not its IP address nor the names of the other two hosts in the cluster. I need to write an Ansible play to remove the failed node from a line in the etcd config file, (presumably) using the Ansible builtin replace which (I believe) uses Python as its RE engine.
I have managed to create something that works, with one caveat: If the failed host is the third one listed, the RE leaves a dangling comma at the end of the line. I'm hoping that someone smarter than I am can edit or replace my regex to cover all three positional cases.
The hostname of the failed node is passed into the playbook as a variable, so {{ failed_node }} would be substituted for the actual hostname of the failed node, let's call it app-failedhost-eeeeeeeeee.node.consul in my example.
Given a regex
((?:^ETCD_INITIAL_CLUSTER=)(?:[a-z0-9-.]{15,}=https:\/\/[0-9]+(?:\.[0-9]+){3}:2380,?){0,2})(,?{{ failed_node }}=https:\/\/[0-9]+(?:[.][0-9]+){3}:2380,?)((?:,?[a-z0-9-.]{15,}=https:\/\/[0-9]+(?:\.[0-9]+){3}:2380,?){0,2})
which when being actually run would be (if failed_node=app-failedhost-eeeeeeeeee.node.consul)
((?:^ETCD_INITIAL_CLUSTER=)(?:[a-z0-9-.]{15,}=https:\/\/[0-9]+(?:\.[0-9]+){3}:2380,?){0,2})(,?app-failedhost-eeeeeeeeee.node.consul=https:\/\/[0-9]+(?:[.][0-9]+){3}:2380,?)((?:,?[a-z0-9-.]{15,}=https:\/\/[0-9]+(?:\.[0-9]+){3}:2380,?){0,2})
if run against one of these lines,
ETCD_INITIAL_CLUSTER=app-failedhost-eeeeeeeeee.node.consul=https://192.168.18.39:2380,app-instance-de24a5c1aefb.node.consul=https://192.168.18.92:2380,app-instance-6cc297ab3cc.node.consul=https://192.168.18.11:2380
ETCD_INITIAL_CLUSTER=app-instance-de24a5c1aefb.node.consul=https://192.168.18.92:2380,app-failedhost-eeeeeeeeee.node.consul=https://192.168.18.39:2380,app-instance-6cc297ab3cc.node.consul=https://192.168.18.11:2380
ETCD_INITIAL_CLUSTER=app-instance-de24a5c1aefb.node.consul=https://192.168.18.92:2380,app-instance-6cc297ab3cc.node.consul=https://192.168.18.11:2380,app-failedhost-eeeeeeeeee.node.consul=https://192.168.18.39:2380
(which if you simplify, is ETCD_INITIAL_CLUSTER= followed by three pairs of values, comma-separated, FQDN=https://[IP address]:2380 with the failed node in position 0, 1, or 2)
and the replace: is '\1\3', you get
ETCD_INITIAL_CLUSTER=app-instance-de24a5c1aefb.node.consul=https://192.168.18.92:2380,app-instance-6cc297ab3cc.node.consul=https://192.168.18.11:2380
ETCD_INITIAL_CLUSTER=app-instance-de24a5c1aefb.node.consul=https://192.168.18.92:2380,app-instance-6cc297ab3cc.node.consul=https://192.168.18.11:2380
ETCD_INITIAL_CLUSTER=app-instance-de24a5c1aefb.node.consul=https://192.168.18.92:2380,app-instance-6cc297ab3cc.node.consul=https://192.168.18.11:2380,
That's correct for the first two cases (failed node in first or second position) but if the failed node is in the third (last) position as in the third example line, then the final comma is left behind.
https://regex101.com/r/f635Wv/1 has the same examples as above.
Playbook, in case the full situation is not clear from the regex above, called node-cleanup.yaml, is called with ansible-playbook node-cleanup.yaml --extra-vars "failed_node=app-failedhost-eeeeeeeeee.node.consul" in the above examples:
---
- name: Clean up failed etcd node
  hosts: etcd
  become: true
  tasks:
    - name: Remove failed host from ETCD_INITIAL_CLUSTER line
      replace:
        path: "/etc/etcd/etcd.conf"
        regexp: '((?:^ETCD_INITIAL_CLUSTER=)(?:[a-z0-9-.]{15,}=https:\/\/[0-9]+(?:\.[0-9]+){3}:2380,?){0,2})(,?{{ failed_node }}=https:\/\/[0-9]+(?:[.][0-9]+){3}:2380,?)((?:,?[a-z0-9-.]{15,}=https:\/\/[0-9]+(?:\.[0-9]+){3}:2380,?){0,2})'
        replace: '\1\3'
but I think that part is fine, I just need some help with that beast of a regex.
If the line in the file before is simplified as
ETCD_INITIAL_CLUSTER=host1=IP,host2=IP,host3=IP
and I pass in “host3” for {{ failed_node }}, then I want
ETCD_INITIAL_CLUSTER=host1=IP,host2=IP
to come out, but what I actually get is
ETCD_INITIAL_CLUSTER=host1=IP,host2=IP,
(note the trailing comma)
Given the file
shell> cat test.conf
ETCD_INITIAL_CLUSTER=host1=IP,host2=IP,host3=IP
and the variable
failed_node: host3
Get the line from the configuration file. There are many options depending on whether the file is local or remote, e.g.
- shell: cat test.conf | grep ETCD_INITIAL_CLUSTER
  register: result
  check_mode: false

- set_fact:
    eic: "{{ result.stdout }}"
gives
eic: ETCD_INITIAL_CLUSTER=host1=IP,host2=IP,host3=IP
Split the key/value pair and create a new value by rejecting the failed node
- set_fact:
    _value: "{{ eic | regex_replace('^(.*?)=(.*)$', '\\2') }}"
    _key: "{{ eic | regex_replace('^(.*?)=(.*)$', '\\1') }}"

- set_fact:
    _new_value: "{{ _hip | reject('search', failed_node) }}"
  vars:
    _hip: "{{ _value.split(',') }}"
gives
_new_value:
  - host1=IP
  - host2=IP
Now update the key in the configuration file, e.g.
- replace:
    path: test.conf
    regexp: '{{ _key }}\s*=\s*{{ _value }}'
    replace: '{{ _key }}={{ _new_value | join(",") }}'
Running the playbook in check mode (--check --diff) gives
+++ after: test.conf
@@ -1 +1 @@
-ETCD_INITIAL_CLUSTER=host1=IP,host2=IP,host3=IP
+ETCD_INITIAL_CLUSTER=host1=IP,host2=IP
The procedure can be optimized. The tasks below do the same job
- shell: cat test.conf | grep ETCD_INITIAL_CLUSTER
  register: result
  check_mode: false

- replace:
    path: test.conf
    regexp: '{{ _key }}\s*=\s*{{ _value }}'
    replace: '{{ _key }}={{ _new_value | join(",") }}'
  vars:
    _key: "{{ result.stdout | regex_replace('^(.*?)=(.*)$', '\\1') }}"
    _value: "{{ result.stdout | regex_replace('^(.*?)=(.*)$', '\\2') }}"
    _new_value: "{{ _value.split(',') | reject('search', failed_node) }}"
There are other options for getting the line from the configuration file. For example, if the file is local, the Ansible way would be the ini lookup plugin, e.g.
- debug:
    msg: "{{ lookup('ini', 'ETCD_INITIAL_CLUSTER type=properties file=test.conf') }}"
gives the value of ETCD_INITIAL_CLUSTER
msg: host1=IP,host2=IP,host3=IP
This would further reduce the job to a single task
- replace:
    path: test.conf
    regexp: '{{ _key }}\s*=\s*{{ _value }}'
    replace: '{{ _key }}={{ _new_value | join(",") }}'
  vars:
    _key: ETCD_INITIAL_CLUSTER
    _value: "{{ lookup('ini', _key ~ ' type=properties file=test.conf') }}"
    _new_value: "{{ _value.split(',') | reject('search', failed_node) }}"
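The split/reject/join pipeline in these tasks can be sketched in Python using the simplified test.conf line from above. Note that Jinja2's reject('search', ...) does a regex search; for a plain hostname it behaves like the substring test used here:

```python
failed_node = "host3"
eic = "ETCD_INITIAL_CLUSTER=host1=IP,host2=IP,host3=IP"

# Split only on the first '=' to separate key from value,
# like regex_replace('^(.*?)=(.*)$', ...) with the lazy first group
key, value = eic.split("=", 1)

# Equivalent of: _value.split(',') | reject('search', failed_node)
new_value = [pair for pair in value.split(",") if failed_node not in pair]

new_line = key + "=" + ",".join(new_value)
print(new_line)  # ETCD_INITIAL_CLUSTER=host1=IP,host2=IP
```

Because the rejected element is dropped before the join, no dangling comma can appear regardless of the failed node's position.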

Ansible - Read a file, extract specific line, extract a column and assign to variable

I need some help with extracting a specific line from a file, extracting a column from it, assigning that to a variable, and then using that variable in the next task.
I have the file with this format on the confluent broker server
Save the key. It cannot be retrieved later.
+------------+----------------------------------------------+
| Enc Key | omykeyvaluecontinuousstringgoeshereandmakelong= |
+------------+----------------------------------------------+
I am trying to write an Ansible task that will read the third line and then extract the key into a variable, which I need to export as an environment variable. In the next task I will be executing a confluent command as a shell command.
I tried something like the below, but it doesn't work; I get an error:
vars:
  ansible_ssh_extra_args: "-o StrictHostKeyChecking=no"
  ansible_host_key_checking: false
  contents: "{{ lookup('file', '/etc/kafka/info.txt') }}"
  contents2: "{{ lookup('file', '/etc/kafka/info.txt').splitlines() }}"

- name: set fact
  set_fact:
    extract_key: "{{ contents.split('\n')[2] }}"

- name: Display output
  debug: msg="{{ extract_key }}"
And then extract the key value from extract_key variable
How can I achieve this?
Thank you
The task below does the job
- set_fact:
    extract_key: "{{ contents.split('\n').2.split('|').2 | trim }}"
gives
extract_key: omykeyvaluecontinuousstringgoeshereandmakelong=
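The filter chain splits on newlines, takes the third line, splits that on '|', takes the third field, and trims the whitespace. A Python sketch of the same chain:

```python
contents = """Save the key. It cannot be retrieved later.
+------------+----------------------------------------------+
| Enc Key | omykeyvaluecontinuousstringgoeshereandmakelong= |
+------------+----------------------------------------------+"""

# Equivalent of: contents.split('\n').2.split('|').2 | trim
# split('|') on the third line yields ['', ' Enc Key ', ' <key> ', ...],
# so index 2 is the key field
extract_key = contents.split("\n")[2].split("|")[2].strip()
print(extract_key)  # omykeyvaluecontinuousstringgoeshereandmakelong=
```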
Alternatively, a shell pipeline works, provided the text layout is fixed:
- name: capturing Key
  shell: echo {{ contents }} | head -3 | tail -1 | sed 's/|/\n/g' | sed -n 3p
  register: extract_key

- name: Display output
  debug: msg="{{ extract_key.stdout }}"
This returns omykeyvaluecontinuousstringgoeshereandmakelong=

Ansible Filter regex_search does not Exist

I have this snipped of code from our playbook:
- name: Set version
  set_fact:
    my_version: "My version is {{ my_file.stdout | regex_search('[0-9\.]+') }}"
... where my_file is 'program_1.2.3_install.exe'
It returns that the filter regex_search does not exist.
We are running Ansible 2.0.0.2
Does anybody know how to make this regex work?
Thanks!
Given the variable
my_file: 'program_1.2.3_install.exe'
The task
- set_fact:
    my_version: "{{ my_file | regex_replace(myregex, myreplace) }}"
  vars:
    myregex: '^(.*?)([0-9\.]+)(.*)$'
    myreplace: '\2'

- debug:
    var: my_version
gives
"my_version": "1.2.3"
It's also possible to use split(). The task below gives the same result
- set_fact:
    my_version: "{{ my_file.split('_').1 }}"

- debug:
    var: my_version
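Both variants mirror plain Python string handling; a sketch of the regex_replace and split approaches (re.sub replaces the whole match with group 2):

```python
import re

my_file = "program_1.2.3_install.exe"

# Equivalent of: my_file | regex_replace('^(.*?)([0-9\.]+)(.*)$', '\\2')
# The lazy first group lets the digit/dot group capture '1.2.3'
my_version = re.sub(r'^(.*?)([0-9.]+)(.*)$', r'\2', my_file)
print(my_version)  # 1.2.3

# The split() alternative from the second task
assert my_file.split('_')[1] == "1.2.3"
```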

Ansible regex_replace or regex_search

I am trying to match the "OK" from the following output with a regex and store it in a variable:
System 'server.mylabserver.com'
status OK
monitoring status Monitored
monitoring mode active
on reboot start
load average [0.00] [0.01] [0.05]
cpu 0.1%us 0.1%sy 0.0%wa
memory usage 367.9 MB [20.0%]
swap usage 0 B [0.0%]
uptime 2h 10m
boot time Mon, 02 Apr 2018 06:51:01
data collected Mon, 02 Apr 2018 09:01:02
Ansible code with "regex_replace" that I've tried:
- name: Fetch the monit status
  shell: "monit status | tail -n +3"
  register: monit_status_raw
  tags: basic_monitoring

- name: Extract monit variables
  set_fact:
    vmstatus: "{{ monit_status_raw | regex_replace('^\s\s([a-z]*)\s+', '\\1:') }}"
Error:
The offending line appears to be:
set_fact:
vmstatus: "{{ monit_status_raw | regex_replace('^\s\s([a-z]*)\s+', '\\1')}}"
^ here
We could be wrong, but this one looks like it might be an issue with
missing quotes. Always quote template expression brackets when they
start a value. For instance:
with_items:
- {{ foo }}
Should be written as:
with_items:
- "{{ foo }}"
Ansible code with "regex_search" that I've tried:
- name: Fetch the monit status
  shell: "monit status | tail -n +3"
  register: monit_status_raw

- name: Extract monit variables
  set_fact:
    vmstatus: "{{ monit_status_raw | regex_search('^\s\sstatus\s+(.*)$') }}"
Error:
The offending line appears to be:
set_fact:
vmstatus: "{{ monit_status_raw | regex_search('^\s\sstatus\s+(.*)$') }}"
^ here
We could be wrong, but this one looks like it might be an issue with
missing quotes. Always quote template expression brackets when they
start a value. For instance:
with_items:
- {{ foo }}
Should be written as:
with_items:
- "{{ foo }}"
Any idea what's wrong with the regexes?
Thank you,
Dan
I think if you'd like to use regex_search, you need to give it a string and think about escaping characters, and then use some construction such as:
with_items:
  - "{{ monit_status_raw.stdout_lines }}"
But I think it will be simpler:
- name: Fetch the monit status
  shell: 'monit status | tail -n +2 | grep "^\s*status" '
  register: monit_status_raw

- set_fact:
    vmstatus: "{{ monit_status_raw.stdout.split('status')[1] | replace(' ','') }}"
You will get vmstatus = 'OK' with your sample.
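That final set_fact is simple string surgery; a Python sketch, assuming the grep left a single "  status OK" line in stdout:

```python
# Hypothetical stdout from: monit status | tail -n +2 | grep "^\s*status"
monit_stdout = "  status OK"

# Equivalent of: monit_status_raw.stdout.split('status')[1] | replace(' ','')
# split('status') -> ['  ', ' OK']; stripping spaces leaves the bare value
vmstatus = monit_stdout.split("status")[1].replace(" ", "")
print(vmstatus)  # OK
```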