Note: I'm seeking a solution for Ansible. Here is the issue description:
I have a file filedet.yml as shown below; in reality this YAML may contain many more IPs and file details.
---
10.9.9.111:
  /tmp/test.jar:
    hash: e6df90d38fa86f0e289f73d79cd2cfd2a29954eb
  /tmp/best.jar:
    hash: e6df90d38fa86f0e289f73d79cd2cfd2a29954eb
10.8.8.44:
  /tmp/conf/extra/httpd-ssl.conf:
    hash: 1746f03d57491b27158b0d3a48fca8b5fa85c0c2
  /tmp/conf/httpd.conf:
    hash: 1746f03d57491b27158b0d3a48fca8b5fa85c0c2
I wish to extract a particular IP and its file details so that they can be removed from the YAML using the state: absent attribute. Thus, the desired regex should return the following:
10.9.9.111:
  /tmp/test.jar:
    hash: e6df90d38fa86f0e289f73d79cd2cfd2a29954eb
  /tmp/best.jar:
    hash: e6df90d38fa86f0e289f73d79cd2cfd2a29954eb
I decided to use '10.9.9.111' as the start pattern and search until there are no leading spaces or newlines, i.e. until it reaches the next IP.
I prepared the regex below, and it shows the correct, desired full-text match on http://regex101.com.
Regex query below:
[^#](^10.9.9.111:)(.|\n)*^(?!( |\n))
The same regex works fine with grep -Pzo and returns the desired string. However, the regex fails to work with Ansible's lineinfile module, as it does not yield any results.
I want this regex, or any other solution, to work with Ansible so I can remove the given IP and its file details from the YAML.
Ansible:
- name: "Remove entry from file."
  lineinfile:
    path: "/app/filedet.yaml"
    regexp: "[^#](^10.9.9.111:)(.|\n)*^(?!( |\n))"
    state: absent
Can you please suggest what the issue is here?
Q: "I wish to extract an IP and the file details."
A: Use include_vars. For example
- include_vars:
    file: filedet.yml
    name: my_dict
- debug:
    msg: "{{ my_dict['10.9.9.111'] }}"
gives
"msg": {
"/tmp/best.jar": {
"hash": "e6df90d38fa86f0e289f73d79cd2cfd2a29954eb"
},
"/tmp/test.jar": {
"hash": "e6df90d38fa86f0e289f73d79cd2cfd2a29954eb"
}
}
Q: "Remove an entry from the file."
A: Use template. For example
$ cat filedet.yml.j2
{% for item in my_dict_keys %}
{{ item }}:
  {{ my_dict[item]|to_nice_yaml|indent(2) }}
{% endfor %}
The task below
- set_fact:
    my_dict_keys:
      - "10.8.8.44"
- template:
    src: filedet.yml.j2
    dest: filedet.yml
gives
$ cat filedet.yml
10.8.8.44:
  /tmp/conf/extra/httpd-ssl.conf:
    hash: 1746f03d57491b27158b0d3a48fca8b5fa85c0c2
  /tmp/conf/httpd.conf:
    hash: 1746f03d57491b27158b0d3a48fca8b5fa85c0c2
Notes:
It's a bad idea to use lineinfile for this purpose; see the sketch after these notes for an alternative that rewrites the whole file.
The data in the question is not valid YAML. The key is repeated:
10.9.9.111:
  /tmp/test.jar:
    hash: e6df90d38fa86f0e289f73d79cd2cfd2a29954eb
  /tmp/test.jar:
    hash: e6df90d38fa86f0e289f73d79cd2cfd2a29954eb
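For completeness, a minimal sketch (not part of the original answer) of removing the key without a separate Jinja2 template, by filtering the dictionary and rewriting the whole file. The paths come from the question; it assumes the file is readable by include_vars and that the to_nice_yaml formatting of the result is acceptable:
- include_vars:
    file: filedet.yml
    name: my_dict

# Drop the '10.9.9.111' key and write the remaining entries back as YAML.
- copy:
    content: "{{ my_dict | dict2items | rejectattr('key', 'equalto', '10.9.9.111') | items2dict | to_nice_yaml }}"
    dest: /app/filedet.yaml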
Related
I need to determine whether the gcloud user is logged in as themselves. I know I can use gcloud auth list to see which account is indicated. The problem is getting Ansible to understand which account in the list is selected. If it's the system account, skip the test; if it's the human, run the test.
I'm just not sure how ansible can register the return from:
gcloud auth list | grep '*'
I am able to actually find the user information, but what I can't seem to get to work is the when: condition.
My apologies if I wasn't clear.
What I have now (and it's not working):
- name: Verify user is Human
  ansible.builtin.debug:
    msg: Authorized user is "{{ auth_member }}"
  when: not "{{ auth_member }}" | regex_search('.*s6.*')
keeps returning variations of "The offending line appears to be:
    msg: Authorized user is "{{ auth_member }}"
    when: not "{{ auth_member }}" | regex_search('.*s6.*')
          ^ here
We could be wrong, but this one looks like it might be an issue with
missing quotes. Always quote template expression brackets when they
start a value."
I have tried various ways to do some sort of "not in", but I keep getting the quotes error.
(Super newbie at Ansible.)
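Below is a minimal sketch of one way this could be wired up (not a confirmed answer from the thread): auth_member and the 's6' pattern are taken from the question, while the shell pipeline, the failed_when handling, and the search test are assumptions. The key point is that when: already evaluates a Jinja2 expression, so it must not be wrapped in {{ }} braces.
# Capture the active account line; grep exits 1 when nothing matches.
- ansible.builtin.shell: "gcloud auth list | grep '*'"
  register: auth_list
  changed_when: false
  failed_when: auth_list.rc > 1

- ansible.builtin.set_fact:
    auth_member: "{{ auth_list.stdout | trim }}"

- name: Verify user is Human
  ansible.builtin.debug:
    msg: Authorized user is "{{ auth_member }}"
  when: auth_member is not search('s6')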
I'm using an Ansible template to include variables in a SQL import file. Everything was working fine until recently, when Ansible started parsing the quotes around variables in the template and eliminating some of them, which it never used to do.
For example, the following...
INSERT INTO `my_table` VALUES (1, '{{ var.one }}', NULL, '{{ var.two }}');
...is now resulting in...
INSERT INTO `my_table` VALUES (1, 'value-one, NULL, value-two');
Notice that the two single quotes in the middle have been removed. This breaks the SQL. As per the MySQL/ANSI standards, I'm supposed to be using single quotes for string literals, and backticks only for identifiers. So what's the solution here? And does anyone know when this behaviour changed?
I'm using Ansible 2.9.27
As requested, here is the simple template task call...
- name: Copy SQL Dynamic Data Dump File for import
  template: src=data.sql.j2 dest=/tmp/data.sql
This issue has already been solved. The example below works for me both in 2.10 and 2.12
ansible [core 2.12.1], python 3.8.5, jinja 3.0.1
ansible 2.10.11, python 3.6.9, python3-jinja2 2.10-1ubuntu0.18.04.1
shell> cat import.sql.j2
INSERT INTO my_table VALUES (1, '{{ var.one }}', NULL, '{{ var.two }}');
and the task
- template:
    src: import.sql.j2
    dest: import.sql
  vars:
    var:
      one: value-one
      two: value-two
gives valid SQL
shell> cat import.sql
INSERT INTO my_table VALUES (1, 'value-one', NULL, 'value-two');
As a workaround, put the single quotes into the expressions
shell> cat import.sql.j2
INSERT INTO my_table VALUES (1, {{ "'" }}{{ var.one }}{{ "'" }}, NULL, {{ "'" }}{{ var.two }}{{ "'" }});
Use sed to make these changes globally, e.g.
shell> cat templates/import.sql.j2
INSERT INTO my_table VALUES (1, '{{ var.one }}', NULL, '{{ var.two }}');
shell> for i in templates/*; do sed -i "s/'/{{ \"'\" }}/g" $i; done
shell> cat templates/import.sql.j2
INSERT INTO my_table VALUES (1, {{ "'" }}{{ var.one }}{{ "'" }}, NULL, {{ "'" }}{{ var.two }}{{ "'" }});
Optionally, choose a linter and validate the SQL. The validate option of the template module doesn't work for sqlfluff, hence a separate command task is used below. Testing of the line length is excluded (--exclude-rules L016).
- template:
    src: import.sql.j2
    dest: import.sql
  register: result
  vars:
    var:
      one: value-one
      two: value-two
- command: "sqlfluff lint {{ result.dest }} --exclude-rules L016"
  changed_when: false
So apparently this is related to a known bug in Jinja2 that was fixed in 2.11. I'm using 3.0.3, though, so it's odd that it's still a problem, so I filed a bug report with Ansible and they've confirmed the behaviour. The way Jinja2 Native is used in templating changed in Ansible 2.11, which should alleviate the problem. However, until that is available in the release channel for our distro, we've been able to fix the problem by disabling the "native" support in Jinja2 that we were using. This may cause other problems, but we're stuck since Ansible 2.9 is only receiving security patches now.
My Bug report is here: https://github.com/ansible/ansible/issues/76761
The Jinja2 Bug report is here: https://github.com/pallets/jinja/issues/1020
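For reference, a minimal sketch of how native support can be turned off on the controller, assuming it was enabled via ansible.cfg in the first place:
shell> cat ansible.cfg
[defaults]
jinja2_native = False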
How can I replace only the last occurrence of .jar? Below are sample values for LIST_jar[0].nameOfJar and the expected output:
my-firsrt.jar-1.jar >> my-firsrt.jar-1
my-firsrt-2.0.jar-1.jar >> my-firsrt-2.0.jar-1
my-firsrt-1.0-jar-1.jar >> my-firsrt-1.0-jar-1
my-firsrt-jar-1.0-jar-1.jar >> my-firsrt-jar-1.0-jar-1
my-firsrt-jar-1.0.jar.jar >> my-firsrt-jar-1.0.jar
my-firsrt-jar-1.0-jar.jar >> my-firsrt-jar-1.0.jar
This is my sample code, but it is not working as expected: it replaces every match of .jar.
- name: Replace string
  copy:
    content: "{ name: jack }"
    dest: "{{ directory }}/JAR_LIST/{{ LIST_jar[0].nameOfJar | regex_replace('.jar') }}.log"
There are several options. The simplest is splitext. split is more versatile for manipulating multiple extensions. The filter regex_replace is more complex but universal.
splitext
Use splitext. For example
- debug:
    msg: "dest: {{ (item|splitext).0 }}.log"
  loop: "{{ list_jar }}"
gives (given the list of filenames is in the variable list_jar)
msg: 'dest: my-firsrt.jar-1.log'
msg: 'dest: my-firsrt-2.0.jar-1.log'
msg: 'dest: my-firsrt-1.0-jar-1.log'
msg: 'dest: my-firsrt-jar-1.0-jar-1.log'
msg: 'dest: my-firsrt-jar-1.0.jar.log'
msg: 'dest: my-firsrt-jar-1.0-jar.log'
split
The next option is split. The task below gives the same results
- debug:
    msg: "dest: {{ item.split('.')[:-1]|join('.') }}.log"
  loop: "{{ list_jar }}"
regex_replace
If you want to use regex_replace the tasks below give the same results. Either remove the extension and concatenate .log
- debug:
    msg: "dest: {{ item|regex_replace('^(.*)\\.jar$', '\\1') }}.log"
  loop: "{{ list_jar }}"
, or replace the extension in the filter
- debug:
    msg: "dest: {{ item|regex_replace('^(.*)\\.jar$', '\\1.log') }}"
  loop: "{{ list_jar }}"
To make the code more readable, it's a good idea to put the regular expressions into variables and use the single-quoted style. For example
- debug:
    msg: "dest: {{ item|regex_replace(my_regex, my_replace) }}"
  loop: "{{ list_jar }}"
  vars:
    my_regex: '^(.*)\.jar$'
    my_replace: '\1.log'
"Below is my sample code but it is not working accordingly as it is replacing all the jar value."
Correct, because the . in regex means "any character." If you just want to replace the literal string .jar, then don't use regex_replace; just use replace(), which uses the literal characters you provide to it.
Otherwise, if you wish to continue to use regex_replace, then escape that character (here, by putting it in a character class) and also anchor the pattern so it only matches at the end of the string:
- debug:
    msg: "{{ nameOfJar | regex_replace('[.]jar$', '') }}.log"
  vars:
    nameOfJar: my-firsrt-jar-1.0-jar-1.jar
We're going to assume the last two of your samples were just copy-paste errors in this question, and that you prefer the .log outside the mustaches instead of just including '.log' in the replacement position of regex_replace.
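For instance, a small illustrative variant (using one of the sample filenames from the question) that puts .log into the replacement position instead:
- debug:
    msg: "{{ nameOfJar | regex_replace('[.]jar$', '.log') }}"
  vars:
    nameOfJar: my-firsrt.jar-1.jar
This prints my-firsrt.jar-1.log.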
I'm currently writing a small Ansible playbook whose job is to put in an additional domain in the search list in /etc/resolv.conf.
The second domain to add to the search list must contain part of the hostname of the target hosts. I'm getting the hostname of each of the target hosts during playbook execution using the magic variable {{ inventory_hostname }}.
I then need to extract characters 4 - 6 from the {{ inventory_hostname }} (say 'xyz') such that the second domain to add to the search list is xyz.foo.bar. In bash, this would be obtained with something like:
SERVER=$(hostname)
env=${SERVER:3:3}
... and the variable 'env' would be equal to 'xyz'.
The playbook works as long as 'xyz' is manually defined.
I am aware that Ansible has regular expression filters which can help with something like this, however I could not figure out a regular expression which does what I need.
For completeness sake, I have tried something like this in ansible:
{{ inventory_hostname|3:3 }}
Any help would be greatly appreciated.
It's almost the same: you can use "{{ inventory_hostname[3:6] }}" to select the 4th through 6th characters (zero-based indices 3 to 5).
For example, this task
- debug:
    msg: "{{ inventory_hostname[3:6] }}"
will output (when run against localhost)
ok: [localhost] => {
    "msg": "alh"
}
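Building on that, a minimal sketch of composing the second search domain; search_domain is a hypothetical variable name and foo.bar is the suffix from the question:
- set_fact:
    search_domain: "{{ inventory_hostname[3:6] }}.foo.bar"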
I have a string with IP addr: 192.168.10.2
I want to extract first three octets of the IP in Ansible and I tried to use this regex.
{{comp_ip | regex_replace("^[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}"), "//1"}}
This does not yield any result. Can someone correct me where I went wrong?
If you already have a dot-separated IP address, there is a simple way:
{{ comp_ip.split('.')[0:3] | join('.') }}
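For example, a small sketch wrapping this expression in a debug task, using the IP from the question:
- debug:
    msg: "{{ comp_ip.split('.')[0:3] | join('.') }}"
  vars:
    comp_ip: 192.168.10.2
This prints 192.168.10.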
You are doing it right; you just have to use parentheses in the regex to make a group. It is better to match the whole IP and end your regex with $, and also to change //1 to \\1 in your code.
Change regex from:
^[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}
To this regex:
^([0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3})\.[0-9]{1,3}$
This is the full code:
{{comp_ip | regex_replace('^([0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3})\.[0-9]{1,3}$', '\\1')}}
In case you want to calculate your network address, you can use the Ansible ipaddr filter, which provides exactly this functionality:
http://docs.ansible.com/ansible/latest/playbooks_filters_ipaddr.html
---
- hosts: localhost
  vars:
    my_ip: "{{ ansible_default_ipv4.network }}/{{ ansible_default_ipv4.netmask }}"
  tasks:
    - debug: msg="network {{ my_ip | ipaddr('network') }}"
    - debug: msg="netmask {{ my_ip | ipaddr('netmask') }}"