I'm using the replace module of Ansible (http://docs.ansible.com/ansible/replace_module.html).
My file is:
...
net route-domain /Common/0 {
    id 0
    vlans {
        /thisrow/AAAA_yyyyy
        /Common/http-tunnel
        /Common/socks-tunnel
        /Common/BIGIP-HA
        /thisrow/AAAA_xxxxx
    }
}
...
I need to remove all rows containing /thisrow/ inside vlans.
I'm using this regex: (^ vlans )(?P<vlanrow>){([^}]*)}{0}.*vasgk.*\n, but I don't know how to remove ALL the thisrow entries from the vlanrow group, since it only matches one at a time.
Thanks,
Riccardo
This is not a duplicate. Ansible is not the problem; the problem is that the regular expression matches thisrow just once. Try it on https://regex101.com/r/n3rRsl/1
I've come up with the following playbook, using a slightly modified version of your regexp and the sample data from the regex101 link you provided.
playbook.yml
- hosts: localhost
  tasks:
    - replace:
        dest: /home/user/config.conf
        regexp: '(^ vlans )(?P<vlanrow>){([^}]*)}{0}(\s{8}/vasgk.*)\n'
        replace: '\1\2{\3'
      register: result
      until: result.changed == False
      retries: 4094 # you can't have more vlans!
This is the result:
net route-domain /Common/0 {
    id 0
    vlans {
        /Common/http-tunnel
        /Common/socks-tunnel
        /Common/BIGIP-HA
    }
}
It seems to be quite slow, but it should give you an idea. Hope that helps!
Edit: changed
(^ vlans )(?P<vlanrow>){([^}]*)}{0}(.*/vasgk.*)\n
to
(^ vlans )(?P<vlanrow>){([^}]*)}{0}(\s{8}/vasgk.*)\n
This fixed problems with spacing.
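For reference, the whole removal can also be done in a single pass outside Ansible. Here is a minimal Python sketch of the same idea (using /thisrow/ from the question's sample data; the /vasgk marker from the regex101 sample works the same way):

```python
import re

config = """net route-domain /Common/0 {
    id 0
    vlans {
        /thisrow/AAAA_yyyyy
        /Common/http-tunnel
        /Common/socks-tunnel
        /Common/BIGIP-HA
        /thisrow/AAAA_xxxxx
    }
}"""

# Match the whole "vlans { ... }" block once, then strip every row
# containing /thisrow/ inside it with a second, inner substitution.
def strip_rows(match):
    return re.sub(r'^\s*/thisrow/.*\n', '', match.group(0), flags=re.M)

cleaned = re.sub(r'vlans \{[^}]*\}', strip_rows, config)
print(cleaned)
```

The outer pattern finds the block, the callback removes all matching rows at once, so no retry loop is needed.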
Here is the output that I'm trying to parse:
hostname#show bgp vrf vrfname summary | i 1.1
BGP Route Distinguisher: 1.1.1.1:0
BGP router identifier 1.1.1.1, local AS number 2222
1.1.1.3           0  64512  349608  316062   896772  0    0     2w4d          1
I have the following regex that successfully matches just the last line. Now I need to split that line and view the last index. In this case it is "1", but I will want to fail if that value is "0".
- name: debug test
debug:
msg: "{{show_bgp_sessions.data | regex_findall('\\d+\\.\\d+\\.\\d+\\.\\d+\\s\\s.*')}}"
I tried adding a split in a couple of different formats at the end of the "msg" line so that I can grab the last index and compare it in the failed_when statement:
msg: "{{show_bgp_sessions.data | regex_findall('\\d+\\.\\d+\\.\\d+\\.\\d+\\s\\s.*') | split(' ')}}"
But I'm getting the following error msg:
"template error while templating string: no filter named 'split'. String:
I've also tried a few different forms of "endswith" to verify the last index of the string, as I've used that a lot in my Python experience, but I can't get it to work in Ansible.
I can't create a new task to parse the data and perform the split separately, because I need to run this verification inside a loop.
Once you have selected the line, reverse the string, split it, and take the first item. For example:
msg: "{{ (my_line|reverse).split()|first }}"
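In plain Python the same trick looks like this (note that it yields the last field reversed, which happens to be harmless here because that field is a single character):

```python
line = "1.1.1.3  0  64512  349608  316062  896772  0  0  2w4d  1"

# reverse the string, split on whitespace, take the first token;
# this is the reversed last field of the original line
last = line[::-1].split()[0]
print(last)
```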
Possibly the regex provided by @Thefourthbird is a better solution.
But your issue at hand is caused by the fact that there is indeed no filter named split in Jinja; see the list of built-in filters: https://jinja.palletsprojects.com/en/2.11.x/templates/#list-of-builtin-filters.
The reason there is no such filter is simple: split() is a method of Python strings, and since Jinja is implemented in Python, you can use it as is.
Also mind that, since regex_findall is meant for multiple matches, it returns a list, so you'll have to select its first element, for example with the filter first.
So your message ends up being:
msg: >-
  {{
    (
      show_bgp_sessions.data
      | regex_findall('\\d+\\.\\d+\\.\\d+\\.\\d+\\s\\s.*')
      | first
    ).split()
  }}
Given the playbook:
- hosts: all
  gather_facts: no
  vars:
    show_bgp_sessions:
      data: |
        hostname#show bgp vrf vrfname summary | i 1.1
        BGP Route Distinguisher: 1.1.1.1:0
        BGP router identifier 1.1.1.1, local AS number 2222
        1.1.1.3           0  64512  349608  316062   896772  0    0     2w4d          1
  tasks:
    - debug:
        msg: >-
          {{
            (
              show_bgp_sessions.data
              | regex_findall('\\d+\\.\\d+\\.\\d+\\.\\d+\\s\\s.*')
              | first
            ).split()
          }}
Gives the recap:
TASK [debug] ***************************************************************
ok: [localhost] => {
"msg": [
"1.1.1.3",
"0",
"64512",
"349608",
"316062",
"896772",
"0",
"0",
"2w4d",
"1"
]
}
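Outside of Ansible, what the Jinja expression does can be sketched in a few lines of Python, including the "fail when the last field is 0" check you would put in failed_when:

```python
import re

data = (
    "hostname#show bgp vrf vrfname summary | i 1.1\n"
    "BGP Route Distinguisher: 1.1.1.1:0\n"
    "BGP router identifier 1.1.1.1, local AS number 2222\n"
    "1.1.1.3           0  64512  349608  316062  896772  0  0  2w4d  1"
)

# regex_findall equivalent: only the neighbor row has an IP followed
# by at least two whitespace characters
rows = re.findall(r'\d+\.\d+\.\d+\.\d+\s\s.*', data)

fields = rows[0].split()   # same as (... | first).split()
last = fields[-1]          # the column you want to verify

# equivalent of: failed_when: last == "0"
assert last != "0"
print(fields)
```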
I created this regex to parse a mongodb URL, as follows:
/mongodb://((?'username'\w+):(?'password'\w+)@)?(?'hosts'\w[,\w]*)(/(?'defaultdb'[\w.]+))?(\?(?'options'.*$))?$/m
I did some tests with it on regex101, and I want to know if it is possible to parse the ',' (commas) in the hosts group so that it results in an array, and similarly for the options group with the '&' separator.
My intention is to iterate over the regex result and use the matched groups directly, without needing to split by separator.
Expected example:
mongodb://user:password@host,host2,host3,host4/databasename?options=1&options=2
group user: user
group password: password
group hosts: host
group hosts: host2
group hosts: host3
group hosts: host4
group defaultdb: databasename
group options: options=1
group options: options=2
A possible workaround to have all your data in the right order:
let str = 'mongodb://user:password@host,host2,host3,host4/databasename?options=1&options=2'
// substring(10) to avoid 'mongodb://'
console.log(str.substring(10).split(/[:@,/&?]/))
Edit: I see from your edit that you are on Node, so another solution is:
let str = 'mongodb://user:password@host,host2,host3,host4/databasename?options=1&options=2'
let regex = /mongodb:\/\/(?<username>\w+):(?<password>\w+)@(?<hosts>[,\w]*)\/(?<defaultdb>[\w\.]+)?\?(?<options>.*$)?$/

// copy each item of an array into numbered pseudo-groups
function splitGroup(group, items) {
  items.forEach(function (item, index) {
    res.groups[group + '_' + index] = item
  })
}

let res = regex.exec(str)
res.groups.hosts = res.groups.hosts.split(',')
res.groups.options = res.groups.options.split('&')
splitGroup('host', res.groups.hosts)
splitGroup('option', res.groups.options)
delete res.groups.hosts
delete res.groups.options
console.log(Object.keys(res.groups).filter(v => v.startsWith('host')))
// [ 'host_0', 'host_1', 'host_2', 'host_3' ]
console.log(Object.keys(res.groups).filter(v => v.startsWith('option')))
// [ 'option_0', 'option_1' ]
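For comparison, the same parse-then-split idea in Python, using re named groups that mirror the question's regex (a sketch, not a full mongodb URI parser):

```python
import re

url = 'mongodb://user:password@host,host2,host3,host4/databasename?options=1&options=2'

pattern = re.compile(
    r'mongodb://'
    r'(?:(?P<username>\w+):(?P<password>\w+)@)?'   # optional credentials
    r'(?P<hosts>\w[,\w]*)'                         # comma-separated hosts
    r'(?:/(?P<defaultdb>[\w.]+))?'                 # optional default db
    r'(?:\?(?P<options>.*))?$'                     # optional options
)

m = pattern.match(url)
parts = m.groupdict()
# A single regex group cannot capture a repeated list,
# so the hosts and options groups are split afterwards.
parts['hosts'] = parts['hosts'].split(',')
parts['options'] = parts['options'].split('&')
print(parts)
```

The key point is the same as in the Node answer: a capture group only keeps its last match, so the list-valued groups have to be split after the match.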
I am using Grafana Dashboard. I have the following servers:
ip-10-2-32-214.ec2.internal
ip-10-2-33-184.ec2.internal
ip-10-2-34-13.ec2.internal
ip-10-2-34-213.ec2.internal
ip-10-2-36-165.ec2.internal
ip-10-2-36-219.ec2.internal
ip-10-2-36-77.ec2.internal
ip-10-2-37-79.ec2.internal
ip-10-2-38-252.ec2.internal
ip-10-2-39-216.ec2.internal
ip-10-2-40-242.ec2.internal
ip-10-2-40-52.ec2.internal
ip-10-2-43-220.ec2.internal
ip-10-2-44-192.ec2.internal
ip-10-2-45-148.ec2.internal
ip-10-2-46-215.ec2.internal
ip-10-2-47-152.ec2.internal
ip-10-2-48-91.ec2.internal
ip-10-2-49-237.ec2.internal
ip-10-2-50-200.ec2.internal
ip-10-2-52-49.ec2.internal
ip-10-2-53-14.ec2.internal
ip-10-2-56-137.ec2.internal
ip-10-2-57-108.ec2.internal
ip-10-2-60-105.ec2.internal
ip-10-2-61-250.ec2.internal
ip-10-2-63-177.ec2.internal
But I want to match only the servers whose names end with certain numbers. I tried this regex, but it is not working:
184|200|165|220|237|137|242|(.ec2.internal)
This worked for me: 184|200|165|220|237|137|242
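A quick way to sanity-check the pattern is to run it against the host names in an ordinary regex engine. The sketch below (Python, with a suffix-anchored variant of the working alternation) keeps only the intended servers:

```python
import re

servers = [
    "ip-10-2-33-184.ec2.internal",
    "ip-10-2-50-200.ec2.internal",
    "ip-10-2-34-13.ec2.internal",
    "ip-10-2-34-213.ec2.internal",
]

# The bare alternation 184|200|165|220|237|137|242 matches those digits
# anywhere in the name; anchoring them to the .ec2.internal suffix makes
# sure only the final octet is compared.
pattern = re.compile(r'-(184|200|165|220|237|137|242)\.ec2\.internal$')

matched = [s for s in servers if pattern.search(s)]
print(matched)
```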
I'm trying to use Ansible to update the [[inputs.ping]] sections of telegraf.conf.
telegraf.conf looks like the following:
[[inputs.ping]]
  urls = ["tac-temp1","tac-temp2", "tac-temp3","tac-temp4"] #tac
  count = 30
  timeout = 15.0
  [inputs.ping.tags]
    name = "tac"
[[inputs.ping]]
  urls = ["prod-temp1","prod-temp2", "prod-temp3","prod-temp4"] #prod
  count = 30
  timeout = 15.0
  [inputs.ping.tags]
    name = "prod"
[[inputs.ping]]
  urls = ["test-temp1","test-temp2", "test-temp3","test-temp4"] #test
  count = 30
  timeout = 15.0
  [inputs.ping.tags]
    name = "test"
I'm trying to add ,"tac-temp10" after ,"tac-temp4" in line 2 shown above.
- hosts: Servers
  become: yes
  become_method: sudo
  tasks:
    - name: Loading telegraf.conf content for search
      shell: cat /tmp/telegraf.conf
      register: tele_lookup

    - name: Adding server to /tmp/telegraf.conf if it does not exist
      lineinfile:
        path: /tmp/telegraf.conf
        state: present
        regexp: '^((.*)"] #tac$)'
        line: ',"tac-temp10"'
        backup: yes
      when: tele_lookup.stdout.find('tac-temp10') != '0'
The regexp '^((.*)"] #tac$)' is replacing the whole line with ,"tac-temp10". The expected output is:
[[inputs.ping]]
  urls = ["tac-temp1","tac-temp2", "tac-temp3","tac-temp4","tac-temp10"] #tac
  count = 30
  timeout = 15.0
  [inputs.ping.tags]
    name = "tac"
Warning: ugly regexp ahead. Beware: whoever maintains this next (including you, after some time has passed...) may find it hard to understand.
The following will add your server at the end of the list if it is not already present (anywhere in the list) with a single idempotent task.
- name: add our server if needed
  lineinfile:
    path: /tmp/test.conf
    backup: yes
    state: present
    regexp: '^( *urls *= *\[)(("(?!tac-temp10)([a-zA-Z0-9_-]*)",? *)*)(\] #tac)$'
    backrefs: yes
    line: '\1\2, "tac-temp10"\5'
You need to use backreferences to put the already matched parts of the line back. I used backup: yes so that I could easily come back to the original during my tests; feel free to drop it.
As you can see (and as advised in my warning), this is pretty much impossible to understand for anyone having to read the code quickly. If you need anything more fancy or complicated, consider using a template and storing your server list in a variable somewhere.
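To see the backreference mechanics outside of Ansible, here is the same substitution sketched in Python, whose re module uses the same group numbering (the sample line and marker are taken from the config above):

```python
import re

line = 'urls = ["tac-temp1","tac-temp2", "tac-temp3","tac-temp4"] #tac'

pattern = re.compile(
    r'^( *urls *= *\[)'                              # \1: opening of the list
    r'(("(?!tac-temp10)([a-zA-Z0-9_-]*)",? *)*)'     # \2: existing entries
    r'(\] #tac)$'                                    # \5: closing bracket + tag
)

new_line = pattern.sub(r'\1\2, "tac-temp10"\5', line)
print(new_line)

# Running it again changes nothing: once "tac-temp10" is present, the
# (?!tac-temp10) lookahead stops the entries group from spanning the
# whole list, so the closing \] #tac can no longer match.
assert pattern.sub(r'\1\2, "tac-temp10"\5', new_line) == new_line
```

This is exactly what makes the lineinfile task idempotent.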
I have set up Curator to delete old Elasticsearch indexes via this filter:
(...)
filters:
  - filtertype: pattern
    kind: regex
    value: '^xyz-us-(prod|preprod)-(.*)-'
    exclude:
  - filtertype: age
    source: name
    direction: older
    timestring: '%Y.%m.%d'
    unit: days
    unit_count: 7
    exclude:
(...)
However, I realized that Curator uses non-greedy regexes, because this filter catches the index xyz-us-prod-foo-2018.10.11 but not xyz-us-prod-foo-bar-2018.10.11.
How can I modify the filter to catch both indexes?
The answer I gave at https://discuss.elastic.co/t/use-greedy-regexes-in-curator-filter/154200 is still good, though you somehow weren't able to get the results I posted there. Anchoring the end and specifying the date regex worked for me: '^xyz-us-(prod|preprod)-.*-\d{4}\.\d{2}\.\d{2}$'
I created these indices:
PUT xyz-us-prod-foo-2018.10.11
PUT xyz-us-prod-foo-bar-2018.10.11
PUT xyz-us-preprod-foo-2018.10.12
PUT xyz-us-preprod-foo-bar-2018.10.12
And ran with this config:
---
actions:
  1:
    action: delete_indices
    filters:
      - filtertype: pattern
        kind: regex
        value: '^xyz-us-(prod|preprod)-.*-\d{4}\.\d{2}\.\d{2}$'
        exclude:
      - filtertype: age
        source: name
        direction: older
        timestring: '%Y.%m.%d'
        unit: days
        unit_count: 7
The dry-run results show that all of the indices are matched:
2018-10-29 20:08:28,120 INFO curator.utils show_dry_run:928 DRY-RUN: delete_indices: xyz-us-preprod-foo-2018.10.12 with arguments: {}
2018-10-29 20:08:28,120 INFO curator.utils show_dry_run:928 DRY-RUN: delete_indices: xyz-us-preprod-foo-bar-2018.10.12 with arguments: {}
2018-10-29 20:08:28,120 INFO curator.utils show_dry_run:928 DRY-RUN: delete_indices: xyz-us-prod-foo-2018.10.11 with arguments: {}
2018-10-29 20:08:28,120 INFO curator.utils show_dry_run:928 DRY-RUN: delete_indices: xyz-us-prod-foo-bar-2018.10.11 with arguments: {}
Curator's regex engine uses the U (Ungreedy) flag.
Ungreedy mode makes star quantifiers lazy by default, and adding a '?' modifier under the Ungreedy option turns them back to greedy.
Try adding a '?' after the '.*' in your regex:
'^xyz-us-(prod|preprod)-(.*?)-'
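Whatever the engine's default greediness, the anchored pattern from the other answer can be sanity-checked in any ordinary regex engine. A quick Python sketch (where .* is greedy by default, unlike Curator's Ungreedy mode):

```python
import re

# Anchored variant with an explicit date suffix: the greedy .* is forced
# to extend over any number of '-' separated parts before the date.
pattern = re.compile(r'^xyz-us-(prod|preprod)-.*-\d{4}\.\d{2}\.\d{2}$')

indices = [
    'xyz-us-prod-foo-2018.10.11',
    'xyz-us-prod-foo-bar-2018.10.11',
    'xyz-us-preprod-foo-2018.10.12',
    'xyz-us-preprod-foo-bar-2018.10.12',
]

matched = [i for i in indices if pattern.match(i)]
print(matched)
```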