Ansible: distribute a list across another list

I'm trying to do this with Ansible:
I have multiple "fruits" and want to distribute them among multiple kids:
vars:
  kids:
    - John
    - Shara
    - David
  fruits:
    - Banana
    - Mango
    - Orange
    - Peach
    - Pineapple
    - Watermelon
    - Avocado
    - Cherries
Desired result, something like this:
John:
  - Banana
  - Peach
  - Avocado
Shara:
  - Mango
  - Pineapple
  - Cherries
David:
  - Orange
  - Watermelon
I tried zip, zip_longest, and list, but with no luck:
ansible.builtin.debug:
  msg: "{{ item | zip(['a','b','c','d','e','f']) | list }}"
loop:
  - John
  - Shara
  - David

For example
- set_fact:
    _dict: "{{ dict(kids|zip(_values)) }}"
  vars:
    _batch: "{{ fruits|length|int / kids|length|int }}"
    _values: "{{ fruits|batch(_batch|float|round)|list }}"
gives
_dict:
  David:
    - Avocado
    - Cherries
  John:
    - Banana
    - Mango
    - Orange
  Shara:
    - Peach
    - Pineapple
    - Watermelon
Q: "Have more than 4 kids"
A: For example
- hosts: localhost
  gather_facts: false
  vars:
    kids:
      - John
      - Shara
      - David
      - Alice
      - Bob
    fruits:
      - Banana
      - Mango
      - Orange
      - Peach
      - Pineapple
      - Watermelon
      - Avocado
      - Cherries
      - Apple
  tasks:
    - set_fact:
        _dict: "{{ dict(kids|zip(_values)) }}"
      vars:
        _batch: "{{ fruits|length|int / kids|length|int }}"
        _values: "{{ fruits|batch(_batch|float|round)|list }}"
    - debug:
        var: _dict
gives
_dict:
  Alice:
    - Avocado
    - Cherries
  Bob:
    - Apple
  David:
    - Pineapple
    - Watermelon
  John:
    - Banana
    - Mango
  Shara:
    - Orange
    - Peach


Alternative to nesting regex replace in PostgreSQL functions?

Right now, I have a view with a mess of common, conditional string replacements and substitutions for an open text field - in this example, regional classification.
(Please ignore the accuracy of geography, I'm just working with historical standard assignments. Also, I know I could speed things up with REPLACE or even just cleaning the RegEx statements for lookback - I'm just asking about the variable/nesting here.)
CREATE OR REPLACE FUNCTION public.region_cleanup(record_region text)
RETURNS text
LANGUAGE sql
STRICT
AS $function$
SELECT REGEXP_REPLACE(
         REGEXP_REPLACE(
           REGEXP_REPLACE(
             REGEXP_REPLACE(
               REGEXP_REPLACE(
                 REGEXP_REPLACE(record_region,'(NORTH AMERICA\s\-\sUSA\s\-\sUSA)','USA')
               ,'Rest\sof\sthe\sWorld\s\-\s','')
             ,'NORTH\sAMERICA\s\-\sCANADA','NORTH AMERICA - Canada')
           ,'\&amp\;','&')
         ,'Georgia\s\-\sGeorgia','MIDDLE EAST - Georgia')
       ,'EUROPE - Turkey','MIDDLE EAST - Turkey')
$function$;
A sample output using this function would look like this in my dataset, pulling out records impacted (some are already in the correct format):
record_region_input                                                   | record_region_output
NORTH AMERICA - USA - USA - NORTHEAST - Massachusetts - Boston Metro | USA - NORTHEAST - Massachusetts - Boston Metro
NORTH AMERICA - USA - USA - MIDATLANTIC - Virginia                    | USA - MIDATLANTIC - Virginia
Rest of the World - ASIA - Thailand                                   | ASIA - Thailand
Rest of the World - EUROPE - Portugal                                 | EUROPE - Portugal
Rest of the World - ASIA - China - Shanghai Metro                     | ASIA - China - Shanghai Metro
Georgia - Georgia                                                     | MIDDLE EAST - Georgia
This is... fine. Regex is needed since there's tons of variability on what may come before or after these strings, and I have a proper validation list elsewhere. This is just a bulk scrub of common historical naming issues.
The problem is when I get into the hundreds of these kinds of "known substitutions" (100+) for things like company naming or cross-department standards. Having dozens and dozens of nested REGEXP_REPLACE( statements makes editing/adding/dropping anything a maddening game of counting.
I'm trying to clean data within Postgres exclusively, since my current pipeline doesn't always allow for standardization prior to upload. I know how I'd tackle this cleanly outside of pure SQL, but in a 'vanilla' PostgreSQL instance (v12+) is there a better method for transforming strings for a view?
Updated with a sample input/output table using the example function.
If you first split the string into its individual regions, replacing regions may become much easier for you. For example:
with tb as (
  select 1 as id, 'NORTH AMERICA - USA - USA - NORTHEAST - Massachusetts - Boston Metro' as record_region_input
  union all
  select 2 as id, 'NORTH AMERICA - USA - USA - MIDATLANTIC - Virginia'
  union all
  select 3 as id, 'Rest of the World - ASIA - China - Shanghai Metro'
)
select * from (
  select distinct tb.id, unnest(string_to_array(record_region_input, ' - ')) as region from tb
  order by tb.id
) a1 where a1.region not in ('NORTH AMERICA', 'Rest of the World');
-- Result:
1  Boston Metro
1  Massachusetts
1  NORTHEAST
1  USA
2  MIDATLANTIC
2  USA
2  Virginia
3  ASIA
3  China
3  Shanghai Metro
After that, you can use DISTINCT to drop duplicated regions, NOT IN to drop unnecessary regions, LIKE '%ASIA%' to get all regions containing ASIA, and so on. After all that processing, you can merge the corrected string back together. Example:
with tb as (
  select 1 as id, 'NORTH AMERICA - USA - USA - NORTHEAST - Massachusetts - Boston Metro' as record_region_input
  union all
  select 2 as id, 'NORTH AMERICA - USA - USA - MIDATLANTIC - Virginia'
  union all
  select 3 as id, 'Rest of the World - ASIA - China - Shanghai Metro'
)
select a1.id, string_agg(a1.region, ' - ') from (
  select distinct tb.id, unnest(string_to_array(record_region_input, ' - ')) as region from tb
  order by tb.id
) a1 where a1.region not in ('NORTH AMERICA', 'Rest of the World')
group by a1.id;
-- Return:
1  Boston Metro - Massachusetts - NORTHEAST - USA
2  MIDATLANTIC - USA - Virginia
3  ASIA - China - Shanghai Metro
This is a simple idea, but maybe it helps you with replacing regions.
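Not from the answer above, but if the rule list keeps growing, a commonly used alternative is worth sketching (table and function names here are hypothetical, not from the original post): keep the substitutions in an ordered table and fold them over the input with a PL/pgSQL loop, so adding or dropping a rule is a row edit rather than a parenthesis-counting exercise.

-- Hypothetical substitution table: one row per rule, applied in order.
CREATE TABLE region_substitutions (
    ord         int PRIMARY KEY,   -- application order matters when one
    pattern     text NOT NULL,     -- rule's output feeds the next
    replacement text NOT NULL
);

CREATE OR REPLACE FUNCTION public.region_cleanup_v2(record_region text)
RETURNS text
LANGUAGE plpgsql
STABLE STRICT
AS $function$
DECLARE
    s   text := record_region;
    sub record;
BEGIN
    -- Apply every rule in sequence, exactly like the nested calls did.
    FOR sub IN SELECT pattern, replacement
               FROM region_substitutions
               ORDER BY ord
    LOOP
        s := regexp_replace(s, sub.pattern, sub.replacement);
    END LOOP;
    RETURN s;
END;
$function$;

The ord column pins down application order, which the nesting previously encoded implicitly.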

Parse JSON output to get the corresponding IP address for a MAC address

I'm trying to get the IP address that corresponds to the MAC address 00 0C 29 DC 5B C2 from the variable in which this value is written:
"arp.stdout_lines": [
"iso.3.6.1.2.1.4.22.1.2.1.192.168.0.2 \"00 50 56 EC 7B 82 \"",
"iso.3.6.1.2.1.4.22.1.2.1.192.168.0.128 \"00 0C 29 DC 5B C2 \"",
"iso.3.6.1.2.1.4.22.1.2.1.192.168.0.254 \"00 50 56 EA F9 67 \""
]
I tried to do it the following way:
tasks:
  - set_fact:
      matched: "{{ arp | regex_search( 'hi', '\\1' ) }}"
    vars:
      hi: "{{ iso.3.6.1.2.1.4.22.1.2.1.(.*)\\"00 50 56 EC 7B 81 \\" }}"
    register: matched
But nothing works.
Select the lines, split the first item on dots, and join the last four elements, e.g.
- set_fact:
    matched: "{{ matched|d([]) + [item[-4:]|join('.')] }}"
  loop: "{{ arp.stdout_lines|select('search', _mac)|
            map('split')|map('first')|
            map('split', '.')|list }}"
  vars:
    _mac: 00 0C 29 DC 5B C2
gives
matched:
  - 192.168.0.128
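For reference, here is the same selection in plain Python (only an illustration of what the filter chain does, not something you need in the playbook):

arp_stdout_lines = [
    'iso.3.6.1.2.1.4.22.1.2.1.192.168.0.2 "00 50 56 EC 7B 82 "',
    'iso.3.6.1.2.1.4.22.1.2.1.192.168.0.128 "00 0C 29 DC 5B C2 "',
    'iso.3.6.1.2.1.4.22.1.2.1.192.168.0.254 "00 50 56 EA F9 67 "',
]
mac = "00 0C 29 DC 5B C2"

# Select lines containing the MAC, take the first whitespace-separated
# token (the OID), split it on dots, and join the last four elements.
matched = [
    ".".join(line.split()[0].split(".")[-4:])
    for line in arp_stdout_lines
    if mac in line
]
print(matched)  # ['192.168.0.128']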
A more systematic approach would be to use ansible.netcommon.cli_parse and create a library of templates.
Considering that you want network-related filtering, a slightly different approach would be to use ansible_facts to get the IP address corresponding to a given MAC address:
- set_fact:
    matched: "{{ ansible_facts[item]['ipv4']['address'] }}"
  loop: "{{ ansible_interfaces }}"
  when:
    - ansible_facts[item]['macaddress'] is defined
    - ansible_facts[item]['macaddress'] == "00:0c:29:dc:5b:c2"
Note that this requires gather_facts: true or running the setup module first.

Prometheus: the "for" is breaking my test

I have this alert, which I'm trying to cover with unit tests:
- alert: Alert name
  annotations:
    summary: 'Summary.'
    book: "https://link.com"
  expr: sum(increase(app_receiver{app="app_name", exception="exception"}[1m])) > 0
  for: 5m
  labels:
    severity: 'critical'
    team: 'myteam'
This test scenario fails every time unless for: 5m is commented out in the rule; in that case it succeeds.
rule_files:
  - ./../../rules/test.yml
evaluation_interval: 1m
tests:
  - interval: 1m
    input_series:
      - series: 'app_receiver{app="app_name", exception="exception"}'
        values: '0 0 0 0 0 0 0 0 0 0'
      - series: 'app_receiver{app="app_name", exception="exception"}'
        values: '0 0 0 0 0 10 20 40 60 80'
    alert_rule_test:
      - alertname: Alert name
        eval_time: 5m
        exp_alerts:
          - exp_labels:
              severity: 'critical'
              team: 'myteam'
            exp_annotations:
              summary: 'Summary.'
              book: "https://link.com"
The result of this test:
FAILED:
  alertname: Alert name, time: 5m,
  exp: "[Labels:{alertname=\"Alert name\", severity=\"critical\", team=\"myteam\"} Annotations:{book=\"https://link.com\", summary=\"Summary.\"}]",
  got: "[]"
Can someone please help me fix this test and explain the reason for the failure?
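A likely cause, judging from Prometheus's documented for semantics (an inference, since exact timing depends on your setup): with for: 5m the alert sits in the pending state for 5 minutes after the expression first evaluates true, and only then becomes firing. Here the second series only starts increasing at the 5m sample, so the expression first returns a value above 0 around 5m, and the alert cannot be firing before roughly 10m. At eval_time: 5m there are no firing alerts yet, hence got: "[]"; removing for: 5m lets the alert fire immediately, which is why the test then passes. A sketch of an adjusted test (the eval time and extended samples are illustrative):

tests:
  - interval: 1m
    input_series:
      # A single series -- two input_series with identical label sets may be
      # rejected as duplicates. Samples are extended so the series still has
      # data at the later evaluation time.
      - series: 'app_receiver{app="app_name", exception="exception"}'
        values: '0 0 0 0 0 10 20 40 60 80 100 120'
    alert_rule_test:
      - alertname: Alert name
        # first true evaluation (~5m) + for: 5m => firing from ~10m onward
        eval_time: 11m
        exp_alerts:
          - exp_labels:
              severity: 'critical'
              team: 'myteam'
            exp_annotations:
              summary: 'Summary.'
              book: "https://link.com"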

How to pick which column to unstack a dataframe on

I have a data set that looks like:
UniqueID  CategoryType  Value
A         Cat1          apple
A         Cat2          banana
B         Cat1          orange
C         Cat2          news
D         Cat1          orange
D         Cat2          blue
I'd like it to look like:
UniqueID  Cat1    Cat2
A         apple   banana
B         orange
C                 news
D         orange  blue
I've tried using unstack, but can't get the right index set or something.
Thanks
The bulk of the work is done with
df.set_index(['UniqueID', 'CategoryType']).Value.unstack(fill_value='')
CategoryType    Cat1    Cat2
UniqueID
A              apple  banana
B             orange
C                       news
D             orange    blue
We can get the rest of the formatting with
df.set_index(['UniqueID', 'CategoryType']).Value.unstack(fill_value='') \
  .rename_axis(None, axis=1).reset_index()
  UniqueID    Cat1    Cat2
0        A   apple  banana
1        B  orange
2        C            news
3        D  orange    blue
You can use pivot.
Edit: with some more editing and inspiration from @piRsquared's answer,
df.pivot(index='UniqueID', columns='CategoryType', values='Value') \
  .replace({None: ''}).rename_axis(None, axis=1).reset_index()
  UniqueID    Cat1    Cat2
0        A   apple  banana
1        B  orange
2        C            news
3        D  orange    blue
You can use pivot_table with fill_value
df.pivot_table(index='UniqueID', columns='CategoryType', values='Value',
               aggfunc='sum', fill_value='')
CategoryType    Cat1    Cat2
UniqueID
A              apple  banana
B             orange
C                       news
D             orange    blue
pivot works just fine:
df = df.pivot(index = "UniqueID", columns = "CategoryType", values = "Value")
Took me so long to think outside the box :)
# Start from an empty frame: one row per UniqueID, one column per CategoryType
index = df.UniqueID.unique()
columns = df.CategoryType.unique()
df1 = pd.DataFrame(index=index, columns=columns)
# Key every value by the concatenation of UniqueID and CategoryType
df['match'] = df.UniqueID.astype(str) + df.CategoryType
A = dict(zip(df.match, df.Value))
# Rebuild each cell's key (row label + column name) and look it up
df1.apply(lambda x: (x.index + x.name)).applymap(A.get).replace({None: ''})
Out[406]:
     Cat1    Cat2
A   apple  banana
B  orange
C            news
D  orange    blue

Set variable with new line and tab

I have a formatting issue that is getting passed to a document. A sample companyList is below; it varies each time the script is run:
companyList = ["Apple - Seattle Washington (800) 555-5555", "Microsoft - Tampa Florida (800) 555-1234", "Samsung - Tokyo Japan (01) 555 123-1234"]
Right now the line of code to format this text is:
companyInfo = "\n\n".join(companyList)
and companyInfo outputs like this:
Apple - Seattle Washington (800) 555-5555 Microsoft - Tampa Florida (800) 555-1234 Samsung - Tokyo Japan (01) 555 123-1234
How can I rewrite this to format like the following (note each line is tabbed over once)?
	Apple - Seattle Washington (800) 555-5555
	Microsoft - Tampa Florida (800) 555-1234
	Samsung - Tokyo Japan (01) 555 123-1234
Many thanks in advance
You can do it like this:
companyInfo = "\n\n".join("\t%s" % x for x in companyList)
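The same thing with an f-string, if you prefer newer syntax (a minimal runnable sketch with the sample data from the question):

companyList = [
    "Apple - Seattle Washington (800) 555-5555",
    "Microsoft - Tampa Florida (800) 555-1234",
    "Samsung - Tokyo Japan (01) 555 123-1234",
]

# Tab-indent each entry, then separate entries with a blank line.
companyInfo = "\n\n".join(f"\t{x}" for x in companyList)
print(companyInfo)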