ansible - need output in csv in multiple columns - amazon-web-services

I have playbook as below:
tasks:
  - name: To Get list of all ACM
    aws_acm_info:
      region: "{{ region }}"
    register: acm
  - name: Cname Names
    set_fact:
      cname: "{{ acm | json_query(jmesquery) }}"
    vars:
      jmesquery: 'certificates[*].domain_validation_options[].resource_record.name'
  - name: Cname Values
    set_fact:
      value: "{{ acm | json_query(jmesquery) }}"
    vars:
      jmesquery: 'certificates[*].domain_validation_options[].resource_record.value'
  - name: set file header
    shell: echo 'cname, value'> {{ path }}
    run_once: true
  - name: CSV - Write information into .csv file
    lineinfile:
      insertafter: ','
      dest: "{{ path }}"
      line: "{{ item }}"
    with_items:
      - "{{ cname }}"
      - "{{ value }}"
I am getting the output in a single column with the cname values, but I need the corresponding value in a second column.
The required output format would be a two-column CSV: cname in the first column and value in the second.
I really appreciate any help you can provide.

Given the data below for testing
acm:
  certificates:
    - domain_validation_options:
        - resource_record: {name: aaa, value: 1}
    - domain_validation_options:
        - resource_record: {name: bbb, value: 2}
    - domain_validation_options:
        - resource_record: {name: ccc, value: 3}
Get the name/value pairs in a single query
_query: 'certificates[*].domain_validation_options[].[resource_record.name,
         resource_record.value]'
csv: "{{ acm|json_query(_query)|map('join', ',')|list }}"
give
csv:
  - aaa,1
  - bbb,2
  - ccc,3
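As a side note: if you keep the two separate facts cname and value from the original playbook, a sketch of the same result is to zip the lists together (this assumes both lists have the same length and order):
- name: Build the CSV lines from the two existing lists (alternative sketch)
  set_fact:
    # zip pairs the n-th cname with the n-th value; join turns each pair into "name,value"
    csv: "{{ cname | zip(value) | map('join', ',') | list }}"
The single json_query above avoids keeping two facts in sync, which is why it is used here.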
You can write the lines into a file
- lineinfile:
    create: true
    dest: /tmp/cname.csv
    line: "{{ item }}"
  loop: "{{ csv }}"
gives
shell> cat /tmp/cname.csv
aaa,1
bbb,2
ccc,3
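If you also want the cname,value header from the original playbook, one sketch writes the header and all rows with a single copy task instead of shell plus lineinfile (the path /tmp/cname.csv is just the example path used above):
- name: Write header and rows in one task (sketch)
  copy:
    dest: /tmp/cname.csv
    # the literal block renders the header followed by one csv item per line
    content: |
      cname,value
      {{ csv | join('\n') }}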
Example of a complete playbook
- hosts: localhost
  vars:
    acm:
      certificates:
        - domain_validation_options:
            - resource_record: {name: aaa, value: 1}
        - domain_validation_options:
            - resource_record: {name: bbb, value: 2}
        - domain_validation_options:
            - resource_record: {name: ccc, value: 3}
    _query: 'certificates[*].domain_validation_options[].[resource_record.name,
             resource_record.value]'
    csv: "{{ acm|json_query(_query)|map('join', ',')|list }}"
  tasks:
    - lineinfile:
        create: true
        dest: /tmp/cname.csv
        line: "{{ item }}"
      loop: "{{ csv }}"
Read the file
- hosts: localhost
  tasks:
    - community.general.read_csv:
        path: /tmp/cname.csv
        fieldnames: cname,value
        delimiter: ','
      register: cname
    - debug:
        var: cname.list
gives
cname.list:
  - {cname: aaa, value: '1'}
  - {cname: bbb, value: '2'}
  - {cname: ccc, value: '3'}
You can see that the values are strings now. If you want to keep values as integers store the data in YAML format
- hosts: localhost
  vars:
    acm:
      certificates:
        - domain_validation_options:
            - resource_record: {name: aaa, value: 1}
        - domain_validation_options:
            - resource_record: {name: bbb, value: 2}
        - domain_validation_options:
            - resource_record: {name: ccc, value: 3}
    _query: 'certificates[*].domain_validation_options[].[resource_record.name,
             resource_record.value]'
    csv: "{{ acm|json_query(_query)|map('join', ': ')|list }}"
  tasks:
    - lineinfile:
        create: true
        dest: /tmp/cname.yml
        line: "{{ item }}"
      loop: "{{ csv }}"
gives
shell> cat /tmp/cname.yml
aaa: 1
bbb: 2
ccc: 3
Read the file
- hosts: localhost
  tasks:
    - include_vars:
        file: /tmp/cname.yml
        name: cname_dict
    - debug:
        var: cname_dict
gives
cname_dict:
  aaa: 1
  bbb: 2
  ccc: 3
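Should you later need the data as a list of mappings again, the dict2items filter reverses the conversion; a small sketch (the field names cname and value simply mirror the CSV header used earlier):
- debug:
    # turns {aaa: 1, ...} back into [{cname: aaa, value: 1}, ...]
    msg: "{{ cname_dict | dict2items(key_name='cname', value_name='value') }}"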

Related

Ansible regexp that will change only found string instead of whole line

I'm trying to set up a playbook that will set up some Docker services. I'm trying to pass some variables obtained via vars_prompt to a docker-compose file.
replace:
  path: files/docker-compose.yaml
  regexp: 'SERVER_IP'
  replace: '{{ ip_address }}'
Destination file
environment:
  (...)
  SERVER_IP: 'SERVICE_IP_ADDR'
  (...)
Right now such a task replaces the whole line with the ip_address variable.
Expected result:
environment:
  (...)
  SERVER_IP: ip_address
  (...)
Instead of the replace or lineinfile modules, a more robust solution is to update the dictionary. For example, given the file
shell> cat files/docker-compose.yaml
environment:
  k1: v1
  SERVER_IP: 'SERVICE_IP_ADDR'
  k3: v3
and the below variables
dc_file: "{{ playbook_dir }}/files/docker-compose.yaml"
ip_address: 10.1.0.10
declare the dictionary dc_update
dc_update:
  environment:
    SERVER_IP: '{{ ip_address }}'
Include the content of the file into the dictionary dc
- include_vars:
    file: "{{ dc_file }}"
    name: dc
and declare the below variable
docker_compose: "{{ dc|combine(dc_update, recursive=true) }}"
This gives the updated configuration
docker_compose:
  environment:
    SERVER_IP: 10.1.0.10
    k1: v1
    k3: v3
Write the updated configuration into the file
- copy:
    dest: "{{ dc_file }}"
    content: |
      {{ docker_compose|to_nice_yaml(indent=2) }}
Running the play with the --diff option gives
TASK [copy] *******************************************************************
--- before: /export/scratch/tmp7/test-176/files/docker-compose.yaml
+++ after: /home/admin/.ansible/tmp/ansible-local-667065tpus_pfk/tmpnkohmmiz
@@ -1,4 +1,4 @@
 environment:
+  SERVER_IP: 10.1.0.10
   k1: v1
-  SERVER_IP: 'SERVICE_IP_ADDR'
   k3: v3
changed: [localhost]
shell> cat files/docker-compose.yaml
environment:
  SERVER_IP: 10.1.0.10
  k1: v1
  k3: v3
Notes:
Example of a complete playbook for testing
- hosts: localhost
  vars:
    dc_file: "{{ playbook_dir }}/files/docker-compose.yaml"
    ip_address: 10.1.0.10
    dc_update:
      environment:
        SERVER_IP: '{{ ip_address }}'
    docker_compose: "{{ dc|combine(dc_update, recursive=true) }}"
  tasks:
    - include_vars:
        file: "{{ dc_file }}"
        name: dc
    - debug:
        var: docker_compose
    - copy:
        dest: "{{ dc_file }}"
        content: |
          {{ docker_compose|to_nice_yaml(indent=2) }}
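To reproduce the diff shown above, run the playbook with the --diff option (the filename pb.yml is only an assumed example):
shell> ansible-playbook pb.yml --diff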
You can also use the replace and lineinfile modules. See the examples below for how to get the expected result by matching the key SERVER_IP, the value SERVICE_IP_ADDR, or both. All options below give the same result
--- before: /export/scratch/tmp7/test-176/files/docker-compose.yaml (content)
+++ after: /export/scratch/tmp7/test-176/files/docker-compose.yaml (content)
@@ -1,4 +1,4 @@
 environment:
   k1: v1
-  SERVER_IP: 'SERVICE_IP_ADDR'
+  SERVER_IP: '10.1.0.10'
   k3: v3
changed: [localhost]
- hosts: localhost
  vars:
    dc_file: "{{ playbook_dir }}/files/docker-compose.yaml"
    ip_address: 10.1.0.10
  tasks:
    - replace:
        path: "{{ dc_file }}"
        regexp: SERVICE_IP_ADDR
        replace: "{{ ip_address }}"
      when: replace_by_value|d(false)|bool
    - replace:
        path: "{{ dc_file }}"
        regexp: "SERVER_IP:.*"
        replace: "SERVER_IP: '{{ ip_address }}'"
      when: replace_by_key|d(false)|bool
    - replace:
        path: "{{ dc_file }}"
        regexp: "SERVER_IP: \\'SERVICE_IP_ADDR\\'"
        replace: "SERVER_IP: '{{ ip_address }}'"
      when: replace_by_kv|d(false)|bool
    - lineinfile:
        backrefs: true
        path: "{{ dc_file }}"
        regexp: "^(.*)\\'SERVICE_IP_ADDR\\'$"
        line: "\\1'{{ ip_address }}'"
      when: lineinfile_by_value|d(false)|bool
    - lineinfile:
        backrefs: true
        path: "{{ dc_file }}"
        regexp: "^(\\s*)SERVER_IP:.*$"
        line: "\\1SERVER_IP: '{{ ip_address }}'"
      when: lineinfile_by_key|d(false)|bool
    - lineinfile:
        backrefs: true
        path: "{{ dc_file }}"
        regexp: "^(\\s*)SERVER_IP: \\'SERVICE_IP_ADDR\\'$"
        line: "\\1SERVER_IP: '{{ ip_address }}'"
      when: lineinfile_by_kv|d(false)|bool
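Select one of the variants by enabling its switch on the command line, e.g. for the key-based replace (the filename pb.yml is only an assumed example):
shell> ansible-playbook pb.yml -e replace_by_key=true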
After some tests I've found a solution for that.
replace:
  path: files/docker-compose.yaml
  regexp: (\s+)\'SERVICE_IP_ADDR\'(\s+.*)?$
  replace: \1'{{ ip_address }}'\2

How to concatenate two lists of dictonaries in Ansible

I have two list of dicts:
dev_users:
  - name: cs3141
    key:
      cs513e_key1.pub
      cs513e_key2.pub
  - name: ab1234
    key:
      ab1234.pub
  - name: cd5678
    key:
      ab1234.pub
and
sys_admin_users:
  - name: xy3141
    key:
      xy3141.pub
  - name: cd1234
    key:
      cd1234.pub
  - name: ef5678
    key:
      ef5678.pub
When I try to concatenate them:
- set_fact: users= "{{ dev_users + sys_admin_users }}"
I get this error:
ERROR! failed to combine variables, expected dicts but got a 'dict' and a 'AnsibleSequence':
{}
[{"set_fact": "users= \"{{ dev_users + sys_admin_users }}\""}]
How can I concatenate these two lists?
The problem was that I was trying to concatenate the two lists in the variables section, as opposed to the tasks section. Tadeboro in #ansible gave me this code that worked:
---
- hosts: localhost
  gather_facts: false
  vars:
    dev_users:
      - name: cs3141
        key:
          cs513e_key1.pub
          cs513e_key2.pub
      - name: ab1234
        key:
          ab1234.pub
      - name: cd5678
        key:
          ab1234.pub
    sys_admin_users:
      - name: xy3141
        key:
          xy3141.pub
      - name: cd1234
        key:
          cd1234.pub
      - name: ef5678
        key:
          ef5678.pub
  tasks:
    - name: Test
      set_fact:
        users: "{{ dev_users + sys_admin_users }}"
This command works fine: ansible-playbook -v x.yaml
Here is another solution that iteratively concatenates a list of lists. I like it better because it is more general. It only works in more recent versions of Ansible, because there was a bug in version 2.5 which took some time to fix.
---
# ansible-playbook -v manyx.yaml
- hosts: localhost
  gather_facts: false
  vars:
    dev_users:
      - name: cs3141
        key:
          cs513e_key1.pub
          cs513e_key2.pub
      - name: ab1234
        key:
          ab1234.pub
      - name: cd5678
        key:
          ab1234.pub
    sys_admin_users:
      - name: xy3141
        key:
          xy3141.pub
      - name: cd1234
        key:
          cd1234.pub
      - name: ef5678
        key:
          ef5678.pub
    other_users:
      - name: fe9876
        key:
          fe9876.pub
    list_of_users_list:
      - "{{ dev_users }}"
      - "{{ sys_admin_users }}"
      - "{{ other_users }}"
    all_users: []
  tasks:
    - name: Test
      set_fact:
        all_users: "{{ item + all_users }}"
      loop: "{{ list_of_users_list }}"
    - name: print all_users
      debug:
        msg: "{{ item }}"
      loop: "{{ all_users }}"

Convert a list to dictionary - Ansible YAML

I have a playbook, where I get an error message
fatal: [localhost]: FAILED! => {"ansible_facts": {"tasks": {}}, "ansible_included_var_files": [], "changed": false, "message": "/home/user/invoke_api/automation/tmp/task.yml must be stored as a dictionary/hash"}
task.yml
The file task.yml is dynamically created and continuously filtered from another source to produce the output below.
- key: gTest101
  value:
    Comments: FWP - Testing this
    IP: 10.1.2.3
    Name: gTest101
- key: gTest102
  value:
    Comments: FWP - Applying this
    IP: 10.1.2.4
    Name: gTest102
Question: How do I convert the list in my task.yml to a dictionary? What's the code to convert from a list to a dictionary?
playbook.yml
---
- name: Global Objects
  hosts: check_point
  connection: httpapi
  gather_facts: False
  vars_files:
    - 'credentials/my_var.yml'
    - 'credentials/login.yml'
  tasks:
    - name: read-new-tmp-file
      include_vars:
        file: tmp/task.yml
        name: tasks
      register: new_host
    - name: add-host-object-to-group
      check_point.mgmt.cp_mgmt_host:
        name: "{{ item.value.Name | quote }}"
        ip_address: "{{ item.value.IP | quote }}"
        comments: "{{ item.value.Comments }}"
        groups: gTest1A
        state: present
        auto_publish_session: yes
      loop: "{{ new_host.dict | dict2items }}"
      delegate_to: Global
      ignore_errors: yes
Ansible core 2.9.13
python version = 2.7.17
Q: "How do I convert the list in my task.yml to a dictionary?"
A: Use items2dict. For example, read the file and create the list
- set_fact:
    l1: "{{ lookup('file', 'task.yml')|from_yaml }}"
gives
l1:
  - key: gTest101
    value:
      Comments: FWP - Testing this
      IP: 10.1.2.3
      Name: gTest101
  - key: gTest102
    value:
      Comments: FWP - Applying this
      IP: 10.1.2.4
      Name: gTest102
Then, the task below
- set_fact:
    d1: "{{ l1|items2dict }}"
creates the dictionary
d1:
  gTest101:
    Comments: FWP - Testing this
    IP: 10.1.2.3
    Name: gTest101
  gTest102:
    Comments: FWP - Applying this
    IP: 10.1.2.4
    Name: gTest102
A vars file is a YAML dictionary file, so your list has to be the value of a variable:
my_vars:
  - Comments: FWP - Testing this
    IP: 10.1.2.3
    Name: gTest101
  - Comments: FWP - Applying this
    IP: 10.1.2.4
    Name: gTest102
and you don't need to register the include_vars task. Just loop over the list variable name (here my_vars).
---
- name: Global Objects
  hosts: check_point
  connection: httpapi
  gather_facts: False
  vars_files:
    - 'credentials/my_var.yml'
    - 'credentials/login.yml'
  tasks:
    - name: read-new-tmp-file
      include_vars:
        file: tmp/task.yml
    - name: add-host-object-to-group
      check_point.mgmt.cp_mgmt_host:
        name: "{{ item.Name }}"
        ip_address: "{{ item.IP }}"
        comments: "{{ item.Comments }}"
        groups: gTest1A
        state: present
        auto_publish_session: yes
      loop: "{{ my_vars }}"
      delegate_to: Global
      ignore_errors: yes

Ansible - remove user from group x if already in other group

I am looking for the easiest way to remove users from group x when they are already in group google-sudo. I store users in group vars in this kind of list and dictionary combination:
user_account:
  - name: jenny
    authorized_keys:
      - jenny_01
    groups:
      - "{% if not googlesudo.stat.exists %}sudo{% else %}google-sudo{% endif %}"
  - name: jerry
    authorized_keys:
      - jerry_01
    groups:
      - "{% if not googlesudo.stat.exists %}sudo{% else %}google-sudo{% endif %}"
...
These are tasks I already created:
- name: Check if google-sudo file exists
  stat:
    path: /etc/sudoers.d/google_sudo
  register: googlesudo
  tags:
    - add_user_group
    - remove_from_x
- debug: var=googlesudo verbosity=2
  tags:
    - add_user_group
    - remove_from_x
- debug:
    msg: "User account to create: {{ item.name }}"
  with_items: "{{ user_account }}"
- name: "Creating user"
  user:
    name: "{{ item.name }}"
    group: users
    shell: /bin/bash
  with_items: "{{ user_account }}"
- name: Add user to additional groups
  user:
    name: "{{ item.0.name }}"
    groups: "{{ item.1 }}"
    append: yes
  with_subelements:
    - "{{ user_account }}"
    - groups
  tags:
    - remove_from_x
- name: Check if user already in google-sudo
  command: "groups {{ item.name }}"
  with_items:
    - "{{ user_account }}"
  register: root_users
  tags:
    - remove_from_x
- name: View root users
  debug:
    msg: "{{ item }}"
    verbosity: 2
  with_items:
    - "{{ root_users }}"
  tags:
    - remove_from_x
- name: Save state
  set_fact:
    is_in_googlesudo: "{{ root_users.results.0.stdout_lines }}"
  tags:
    - remove_from_x
- name: List
  debug: msg='{{ user_account |json_query("[?groups==`google-sudo`]")}}'
  tags:
    - remove_from_x
- name: Remove from x group
  shell: "deluser {{ item.name }} x"
  with_items:
    - "{{ user_account }}"
  when: "'x in is_in_googlesudo' and 'google-sudo in is_in_googlesudo'"
  tags:
    - remove_from_x
I was testing json_query to extract the user name if the user is in the google-sudo and x groups, but without success. I tried to list users when the group is defined, but I get empty output, using this:
msg='{{ user_account |json_query("[?groups==`google-sudo`]")}}'
I wonder whether there is any shorter way to remove users from group x after checking on the server (using e.g. the groups command), or in user_account -> groups, whether they are already in google-sudo.
Probably there is some nice and elegant way to write this code; I will appreciate any ideas on how to deal with it.
Let's simplify the data, e.g.
vars:
  my_group: "{{ googlesudo.stat.exists|ternary('google-sudo', 'sudo') }}"
  user_account:
    - name: jenny
      groups: "{{ my_group }}"
    - name: jerry
      groups: "{{ my_group }}"
The code works as expected, e.g.
- stat:
    path: /tmp/google_sudo
  register: googlesudo
- debug:
    var: user_account
- debug:
    msg: '{{ user_account|json_query("[?groups==`google-sudo`].name") }}'
gives if /tmp/google_sudo exists
user_account:
  - groups: google-sudo
    name: jenny
  - groups: google-sudo
    name: jerry
msg:
  - jenny
  - jerry
otherwise
user_account:
  - groups: sudo
    name: jenny
  - groups: sudo
    name: jerry
msg: []
Q: "Remove user from group x if already in another group."
A: Use getent to get the list of users in a group. For example, to list users in the group sudo
- getent:
    database: group
- debug:
    var: getent_group.sudo.2
Test whether a user is a member of the group sudo, e.g.
- debug:
    msg: "Remove user {{ item.name }} from group x"
  loop: "{{ user_account }}"
  when: item.name in getent_group.sudo.2
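Putting the pieces together, a sketch of the removal itself, reusing deluser from the question (this assumes a Debian-style deluser, privilege escalation, and that getent has been run as above):
- name: Remove user from group x if already in google-sudo (sketch)
  # command is not idempotent; add a changed_when or switch to the user module for clean reporting
  command: "deluser {{ item.name }} x"
  become: true
  loop: "{{ user_account }}"
  when:
    - "'x' in getent_group"
    - "'google-sudo' in getent_group"
    - item.name in getent_group['x'].2
    - item.name in getent_group['google-sudo'].2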
Q: "Have groups as a list. (Probably I should use contains function.)"
A: Yes. Use contains, e.g. given the lists
vars:
  user_account:
    - name: jenny
      groups: [google-sudo, group2]
    - name: jerry
      groups: [google-sudo, group3]
the task
- debug:
    msg: "{{ user_account|json_query('[?contains(groups, `group2`)].name') }}"
gives
msg:
  - jenny

Using Ansible playbook to create instances in google cloud (gcp)

I'm using the following code
- name: create a instance
  gcp_compute_instance:
    name: test_object
    machine_type: n1-standard-1
    disks:
      - auto_delete: 'false'
        boot: 'true'
        source: "{{ disk }}"
    metadata:
      startup-script-url:
      cost-center:
    labels:
      environment: production
    network_interfaces:
      - network: "{{ network }}"
        access_configs:
          - name: External NAT
            nat_ip: "{{ address }}"
            type: ONE_TO_ONE_NAT
    zone: us-central1-a
    project: test-12y38912634812648
    auth_kind: serviceaccount
    service_account_file: "~/programming/gcloud/test-1283891264812-8h3981f3.json"
    state: present
and I saved the file as create2.yml.
Then I run ansible-playbook create2.yml and get the following error:
ERROR! 'gcp_compute_instance' is not a valid attribute for a Play
The error appears to be in '/Users/xxx/programming/gcloud-test/create2.yml': line 1, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: create a instance
^ here
I followed the documentation. What am I doing wrong and how do I fix it?
You haven't created a playbook, you've just created a file with a task, which won't run on its own as you've discovered.
A playbook is a collection of tasks. You should start with the playbook documentation:
Playbook Documentation
For GCP, here's a working example to create a network, external IP, disk and VM.
- name: 'Deploy gcp vm'
  hosts: localhost
  connection: local
  become: false
  gather_facts: no
  vars:
    gcp_project: "671245944514"
    gcp_cred_kind: "serviceaccount"
    gcp_cred_file: "/tmp/test-project.json"
    gcp_region: "us-central1"
    gcp_zone: "us-central1-a"
  # Roles & Tasks
  tasks:
    - name: create a disk
      gcp_compute_disk:
        name: disk-instance
        size_gb: 50
        source_image: projects/ubuntu-os-cloud/global/images/family/ubuntu-2004-lts
        zone: "{{ gcp_zone }}"
        project: "{{ gcp_project }}"
        auth_kind: "{{ gcp_cred_kind }}"
        service_account_file: "{{ gcp_cred_file }}"
        state: present
      register: disk
    - name: create a network
      gcp_compute_network:
        name: network-instance
        project: "{{ gcp_project }}"
        auth_kind: "{{ gcp_cred_kind }}"
        service_account_file: "{{ gcp_cred_file }}"
        state: present
      register: network
    - name: create a address
      gcp_compute_address:
        name: address-instance
        region: "{{ gcp_region }}"
        project: "{{ gcp_project }}"
        auth_kind: "{{ gcp_cred_kind }}"
        service_account_file: "{{ gcp_cred_file }}"
        state: present
      register: address
    - name: create a instance
      gcp_compute_instance:
        name: vm-instance
        project: "{{ gcp_project }}"
        zone: "{{ gcp_zone }}"
        machine_type: n1-standard-1
        disks:
          - auto_delete: 'true'
            boot: 'true'
            source: "{{ disk }}"
        labels:
          environment: testing
        network_interfaces:
          - network: "{{ network }}"
            access_configs:
              - name: External NAT
                nat_ip: "{{ address }}"
                type: ONE_TO_ONE_NAT
        auth_kind: serviceaccount
        service_account_file: "{{ gcp_cred_file }}"
        state: present
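Assuming the play is saved as gcp_vm.yml (an assumed filename), the gcp_* modules are available, and gcp_cred_file points at a valid service-account JSON file, it can be run with
shell> ansible-playbook gcp_vm.yml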