Ansible regexp that will change only found string instead of whole line - regex

I'm trying to set up a playbook that will set up some Docker services. I want to pass some variables obtained via vars_prompt into a docker-compose file.
replace:
  path: files/docker-compose.yaml
  regexp: 'SERVER_IP'
  replace: '{{ ip_address }}'
Destination file:
environment:
  (...)
  SERVER_IP: 'SERVICE_IP_ADDR'
  (...)
Right now such a task replaces the whole line with the ip_address variable.
Expected result:
environment:
  (...)
  SERVER_IP: ip_address
  (...)

Instead of the modules replace or lineinfile, a more robust solution would be updating the dictionary. For example, given the file
shell> cat files/docker-compose.yaml
environment:
  k1: v1
  SERVER_IP: 'SERVICE_IP_ADDR'
  k3: v3
and the below variables
dc_file: "{{ playbook_dir }}/files/docker-compose.yaml"
ip_address: 10.1.0.10
declare the dictionary dc_update
dc_update:
  environment:
    SERVER_IP: '{{ ip_address }}'
Include the content of the file into the dictionary dc
- include_vars:
    file: "{{ dc_file }}"
    name: dc
and declare the below variable
docker_compose: "{{ dc|combine(dc_update, recursive=true) }}"
This gives the updated configuration
docker_compose:
  environment:
    SERVER_IP: 10.1.0.10
    k1: v1
    k3: v3
Write the updated configuration back to the file
- copy:
    dest: "{{ dc_file }}"
    content: |
      {{ docker_compose|to_nice_yaml(indent=2) }}
Running the play with the --diff option gives
TASK [copy] *******************************************************************
--- before: /export/scratch/tmp7/test-176/files/docker-compose.yaml
+++ after: /home/admin/.ansible/tmp/ansible-local-667065tpus_pfk/tmpnkohmmiz
@@ -1,4 +1,4 @@
 environment:
+  SERVER_IP: 10.1.0.10
   k1: v1
-  SERVER_IP: 'SERVICE_IP_ADDR'
   k3: v3
changed: [localhost]
shell> cat files/docker-compose.yaml
environment:
  SERVER_IP: 10.1.0.10
  k1: v1
  k3: v3
Notes:
Example of a complete playbook for testing
- hosts: localhost
  vars:
    dc_file: "{{ playbook_dir }}/files/docker-compose.yaml"
    ip_address: 10.1.0.10
    dc_update:
      environment:
        SERVER_IP: '{{ ip_address }}'
    docker_compose: "{{ dc|combine(dc_update, recursive=true) }}"
  tasks:
    - include_vars:
        file: "{{ dc_file }}"
        name: dc
    - debug:
        var: docker_compose
    - copy:
        dest: "{{ dc_file }}"
        content: |
          {{ docker_compose|to_nice_yaml(indent=2) }}
You can also use the modules replace and lineinfile. See the examples below for how to get the expected result by matching the key SERVER_IP, the value SERVICE_IP_ADDR, or both. All of the options below give the same result:
--- before: /export/scratch/tmp7/test-176/files/docker-compose.yaml (content)
+++ after: /export/scratch/tmp7/test-176/files/docker-compose.yaml (content)
@@ -1,4 +1,4 @@
 environment:
   k1: v1
-  SERVER_IP: 'SERVICE_IP_ADDR'
+  SERVER_IP: '10.1.0.10'
   k3: v3
changed: [localhost]
- hosts: localhost
  vars:
    dc_file: "{{ playbook_dir }}/files/docker-compose.yaml"
    ip_address: 10.1.0.10
  tasks:
    - replace:
        path: "{{ dc_file }}"
        regexp: SERVICE_IP_ADDR
        replace: "{{ ip_address }}"
      when: replace_by_value|d(false)|bool
    - replace:
        path: "{{ dc_file }}"
        regexp: "SERVER_IP:.*"
        replace: "SERVER_IP: '{{ ip_address }}'"
      when: replace_by_key|d(false)|bool
    - replace:
        path: "{{ dc_file }}"
        regexp: "SERVER_IP: \\'SERVICE_IP_ADDR\\'"
        replace: "SERVER_IP: '{{ ip_address }}'"
      when: replace_by_kv|d(false)|bool
    - lineinfile:
        backrefs: true
        path: "{{ dc_file }}"
        regexp: "^(.*)\\'SERVICE_IP_ADDR\\'$"
        line: "\\1'{{ ip_address }}'"
      when: lineinfile_by_value|d(false)|bool
    - lineinfile:
        backrefs: true
        path: "{{ dc_file }}"
        regexp: "^(\\s*)SERVER_IP:.*$"
        line: "\\1SERVER_IP: '{{ ip_address }}'"
      when: lineinfile_by_key|d(false)|bool
    - lineinfile:
        backrefs: true
        path: "{{ dc_file }}"
        regexp: "^(\\s*)SERVER_IP: \\'SERVICE_IP_ADDR\\'$"
        line: "\\1SERVER_IP: '{{ ip_address }}'"
      when: lineinfile_by_kv|d(false)|bool
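To try one of the options, enable its extra variable on the command line and review the change with --diff, for example (the playbook file name below is only an assumption):
shell> ansible-playbook playbook.yml -e replace_by_key=true --diff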

After some tests I've found a solution for that.
replace:
  path: files/docker-compose.yaml
  regexp: (\s+)\'SERVICE_IP_ADDR\'(\s+.*)?$
  replace: \1'{{ ip_address }}'\2

Related

ansible - need output in csv in multiple columns

I have a playbook as below:
tasks:
  - name: To Get list of all ACM
    aws_acm_info:
      region: "{{ region }}"
    register: acm
  - name: Cname Names
    set_fact:
      cname: "{{ acm | json_query(jmesquery) }}"
    vars:
      jmesquery: 'certificates[*].domain_validation_options[].resource_record.name'
  - name: Cname Values
    set_fact:
      value: "{{ acm | json_query(jmesquery) }}"
    vars:
      jmesquery: 'certificates[*].domain_validation_options[].resource_record.value'
  - name: set file header
    shell: echo 'cname, value'> {{ path }}
    run_once: true
  - name: CSV - Write information into .csv file
    lineinfile:
      insertafter: ','
      dest: "{{ path }}"
      line: "{{ item }}"
    with_items:
      - "{{ cname }}"
      - "{{ value }}"
I am getting the output in a single column (cname), but I need the values in a second column (value). The required output format would be a two-column CSV: cname in the first column and value in the second.
I really appreciate any help you can provide.
Given the data below for testing
acm:
  certificates:
    - domain_validation_options:
        - resource_record: {name: aaa, value: 1}
    - domain_validation_options:
        - resource_record: {name: bbb, value: 2}
    - domain_validation_options:
        - resource_record: {name: ccc, value: 3}
Get the name/value pairs in a single query
_query: 'certificates[*].domain_validation_options[].[resource_record.name,
  resource_record.value]'
csv: "{{ acm|json_query(_query)|map('join', ',')|list }}"
gives
csv:
  - aaa,1
  - bbb,2
  - ccc,3
You can write the lines into a file
- lineinfile:
    dest: /tmp/cname.csv
    line: "{{ item }}"
  loop: "{{ csv }}"
gives
shell> cat /tmp/cname.csv
aaa,1
bbb,2
ccc,3
Example of a complete playbook
- hosts: localhost
  vars:
    acm:
      certificates:
        - domain_validation_options:
            - resource_record: {name: aaa, value: 1}
        - domain_validation_options:
            - resource_record: {name: bbb, value: 2}
        - domain_validation_options:
            - resource_record: {name: ccc, value: 3}
    _query: 'certificates[*].domain_validation_options[].[resource_record.name,
      resource_record.value]'
    csv: "{{ acm|json_query(_query)|map('join', ',')|list }}"
  tasks:
    - lineinfile:
        dest: /tmp/cname.csv
        line: "{{ item }}"
      loop: "{{ csv }}"
Read the file
- hosts: localhost
  tasks:
    - community.general.read_csv:
        path: /tmp/cname.csv
        fieldnames: cname,value
        delimiter: ','
      register: cname
    - debug:
        var: cname.list
gives
cname.list:
  - {cname: aaa, value: '1'}
  - {cname: bbb, value: '2'}
  - {cname: ccc, value: '3'}
You can see that the values are strings now. If you want to keep the values as integers, store the data in YAML format instead
- hosts: localhost
  vars:
    acm:
      certificates:
        - domain_validation_options:
            - resource_record: {name: aaa, value: 1}
        - domain_validation_options:
            - resource_record: {name: bbb, value: 2}
        - domain_validation_options:
            - resource_record: {name: ccc, value: 3}
    _query: 'certificates[*].domain_validation_options[].[resource_record.name,
      resource_record.value]'
    csv: "{{ acm|json_query(_query)|map('join', ': ')|list }}"
  tasks:
    - lineinfile:
        dest: /tmp/cname.yml
        line: "{{ item }}"
      loop: "{{ csv }}"
gives
shell> cat /tmp/cname.yml
aaa: 1
bbb: 2
ccc: 3
Read the file
- hosts: localhost
  tasks:
    - include_vars:
        file: /tmp/cname.yml
        name: cname_dict
    - debug:
        var: cname_dict
gives
cname_dict:
  aaa: 1
  bbb: 2
  ccc: 3

Convert a list to dictionary - Ansible YAML

I have a playbook where I get an error message:
fatal: [localhost]: FAILED! => {"ansible_facts": {"tasks": {}}, "ansible_included_var_files": [], "changed": false, "message": "/home/user/invoke_api/automation/tmp/task.yml must be stored as a dictionary/hash"}
task.yml
The file task.yml is dynamically created and constantly filtered from another source to give the output below.
- key: gTest101
  value:
    Comments: FWP - Testing this
    IP: 10.1.2.3
    Name: gTest101
- key: gTest102
  value:
    Comments: FWP - Applying this
    IP: 10.1.2.4
    Name: gTest102
Question: How do I convert the list in my task.yml to a dictionary? What's the code to convert from a list to a dictionary?
playbook.yml
---
- name: Global Objects
  hosts: check_point
  connection: httpapi
  gather_facts: False
  vars_files:
    - 'credentials/my_var.yml'
    - 'credentials/login.yml'
  tasks:
    - name: read-new-tmp-file
      include_vars:
        file: tmp/task.yml
        name: tasks
      register: new_host
    - name: add-host-object-to-group
      check_point.mgmt.cp_mgmt_host:
        name: "{{ item.value.Name | quote }}"
        ip_address: "{{ item.value.IP | quote }}"
        comments: "{{ item.value.Comments }}"
        groups: gTest1A
        state: present
        auto_publish_session: yes
      loop: "{{ new_host.dict | dict2items }}"
      delegate_to: Global
      ignore_errors: yes
Ansible core 2.9.13
python version = 2.7.17
Q: "How do I convert the list in my task.yml to a dictionary?"
A: Use items2dict. For example, read the file and create the list
- set_fact:
    l1: "{{ lookup('file', 'task.yml')|from_yaml }}"
gives
l1:
  - key: gTest101
    value:
      Comments: FWP - Testing this
      IP: 10.1.2.3
      Name: gTest101
  - key: gTest102
    value:
      Comments: FWP - Applying this
      IP: 10.1.2.4
      Name: gTest102
Then, the task below
- set_fact:
    d1: "{{ l1|items2dict }}"
creates the dictionary
d1:
  gTest101:
    Comments: FWP - Testing this
    IP: 10.1.2.3
    Name: gTest101
  gTest102:
    Comments: FWP - Applying this
    IP: 10.1.2.4
    Name: gTest102
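As a minimal sketch (an assumption, not tested against a Check Point manager), the converted dictionary d1 could then drive the original task the same way the question's playbook does, by looping over it with dict2items; alternatively, the list l1 already has the key/value shape and could be looped directly.
# Sketch only: mirrors the loop from the question's playbook,
# but feeds it the dictionary d1 built with items2dict above.
- name: add-host-object-to-group
  check_point.mgmt.cp_mgmt_host:
    name: "{{ item.value.Name }}"
    ip_address: "{{ item.value.IP }}"
    comments: "{{ item.value.Comments }}"
    groups: gTest1A
    state: present
    auto_publish_session: yes
  loop: "{{ d1 | dict2items }}"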
A vars file is a YAML dictionary file, so your list has to be the value of a variable:
my_vars:
  - Comments: FWP - Testing this
    IP: 10.1.2.3
    Name: gTest101
  - Comments: FWP - Applying this
    IP: 10.1.2.4
    Name: gTest102
and you don't need to register the include_vars task. Just loop over the list variable name (here my_vars).
---
- name: Global Objects
  hosts: check_point
  connection: httpapi
  gather_facts: False
  vars_files:
    - 'credentials/my_var.yml'
    - 'credentials/login.yml'
  tasks:
    - name: read-new-tmp-file
      include_vars:
        file: tmp/task.yml
    - name: add-host-object-to-group
      check_point.mgmt.cp_mgmt_host:
        name: "{{ item.Name }}"
        ip_address: "{{ item.IP }}"
        comments: "{{ item.Comments }}"
        groups: gTest1A
        state: present
        auto_publish_session: yes
      loop: "{{ my_vars }}"
      delegate_to: Global
      ignore_errors: yes

Ansible - remove user from group x if already in other group

I am looking for the easiest way to remove users from group x when they are already in group google-sudo. I store users in group vars in this kind of list and dictionary combination:
user_account:
  - name: jenny
    authorized_keys:
      - jenny_01
    groups:
      - "{% if not googlesudo.stat.exists %}sudo{% else %}google-sudo{% endif %}"
  - name: jerry
    authorized_keys:
      - jerry_01
    groups:
      - "{% if not googlesudo.stat.exists %}sudo{% else %}google-sudo{% endif %}"
...
These are the tasks I have already created:
- name: Check if google-sudo file exists
  stat:
    path: /etc/sudoers.d/google_sudo
  register: googlesudo
  tags:
    - add_user_group
    - remove_from_x
- debug: var=googlesudo verbosity=2
  tags:
    - add_user_group
    - remove_from_x
- debug:
    msg: "User account to create: {{ item.name }}"
  with_items: "{{ user_account }}"
- name: "Creating user"
  user:
    name: "{{ item.name }}"
    group: users
    shell: /bin/bash
  with_items: "{{ user_account }}"
- name: Add user to additional groups
  user:
    name: "{{ item.0.name }}"
    groups: "{{ item.1 }}"
    append: yes
  with_subelements:
    - "{{ user_account }}"
    - groups
  tags:
    - remove_from_x
- name: Check if user already in google-sudo
  command: "groups {{ item.name }}"
  with_items:
    - "{{ user_account }}"
  register: root_users
  tags:
    - remove_from_x
- name: View root users
  debug:
    msg: "{{ item }}"
    verbosity: 2
  with_items:
    - "{{ root_users }}"
  tags:
    - remove_from_x
- name: Save state
  set_fact:
    is_in_googlesudo: "{{ root_users.results.0.stdout_lines }}"
  tags:
    - remove_from_x
- name: List
  debug: msg='{{ user_account |json_query("[?groups==`google-sudo`]")}}'
  tags:
    - remove_from_x
- name: Remove from x group
  shell: "deluser {{ item.name }} x"
  with_items:
    - "{{ user_account }}"
  when: "'x in is_in_googlesudo' and 'google-sudo in is_in_googlesudo'"
  tags:
    - remove_from_x
I was testing json_query to extract the user name if he is in the google-sudo and x groups, but without success. I tried to list users when the group is defined, but I get empty output using this:
msg='{{ user_account |json_query("[?groups==`google-sudo`]")}}'
I wonder if there is any shorter way to remove users from group x after checking, either on the server (using e.g. the groups command) or in user_account -> groups, whether a user is already in google-sudo.
Probably there is some nice and elegant way to write this code; I will appreciate any ideas on how to deal with it.
Let's simplify the data, e.g.
vars:
  my_group: "{{ googlesudo.stat.exists|ternary('google-sudo', 'sudo') }}"
  user_account:
    - name: jenny
      groups: "{{ my_group }}"
    - name: jerry
      groups: "{{ my_group }}"
The code works as expected, e.g.
- stat:
    path: /tmp/google_sudo
  register: googlesudo
- debug:
    var: user_account
- debug:
    msg: '{{ user_account|json_query("[?groups==`google-sudo`].name") }}'
gives if /tmp/google_sudo exists
user_account:
  - groups: google-sudo
    name: jenny
  - groups: google-sudo
    name: jerry
msg:
  - jenny
  - jerry
otherwise
user_account:
  - groups: sudo
    name: jenny
  - groups: sudo
    name: jerry
msg: []
Q: "Remove user from group x if already in another group."
A: Use getent to get the list of users in a group. For example, to list users in the group sudo
- getent:
    database: group
- debug:
    var: getent_group.sudo.2
Then condition the action on whether a user is a member of the group sudo, e.g.
- debug:
    msg: "Remove user {{ item.name }} from group x"
  loop: "{{ user_account }}"
  when: item.name in getent_group.sudo.2
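Swapping the debug for the actual removal gives a minimal sketch that reuses the deluser command from the question; it assumes the getent task above has run and that both groups exist on the host:
# Sketch only: remove the user from group x when he is already in google-sudo.
# Assumes getent_group contains both the 'google-sudo' and the 'x' group.
- name: Remove from x group
  command: "deluser {{ item.name }} x"
  loop: "{{ user_account }}"
  when:
    - item.name in getent_group['google-sudo'].2
    - item.name in getent_group['x'].2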
Q: "Have groups as a list. (Probably I should use contains function.)"
A: Yes. Use contains, e.g. given the lists
vars:
  user_account:
    - name: jenny
      groups: [google-sudo, group2]
    - name: jerry
      groups: [google-sudo, group3]
the task
- debug:
    msg: "{{ user_account|json_query('[?contains(groups, `group2`)].name') }}"
gives
msg:
  - jenny

Ansible 2.5.2, AWS Lambda, Want to create a template that works whether or not I have subnets or security groups assigned

I'm using Ansible 2.5.2 to try and automate deployment of Lambda into AWS.
How do I create the template so that the code still deploys if the security groups section is blank?
The playbook below results in the error EC2 Error Message: The subnet ID '' does not exist
---
- name: Deploy and Update Lambda
  hosts: localhost
  gather_facts: no
  connection: local
  tasks:
    - name: Lambda Deploy
      lambda:
        profile: "{{ profile }}"
        name: '{{ item.name }}'
        state: present # absent or present
        zip_file: '{{ item.zip_file }}'
        runtime: 'python2.7'
        role: '{{ item.role }}'
        handler: 'hello_python.my_handler'
        vpc_subnet_ids: '{{ item.vpc_subnet_ids }}'
        vpc_security_group_ids: '{{ item.vpc_security_group_ids }}'
        environment_variables: '{{ item.env_vars }}'
        tags: "{{ item.tags }}"
      with_items:
        - name: AnsibleTest
          role: 'arn:aws:iam::xxxxxxxxxx:role/Dev-LambdaRole'
          zip_file: hello-code.zip
          vpc_subnet_ids:
            # - subnet-080802e6660be744c
            # - subnet-00a8380a28ae0528c
            # - subnet-0723ad3c29a435ee0
          vpc_security_group_ids:
            - sg-0fa788da8ecd36fe5
          env_vars:
            key1: "first"
            key2: "second"
          tags:
            x: "133"
            xx: "1"
            project-name: "x"
            xxx: "Ansible"
            app-function: "automation"
            Name: "AnsibleTest"
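A minimal sketch of one common pattern (an assumption, not taken from the thread): drop empty parameters from the module call with default(omit, true), so the lambda module behaves as if they were never set when the subnet or security group list is blank.
# Sketch only (an assumption, not from the original post): default(omit, true)
# drops the parameter entirely when the item value is empty or undefined, so an
# empty subnet or security-group list never reaches the lambda module.
- name: Lambda Deploy
  lambda:
    profile: "{{ profile }}"
    name: "{{ item.name }}"
    state: present
    zip_file: "{{ item.zip_file }}"
    runtime: python2.7
    role: "{{ item.role }}"
    handler: hello_python.my_handler
    vpc_subnet_ids: "{{ item.vpc_subnet_ids | default(omit, true) }}"
    vpc_security_group_ids: "{{ item.vpc_security_group_ids | default(omit, true) }}"
    environment_variables: "{{ item.env_vars | default(omit, true) }}"
    tags: "{{ item.tags | default(omit, true) }}"
  with_items:
    - name: AnsibleTest
      role: 'arn:aws:iam::xxxxxxxxxx:role/Dev-LambdaRole'
      zip_file: hello-code.zip
      vpc_security_group_ids:
        - sg-0fa788da8ecd36fe5
      env_vars:
        key1: "first"
      tags:
        Name: "AnsibleTest"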

Iterating in a with_items loop ansible

Hi there, I am currently trying to get the hang of Ansible with AWS and I am really liking the flexibility of it so far.
Sadly, now I am starting to hit a brick wall with my experiments. I am trying to tag the volumes of instances that have the tag environment: test with the tags environment: test and backup: true. The playbook works as intended if I specify every single index of the array in the with_items loop. Here's my playbook so far:
---
- name: Tag the EBS Volumes
  hosts: tag_environment_test
  gather_facts: False
  tags: tag
  vars_files:
    - /etc/ansible/vars/aws.yml
  tasks:
    - name: Gather instance instance_ids
      local_action:
        module: ec2_remote_facts
        region: '{{ aws_region }}'
        filters:
          instance-state-name: running
          "tag:environment": test
      register: test_id
    - name: Gather volume information for instance
      local_action:
        module: ec2_vol
        region: '{{ aws_region }}'
        instance: "{{ item.id }}"
        state: list
      with_items:
        - "{{ test_id.instances }}"
      register: ec2_volumes
    - debug:
        var: ec2_volumes
    - name: Do some actual tagging
      local_action:
        module: ec2_tag
        region: '{{ aws_region }}'
        resource: "{{ item.id }}"
        args:
          tags:
            environment: test
            backup: true
      with_items:
        - "{{ ec2_volumes.results[0].volumes }}"
        # - "{{ ec2_volumes.results[1].volumes }}"
My question now: is it possible to iterate over the full array in ec2_volumes.results without having to specify every single element of the array? Like, for example, ec2_volumes.results[X].volumes with X=X+1, so that every time it goes through the loop it increments X by 1 until the end of the array.
Any input on the rest of the playbook would also be very appreciated (like I said, I'm still trying to get the hang of Ansible :)
Greetings,
Drees
You can iterate over your list of results:
- name: Do some actual tagging
  delegate_to: localhost
  ec2_tag:
    region: '{{ aws_region }}'
    resource: "{{ item.volume.id }}"
    tags:
      environment: test
      backup: true
  with_items: "{{ ec2_volumes.results }}"
Every Input also on the rest of the playbook would be very appreciated
Consider using delegate_to: localhost instead of local_action. Consider this task:
- name: an example
  command: do something
Using delegate_to, I only need to add a single line:
- name: an example
  delegate_to: localhost
  command: do something
Whereas using local_action I need to rewrite the task:
- name: an example
  local_action:
    module: command do something
And of course, delegate_to is more flexible: you can use it to delegate to hosts other than localhost.
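For example, a minimal sketch of delegating the same task to another host (bastion01 is a made-up inventory host name):
# Sketch: delegate_to also accepts hosts other than localhost.
- name: an example
  delegate_to: bastion01
  command: do something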
Update
Without seeing your actual playbook it's hard to identify the source of the error. Here's a complete playbook that runs successfully (using synthesized data and wrapping your ec2_tag task in a debug task):
---
- hosts: localhost
  gather_facts: false
  vars:
    aws_region: example
    ec2_volumes:
      results:
        - volume:
            id: 1
        - volume:
            id: 2
  tasks:
    - name: Do some actual tagging
      debug:
        msg:
          ec2_tag:
            region: '{{ aws_region }}'
            resource: '{{ item.volume.id }}'
            tags:
              environment: test
              backup: true
      with_items: "{{ ec2_volumes.results }}"