I would like to change the DNS IP address from 192.168.86.14 to 192.168.86.16 in an Ubuntu netplan YAML file:
link: ens3
addresses: [192.168.86.12/24]
gateway4: 192.168.86.1
nameservers:
  addresses: [192.168.86.14,8.8.8.8,8.8.4.4]
Here is my ansible playbook:
- name: test
  ansible.builtin.replace:
    path: /etc/netplan/00-installer-config.yaml
    regexp: '(addresses: \[)+192.168.86.14,'
    replace: '\1192.168.86.16,'
My playbook doesn't change anything in the file. I tried escaping the comma, but that doesn't match anything either.
For some reason I need to make sure the IP address is between "addresses: [" and ",", so I can't just use syntax like this:
- name: test
  ansible.builtin.replace:
    path: /etc/netplan/00-installer-config.yaml
    regexp: '192.168.86.14'
    replace: '192.168.86.16'
I am very new to Ansible, any help is appreciated!
Dictionaries are immutable in YAML, but you can update them in Jinja2. Let's take a complete example of a netplan configuration file, e.g.
shell> cat 00-installer-config.yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    ens3:
      mtu: 9000
    enp3s0:
      link: ens3
      addresses: [192.168.86.12/24]
      gateway4: 192.168.86.1
      nameservers:
        addresses: [192.168.86.14,8.8.8.8,8.8.4.4]
Read the dictionary into a variable
- include_vars:
    file: 00-installer-config.yaml
    name: netplan_conf
gives
netplan_conf:
  network:
    ethernets:
      enp3s0:
        addresses:
        - 192.168.86.12/24
        gateway4: 192.168.86.1
        link: ens3
        nameservers:
          addresses:
          - 192.168.86.14
          - 8.8.8.8
          - 8.8.4.4
      ens3:
        mtu: 9000
    renderer: networkd
    version: 2
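You can inspect what include_vars loaded with a quick debug task, e.g. (a minimal sketch):

- debug:
    var: netplan_conf.network.ethernets.enp3s0.nameservers.addresses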
Create a template that updates the nameservers
shell> cat 00-installer-config.yaml.j2
{% set _dummy = netplan_conf.network.ethernets.enp3s0.nameservers.update({'addresses': _addresses}) %}
{{ netplan_conf|to_nice_yaml }}
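The assignment to _dummy only exists to swallow the return value of update(). As an alternative (assuming the jinja2.ext.do extension is enabled via jinja2_extensions in ansible.cfg, which is not shown here), the template could use the do tag instead:

{% do netplan_conf.network.ethernets.enp3s0.nameservers.update({'addresses': _addresses}) %}
{{ netplan_conf|to_nice_yaml }}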
The task below
- template:
    src: 00-installer-config.yaml.j2
    dest: 00-installer-config.yaml
  vars:
    _addresses: "{{ netplan_conf.network.ethernets.enp3s0.nameservers.addresses|
                    regex_replace('192.168.86.14', '192.168.86.16') }}"
will update the configuration file
shell> cat 00-installer-config.yaml
network:
  ethernets:
    enp3s0:
      addresses:
      - 192.168.86.12/24
      gateway4: 192.168.86.1
      link: ens3
      nameservers:
        addresses:
        - 192.168.86.16
        - 8.8.8.8
        - 8.8.4.4
    ens3:
      mtu: 9000
  renderer: networkd
  version: 2
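For the change to take effect on the host you would typically also apply the configuration afterwards, e.g. (a sketch; it assumes the play runs with sufficient privileges):

- name: apply netplan configuration
  ansible.builtin.command: netplan apply
  become: true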
Related
So I am trying to take values from a file, let's call it "test.yaml".
The file looks like this (sorry for the long output, but it is the shortest excerpt containing all patterns and structure):
---
results:
- failed: false
  item: XXX.XX.XX.XX
  invocation:
    module_args:
      validate_certs: false
      vm_type: vm
      show_tag: false
      username: DOMAIN\domain-user
      proxy_host:
      proxy_port:
      show_attribute: false
      password: VALUE_SPECIFIED_IN_NO_LOG_PARAMETER
      port: XXX
      folder:
      hostname: XXX.XX.XX.XX
  changed: false
  virtual_machines:
  - ip_address: XXX.XX.XX.XX
    mac_address:
    - XX:XX:XX:aa:XX:XX
    uuid: XXXX-XX-XX-XXXX-XXXXX
    guest_fullname: Red Hat Enterprise Linux X (XX-bit)
    moid: vm-XXX
    folder: "/DOMAIN-INTERXION/vm"
    cluster:
    attributes: {}
    power_state: poweredOn
    esxi_hostname: esx.hostname
    tags: []
    guest_name: VMnameXX
    vm_network:
      XX:XX:XX:aa:XX:XX:
        ipv6:
        - XX::XXX:XX:XXXX
        ipv4:
        - XXX.XX.XX.XX
I would like, for example, to have something like:
results.invocation.virtual_machines.ip_address
results.invocation.module_args.user_name
I tried all kinds of things but it doesn't work :)
My last attempt is this:
---
- name: demo how register works
  hosts: localhost
  tasks:
    - name: Include all .json and .jsn files in vars/all and all nested directories (2.3)
      include_vars:
        file: test.yml
        name: vm
    - name: debug
      debug:
        msg: "{{ item.0.item }}"
      with_subelements:
        - "{{ vm.results }}"
        - virtual_machines
      register: subelement
following your structure and after fixing some errors:
results.invocation.virtual_machines.ip_address is results[0].virtual_machines[0].ip_address
and
results.invocation.module_args.user_name is results[0].invocation.module_args.username
(results and virtual_machines are arrays; writing results[0] or results.0 is the same)
So here is a sample playbook doing the job:
- name: vartest
  hosts: localhost
  tasks:
    - name: Include all .json and .jsn files in vars/all and all nested directories (2.3)
      include_vars:
        file: test.yml
        name: vm
    - name: ip
      set_fact:
        ipadress: "{{ vm.results[0].virtual_machines[0].ip_address }}"
    - name: username
      set_fact:
        username: "{{ vm.results[0].invocation.module_args.username }}"
    - name: display
      debug:
        msg: "ip: {{ ipadress }} and username: {{ username }}"
result:
ok: [localhost] =>
msg: 'ip: XXX.XX.XX.XX and username: DOMAIN\domain-user'
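If you need the values for every VM rather than only the first one, you can keep the with_subelements approach from your attempt; a minimal sketch over the same structure:

- name: show ip and username per vm
  debug:
    msg: "ip: {{ item.1.ip_address }} and username: {{ item.0.invocation.module_args.username }}"
  with_subelements:
    - "{{ vm.results }}"
    - virtual_machines

Here item.0 is the current element of results and item.1 the current element of its virtual_machines list.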
I want to use a template to create config files in /etc/network/interfaces.d/.
But I get an error when I use with_items in a template task...
I'm not sure about the ansible_host.networks in with_items.
Thanks!
Inventory.yml:
proxmoxve: # group
  hosts:
    virtu:
      networks:
        internet:
          interface: enp9s0
          mode: manual
          type: interface
        openvswitch:
          interface: vmbr1
          mode: static
          type: ovs_bridge
    sauv:
      networks:
        internet:
          interface: enp38s0
          mode: manual
          type: interface
        openvswitch:
          interface: vmbr1
          mode: static
          type: ovs_bridge
Playbook.yml:
---
- hosts: proxmoxve
  tasks:
    - name: "Install openvswitch with fresh cache"
      apt:
        name: openvswitch-switch
        state: present
        update_cache: yes
    - name: "Set internet interfaces"
      template:
        src: templates/interfaces.j2
        dest: "/etc/network.interfaces.d/{{ item.interface }}"
      whith_items: "{{ ansible_host.networks }}"
Error:
ERROR! conflicting action statements: template, whith_items
The error appears to be in '/home/yanux/dev/ansible-proxmoxve/proxmoxve_config_networks.yml': line 10, column 5, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: "Set internet interface to manual"
^ here
There seems to be a typo: whith_items should be with_items.
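For completeness, a hedged sketch of the corrected task: networks is a dictionary per host in the inventory, so looping with the dict2items filter and reading item.value is an assumption about how you intend to iterate, and the dest path follows the /etc/network/interfaces.d/ directory mentioned in the question.

- name: "Set internet interfaces"
  template:
    src: templates/interfaces.j2
    dest: "/etc/network/interfaces.d/{{ item.value.interface }}"
  loop: "{{ networks | dict2items }}"

With this, item.key is the network name (internet, openvswitch) and item.value holds the settings defined under it.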
I can run EC2 instances using the following code. The Public DNSs (IPs) will be in demogroup.
- hosts: localhost
  gather_facts: no
  vars_files:
    - variables/vars.yml
    - variables/encrypt-iam-account.yml
  tasks:
    - name: provision CentOS VM (EC2)
      .
      .
      .
    - name: add hosts to inventory **********
      add_host:
        hostname: '{{ item.public_ip }}'
        groupname: demogroup
        ansible_ssh_common_args: "-o StrictHostKeyChecking=no"
        ansible_ssh_private_key_file: keypair-for-ansible.pem
      loop: '{{ ec2.instances }}'
- hosts: demogroup
  gather_facts: no
  remote_user: centos
  tasks:
    - name: wait for SSH
      .
      .
      .
    - name: generate the key, x509 *******
      expect:
        command: openssl req -new -x509 -days 365 -key ca-key.pem -sha256 -out ca.pem
        responses:
          'Enter pass phrase for ca-key.pem': "12345"
          'Country Name': "th"
          'State or Province Name': "Bangkok"
          'Locality Name': "Sukhumwit"
          'Organization Name': "Kixxxx"
          'Organizational Unit Name': "DevTeam"
          'Common Name': "{{ hostvars[groups['demogroup']['xxxxx']] }}"
          'Email Address': "sample@kixxxx.com"
      no_log: false
      register: mycmd
    - debug:
        var: mycmd
In the same YAML file, I want to get each Public DNS in order to assign it to 'Common Name'
at the line 'Common Name': "{{ hostvars[groups['demogroup']['xxxxx']] }}". How do I get each Public DNS (IP) from demogroup?
Thank you in advance.
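Since add_host registers each instance with its public IP as the inventory hostname, one possible approach (a sketch, not a verified fix) is to reference the current host's own name inside the demogroup play, or to index into the group:

'Common Name': "{{ inventory_hostname }}"

or, for a specific member of the group:

'Common Name': "{{ groups['demogroup'][0] }}"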
I want to create an application.properties file and copy it over to a specific location. I'm using the template module for this, but I want to build the content of the file from the count/values of a comma-separated list of input IP addresses.
The content of the file should be as below:
conf_pg_hba_replication.connection=host replication oracle {{IP1}}/32 trust\nhost replication oracle {{IP2}}/32 trust\nhost replication oracle {{IP3}}/32.............
So I want my file to be created with dynamic content based on the comma-separated input IP address values.
If my input IP value is 127.0.0.1,123.123.123.123
the file content should be
conf_pg_hba_replication.connection=host replication oracle 127.0.0.1/32 trust\nhost replication oracle 123.123.123.123/32 trust
Likewise I need to create the contents of the file.
Please help me with this.
---
- name: pp
  hosts: localhost
  tasks:
    - name: pp
      template:
        src: pp.j2
        dest: pp.properties
        newline_sequence: \n
-bash-4.2$ cat pp.j2
conf_pg_hba_replication.connection=host replication oracle {{slave_ip}}/32 trust
I pass the list of IPs to the ansible playbook through a variable like below:
ansible-playbook postgres.yml -e "ips_list=ip1,ip2,ip3"
You are passing a comma-separated IP list as an extra var on the ansible command line.
To achieve your requirement you need to:
transform the string containing the comma-separated list into a real list with the split function;
loop over your list in your template to output the result.
This can actually be done in a single task. Given the following templates/pp.j2...
{% for slave_ip in ip_list %}
conf_pg_hba_replication.connection=host replication oracle {{ slave_ip }}/32 trust
{% endfor %}
and the following playbook...
---
- name: template our pp file
  hosts: localhost
  gather_facts: false
  tasks:
    - name: pp
      template:
        src: pp.j2
        dest: pp.properties
      vars:
        ip_list: "{{ ip_list_raw.split(',') }}"
that you call like this...
ansible-playbook test.yml -e "ip_list_raw=127.0.0.1,1.2.3.4,5.6.7.8"
you get the following result
$ cat pp.properties
conf_pg_hba_replication.connection=host replication oracle 127.0.0.1/32 trust
conf_pg_hba_replication.connection=host replication oracle 1.2.3.4/32 trust
conf_pg_hba_replication.connection=host replication oracle 5.6.7.8/32 trust
Note that you can also drop the vars declaration with the split in the template task by passing the list of IPs directly as a JSON array on the command line:
ansible-playbook test.yml -e '{"ip_list":["127.0.0.1","1.2.3.4","5.6.7.8"]}'
Or even load them from an external YAML file, e.g. my_ip_list.yml:
---
ip_list:
- 127.0.0.1
- 1.2.3.4
- 5.6.7.8
like this:
ansible-playbook test.yml -e @my_ip_list.yml
---
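# Tasks (apparently check_standby.yml, included further below): determine whether the local
# postgres node is a standby and set is_standby to Y, N or D (down) accordingly.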
- name: check for postgresql standby
command: "{{ service }} postgres-check-standby"
ignore_errors: yes
register: check_standby
- set_fact:
is_standby: "N"
when:
- '"Error: Cannot get status of postgres server" in check_standby.stdout or "Error: Postgres-Server reports being" in check_standby.stdout'
- '" pg_last_xlog_receive_location \n-------------------------------\n \n(1 row)" in check_standby.stdout'
- set_fact:
is_standby: "Y"
when:
- '"postgres-server is slave/standby" in check_standby.stdout or "Error: Cannot get status of postgres server" in check_standby.stdout'
- '" pg_last_xlog_receive_location \n-------------------------------\n \n(1 row)" not in check_standby.stdout'
- set_fact:
is_standby: "D"
when:
- '"ERROR: postgres server is down or does not seem be standby" in check_standby.stdout'
- '"Is the server running locally and accepting" in check_standby.stderr'
- name: print if standby
command: echo "postgres-server is slave/standby"
when: is_standby == "Y"
- name: print if not standby
command: echo "postgres-server is not slave/standby"
when: is_standby == "N"
- name: print if component is down
command: echo "postgresql component is down"
when: is_standby == "D"
---
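# Tasks (apparently check_master.yml, included further below): determine whether the local
# postgres node is the master and set is_master to Y, N or D (down) accordingly.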
- name: check for postgresql master
command: "{{ service }} postgres-check-master"
ignore_errors: yes
register: check_master
- set_fact:
is_master: "Y"
when:
- '" pg_current_xlog_location \n--------------------------\n \n(1 row)" not in check_master.stdout'
- '"postgres-server is master" in check_master.stdout or "Cannot get status of postgres server" in check_master.stdout'
- set_fact:
is_master: "N"
when:
- '"postgres server is down or does not seem be" in check_master.stdout'
- '"ERROR: recovery is in progress" in check_master.stderr'
- set_fact:
is_master: "D"
when:
- '"postgres server is down or does not seem be" in check_master.stdout'
- '"Is the server running locally and accepting" in check_master.stderr'
- name: print if master
command: echo "postgres-server is master"
when: is_master == "Y"
- name: print if not master
command: echo "postgres-server is not master"
when: is_master == "N"
- name: print if component is down
command: echo "postgresql component is down"
when: is_master == "D"
---
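# Tasks that promote the local standby to master: verify the node is a healthy standby,
# resolve the old master's IP, run promote-standby-to-master, then refresh the properties
# file and restart the component.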
- name: check postgresql status
command: "{{ service }} status"
ignore_errors: yes
register: postgresql_status
- name: start if its down
command: "{{ service }} start"
when:
- 'postgresql_status is failed or postgresql_status.rc != 0 or "apigee-service: apigee-postgresql: OK" not in postgresql_status.stderr'
- name: validate it is a standby node
include: check_standby.yml
- fail:
msg: "postgres-server is not standby or down - please check"
when: is_standby != "Y"
- name: get ip address of old_master
shell: host {{OLD_MASTER_FQDN}} | awk -F"has address " '{print$2}'
ignore_errors: yes
register: old_master_ip
- name: get no of octet in ip address
shell: echo {{old_master_ip.stdout}} | sed s'/\./ /g' | wc -w
ignore_errors: yes
register: ip_octet_count
- name: echo old_master_ip
command: echo Old Master Node Ip address:- {{old_master_ip.stdout}}
when: old_master_ip.stdout != "" and ip_octet_count.stdout == "4"
- name: fail if unable to get ip address
fail:
msg: failed to get the ip address, please check and enter FQDN of old master postgres node
when: old_master_ip.stdout == "" or ip_octet_count.stdout != "4"
- name: Switching standby to master
shell: "{{ service}} promote-standby-to-master {{old_master_ip.stdout}}"
- name: validate master post swap
include: check_master.yml
- fail:
msg: postgresql is not master. please check microservice logs for error details.
when: is_master != "Y"
- name: backup existing file
command: "mv {{properties_file}} {{properties_file}}_backup{{ansible_date_time.date}}"
when: postgres_properties_file.stat.exists
- name: download properties file from Bitbucket
shell: "curl -k -v -u {{ username }}:{{ passwd }} {{POSTGRESQL_PROPERTIES_BB_URL}}?raw -o {{properties_file}}"
args:
warn: no
- name: restart postgresql component
command: "{{ service }} restart"
- name: print Outcome
command: echo "Successfully converted postgresql slave to master"
---
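# Tasks that rebuild this node as a standby of the new master: stop the component, resolve
# the new master's and this node's IPs, write /tmp/setup_replication.txt, wipe the data
# directory and start setup-replication-on-standby in the background.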
- name: check whether node is master node
include: check_master.yml
- name: bring down node if master is up and running
command: "{{ service }} stop"
when: is_master != "D"
- name: get ip address of new_master
shell: host {{NEW_MASTER_FQDN}} | awk -F"has address " '{print$2}'
ignore_errors: yes
register: new_master_ip
- name: get slave ip
shell: hostname -i
ignore_errors: yes
register: slave_ip
- name: get no of octet in slave ip address
shell: echo {{slave_ip.stdout}} | sed s'/\./ /g' | wc -w
ignore_errors: yes
register: slave_ip_octet_count
- name: get no of octet in new master ip address
shell: echo {{new_master_ip.stdout}} | sed s'/\./ /g' | wc -w
ignore_errors: yes
register: ip_octet_count_m
- name: echo new_master_ip and slave ip
command: "{{ item }}"
with_items:
- echo New Master Node Ip address:- {{new_master_ip.stdout}}
- echo Slave Ip address:- {{slave_ip.stdout}}
when: new_master_ip.stdout != "" and ip_octet_count_m.stdout == "4" and slave_ip.stdout != "" and slave_ip_octet_count.stdout == "4"
- name: fail if unable to get ip address
fail:
msg: failed to get the ip addresses, please check and enter FQDN of postgresql node
when: new_master_ip.stdout == "" or ip_octet_count_m.stdout != "4" or slave_ip.stdout == "" or slave_ip_octet_count.stdout != "4"
- name: create temp replication config file
copy:
dest: /tmp/setup_replication.txt
content: |
PG_MASTER={{new_master_ip.stdout}}
PG_STANDBY={{slave_ip.stdout}}
mode: 0750
- name: comment out the conf_pg_hba_replication.connection property in postgresql properties file
replace:
path: "{{properties_file}}"
regexp: 'conf_pg_hba_replication.connection='
replace: '#conf_pg_hba_replication.connection='
backup: yes
when: postgres_properties_file.stat.exists
- name: remove data folder
file:
path: /opt/apigee/data/apigee-postgresql
state: absent
- name: perform postgresql standby replication
shell: nohup {{service}} setup-replication-on-standby -f /tmp/setup_replication.txt >/tmp/standby_replication.log &
register: nohup_rep
- name: get PID of replication progress
shell: ps -eaf|grep -i setup-replication-on-standby|grep -v grep
ignore_errors: true
register: PID
- name: get PID count
shell: ps -eaf|grep -i setup-replication-on-standby|grep -v grep|wc -l
ignore_errors: true
register: PID_COUNT
- fail:
msg: There is an issue with replication - Please check the logs.
when: PID_COUNT.stdout == "0"
- name: print PID of replication process
command: "{{ item }}"
with_items:
- "echo {{nohup_rep.cmd}}"
- "echo {{PID.stdout}}"
I have my ansible task running on all my api_servers, and I would like to restrict it to run only on one IP (one of the api_servers).
I have added run_once: true but it didn't help.
Kindly advise.
EDIT:
Will the below work? I have 10 instances of app_servers running, and I want the task to run only on one app_server:
run_once: true
when:
  - inventory_hostname == groups['app_servers'][0]
Where my inventory file is like
[app_servers]
prod_app_[1:4]
I would write my playbook like this:
---
# Run on all api_servers
- hosts: api_servers
  tasks:
    - name: do something on all api_servers
      #...

# Run only on one api_server, e.g. api_server_01
- hosts: api_server_01
  tasks:
    - name: Gather data from api_server_01
      #...
The other option would be to work with when:, or to run the playbook with the --limit option:
---
- hosts: all
  tasks:
    - name: do something only when on api_server_01
      #...
      when: inventory_hostname == "api_server_01"
EDIT:
Here you can see all the options in one example:
---
- hosts: all
  tasks:
    - debug: msg="run_once"
      run_once: true
    - debug: msg=all
    - debug: msg="run on the first of the group"
      when: inventory_hostname == groups['app_servers'][0]

# Option with separated hosts, this one will be faster if gather_facts is not tuned.
- hosts: app_servers[0]
  tasks:
    - debug: msg="run on the first of the group"
(Since I cannot comment, I have to answer.)
What about delegate_to, where you delegate the task to a specific host?
- hosts: all
  tasks:
    - name: Install vim on specific host
      package:
        name: vim
        state: present
      delegate_to: staging_machine
Or
As @user2599522 mentioned, --limit is also an option to use:
You can also limit the hosts you target on a particular run with the --limit flag. (Patterns and ansible-playbook flags)
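For example (a sketch; the playbook name is a placeholder):

ansible-playbook site.yml --limit api_server_01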