Is it possible to loop over two different lists in the same playbook (Ansible)?

I'm writing an Ansible playbook and I want to loop over two different lists.
I know that I can use with_items to loop over a list, but can I use with_items with two lists in the same task?
Here is what I want to do:
- name: Deploy the network in fabric 1 and fabric 2
  tags: [merged]
  role_network:
    config:
      - net_name: "{{ networkName }}"
        vrf_name: "{{ vrf }}"
        net_id: 30010
        net_template: "{{ networkTemplate }}"
        net_extension_template: "{{ networkExtensionTemplate }}"
        vlan_id: "{{ vlan }}"
        gw_ip_subnet: "{{ gw }}"
        attach: "{{ item }}"
        deploy: false
    fabric: "{{ item }}"
    state: merged
  with_items:
    - "{{ attachs }}"
    - "{{ fabric }}"
  register: networks
So for the first call, I want to use the playbook with fabric[0] and attachs[0].
For the second call, I want to use the playbook with fabric[1] and attachs[1].
And so on...
Can someone help me please?

What you are looking to achieve is what with_together used to do, and it is now recommended to achieve it with the zip filter.
So: loop: "{{ attachs | zip(fabric) | list }}".
Where the element of the first list (attachs) would be item.0 and the element of the second list (fabric) would be item.1.
- name: Deploy the network in fabric 1 and fabric 2
  tags: [merged]
  role_network:
    config:
      - net_name: "{{ networkName }}"
        vrf_name: "{{ vrf }}"
        net_id: 30010
        net_template: "{{ networkTemplate }}"
        net_extension_template: "{{ networkExtensionTemplate }}"
        vlan_id: "{{ vlan }}"
        gw_ip_subnet: "{{ gw }}"
        attach: "{{ item.0 }}"
        deploy: false
    fabric: "{{ item.1 }}"
    state: merged
  loop: "{{ attachs | zip(fabric) | list }}"
  register: networks

Related

Ansible "ec2_asg" module keeps changing the desired/min/max capacity settings

For cost reasons, our ASGs in the QA environment run with desired/min/max capacity set to "1". That's not the case for Production, but since we use the same code for QA and Prod deployment (minus a few variables of course) this is causing problems with the QA automation jobs.
- name: create autoscale groups original_lc
  ec2_asg:
    name: "{{ app_name }}"
    target_group_arns: "{{ alb_target_group_facts.target_groups[0].target_group_arn }}"
    launch_config_name: "{{ launch_config.name }}"
    min_size: 1
    max_size: 1
    desired_capacity: 1
    region: "{{ region }}"
    vpc_zone_identifier: "{{ subnets | join(',') }}"
    health_check_type: "{{ health_check }}"
    replace_all_instances: yes
    wait_for_instances: false
    replace_batch_size: '{{ rollover_size }}'
    lc_check: false
    default_cooldown: "{{ default_cooldown }}"
    health_check_period: "{{ health_check_period }}"
    notification_topic: "{{ redeem_notification_group }}"
    tags:
      - Name: "{{ app_name }}"
      - Application: "{{ tag_Application }}"
      - Product_Owner: "{{ tag_Product_Owner }}"
      - Resource_Owner: "{{ tag_Resource_Owner }}"
      - Role: "{{ tag_Role }}"
      - Service_Category: "{{ tag_Service_Category }}"
  register: asg_original_lc
On the first run, the "ec2_asg" module creates the group properly, with the correct desired/min/max settings.
But when we run the job a second time to update the same ASG, it changes desired/min/max to "2" in AWS. We don't want that. We just want it to rotate out that one instance in the group. Is there a way to achieve that?
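One way to keep a single task for both environments is to drive the capacities from per-environment variables instead of hard-coding 1. A minimal sketch, trimmed to the relevant options; asg_min_size, asg_max_size and asg_desired_capacity are hypothetical variable names you would set in your QA and Prod variable files:
# Hypothetical per-environment variables, e.g. in group_vars/qa.yml and group_vars/prod.yml
- name: create autoscale groups original_lc
  ec2_asg:
    name: "{{ app_name }}"
    min_size: "{{ asg_min_size | default(1) }}"
    max_size: "{{ asg_max_size | default(1) }}"
    desired_capacity: "{{ asg_desired_capacity | default(1) }}"
    region: "{{ region }}"
  register: asg_original_lc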

ansible list files in a directory

Can someone explain to me why this doesn't work? I want to get a list of files within a directory and use it as an input for the loop.
---
tasks:
  - set_fact:
      capabilities: []
  - name: find CE_Base capabilities
    find:
      paths: /opt/netsec/ansible/orchestration/capabilities/CE_BASE
      patterns: '*.yml'
    register: CE_BASE_capabilities
  - name: debug_files
    debug:
      msg: "{{ item.path }}"
    with_items: "{{ CE_BASE_capabilities.files }}"
  - set_fact:
      thispath: "{{ item.path }}"
      capabilities: "{{ capabilities + [ thispath ] }}"
    with_items: "{{ CE_BASE_capabilities.files }}"
  - name: Include CE_BASE
    include_tasks: /opt/netsec/ansible/orchestration/process_capabilities_CE_BASE.yml
    loop: "{{ capabilities }}"
Edit:
This code is attempting to create a list called capabilities, which contains the list of files in a particular directory.
When I ran this code without trying to get the files automatically, it looked like this:
- hosts: localhost
  vars:
    CE_BASE_capabilities:
      - '/opt/netsec/ansible/orchestration/capabilities/CE_BASE/CE_BASE_1.yml'
      - '/opt/netsec/ansible/orchestration/capabilities/CE_BASE/CE_BASE_2.yml'
  tasks:
    - name: Include CE_BASE
      include_tasks: /opt/netsec/ansible/orchestration/process_capabilities_CE_BASE.yml
      loop: "{{ CE_BASE_capabilities }}"
Don't define thispath as a fact but as a local var on the set_fact task. Besides that, you don't need to initialise capabilities if you use the default filter.
- vars:
    thispath: "{{ item.path }}"
  set_fact:
    capabilities: "{{ capabilities | default([]) + [ thispath ] }}"
  with_items: "{{ CE_BASE_capabilities.files }}"
Moreover, you don't even need to loop. You can extract the info directly from the existing result:
- set_fact:
    capabilities: "{{ CE_BASE_capabilities.files | map(attribute='path') | list }}"
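Putting that together with the include from the question, the whole thing can work directly from the find result (same paths and file name as in the question), without building the capabilities fact at all:
- name: find CE_Base capabilities
  find:
    paths: /opt/netsec/ansible/orchestration/capabilities/CE_BASE
    patterns: '*.yml'
  register: CE_BASE_capabilities

- name: Include CE_BASE
  include_tasks: /opt/netsec/ansible/orchestration/process_capabilities_CE_BASE.yml
  loop: "{{ CE_BASE_capabilities.files | map(attribute='path') | list }}"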

Ansible in AWS, list processing question using ec2_instance_info for several nodes

I am running several Ansible playbooks in AWS and I am having difficulty with a test YAML file. The purpose of the YAML file is to query AWS for a list of servers using a filter and set_fact the instance name, the instance ID, the instance size and the private IP.
The code I have returns data only for the first node in the list and repeats the debug line every 12 lines. All the other lines show no data returned. I am using ec2_instance_info to get the various data about the instances.
Here is the Ansible yaml file
---
# This script gathers the Instance IDs et al.
- name: Get EC2 Info
  ec2_instance_info:
    region: '{{ aws_region }}'
    aws_access_key: "{{ lookup('ini', 'aws_access_key_id section=saml file=~/.aws/credentials') }}"
    aws_secret_key: "{{ lookup('ini', 'aws_secret_access_key section=saml file=~/.aws/credentials') }}"
    security_token: "{{ lookup('ini', 'aws_session_token section=saml file=~/.aws/credentials') }}"
    filters:
      "tag:Name": "test-envMan*"
  register: Instance_ID

- name: Get Instance ID
  debug:
    msg: "{{ item.0 }} | {{ item.1 }} | {{ item.2 }} | {{ item.3 }}"
  with_together:
    - "{{ Instance_ID.instances | map(attribute='tags.Name') | list }}"
    - "{{ Instance_ID.instances[0].instance_id }}"
    - "{{ Instance_ID.instances[1].instance_type }}"
    - "{{ Instance_ID.instances[2].private_ip_address }}"

- name: Gather and Save info
  set_fact:
    Tag_Name: "{{ Instance_ID.instances | map(attribute='tags.Name') | list }}"
    Instance_ID: "{{ Instance_ID.instances[0].instance_id }}"
    Instance_Size: "{{ Instance_ID.instances[1].instance_type }}"
    Instance_PrivIP: "{{ Instance_ID.instances[2].private_ip_address }}"
The output shows 12 lines of Ansible "ok" output for each server, the first of which includes the debug output of the expected fields for the first node.
So 1 line of "ok" log output, then the debug line, then 11 lines of "ok" log output for the same node. Then 1 line of "ok" output for the second node, then the debug line for the first node again, etc.
I need to discover what I am doing incorrectly and how to make it behave.
Any comments, suggestions or pointers are appreciated.
Thanks.
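For comparison, a minimal sketch that loops over the registered instances directly, reading each attribute from item instead of zipping separately indexed lists (attribute names taken from the tasks above):
- name: Show one line per instance
  debug:
    msg: "{{ item.tags.Name }} | {{ item.instance_id }} | {{ item.instance_type }} | {{ item.private_ip_address }}"
  loop: "{{ Instance_ID.instances }}"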

Ansible - force a variable/fact to be undefined

I'm trying to run a playbook multiple times in a loop which creates AWS route53 records.
My task to create the route53 record looks like this:
- name: Create Public DNS record
  route53:
    profile: "{{ route53_profile_id }}"
    command: "{{ dns_command }}"
    zone: "{{ dns_zone }}"
    record: "{{ dns_record_name }}.{{ dns_zone }}"
    type: "{{ dns_type }}"
    value: "{{ dns_value }}"
    overwrite: "{{ dns_overwrite }}"
    ttl: "{{ dns_ttl }}"
    health_check: "{{ healthcheck.health_check.id | default(omit) }}"
    failover: "{{ dns_setting.failover | default(omit) }}"
    weight: "{{ dns_setting.weight | default(omit) }}"
    region: "{{ region | default(omit) }}"
    identifier: "{{ identifier | default(omit) }}"
My problem is that the health check isn't defined every time.
Creation of the health check looks like this:
- name: Create healthcheck with IP address for EC2 instance
  route53_health_check:
    state: "{{ healthcheck.state | default( healthcheck_defaults.state ) }}"
    profile: "{{ route53_profile_id }}"
    region: "{{ vpc.region }}"
    ip_address: "{{ dns_value }}"
    type: "{{ healthcheck.type | default( healthcheck_defaults.type ) }}"
    resource_path: "{{ healthcheck.resource_path | default( omit ) }}"
    port: "{{ healthcheck.port | default( omit ) }}"
    security_token: "{{ healthcheck.security_token | default( omit ) }}"
    validate_certs: "{{ healthcheck.validate_certs | default( omit ) }}"
    string_match: "{{ healthcheck.string_match | default( omit ) }}"
    request_interval: "{{ healthcheck.request_interval | default( healthcheck_defaults.request_interval ) }}"
    failure_threshold: "{{ healthcheck.failure_threshold | default( healthcheck_defaults.failure_threshold ) }}"
  register: healthcheck
  when:
    - dns_type == "A"
    - dns_setting.healthcheck is defined
If the loop runs 5 times, it may only be defined in one iteration. If the health check runs then the 'healthcheck' variable contains the details of the health check, e.g. the ID. If it does not run on a given loop, the 'healthcheck' variable contains the following:
"healthcheck": {
"changed": false,
"skip_reason": "Conditional check failed",
"skipped": true
}
In my route53 task, the health check is omitted if the 'healthcheck' variable is undefined. However, if it is defined, Ansible attempts to dereference the id attribute of the health_check key of healthcheck, which doesn't exist.
If I try to set the health check to a default value when not in use, e.g. {}, then it is still defined, and my route53 creation fails.
Is there a way to force a variable or fact to be undefined? Something like:
- name: Undefine health check
  set_fact:
    healthcheck: undef
Try something like this:
- name: Create Public DNS record
  route53:
    ... cut ...
    health_check: "{{ healthcheck | skipped | ternary( omit, (healthcheck.health_check | default({})).id ) }}"
    ... cut ...
This will pass omit if healthcheck was skipped and healthcheck.health_check.id otherwise.
From my experience, default does not work properly with nested dicts of 2+ levels (i.e. it works with mydict.myitem | default('ok') but fails with mydict.mysubdict.myitem | default('ok')), so I used the hack of defaulting the subdict to {} first so that id can be accessed safely: (healthcheck.health_check | default({})).id
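On newer Ansible releases, where the skipped filter has been removed in favour of the equivalent test, the same idea should look roughly like this (a sketch, not verified against every Ansible version):
- name: Create Public DNS record
  route53:
    ... cut ...
    health_check: "{{ (healthcheck is skipped) | ternary( omit, (healthcheck.health_check | default({})).id ) }}"
    ... cut ...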

how to add a disk to vcenter guest using Ansible

I'm attempting to add a second disk to a vmware vcenter instance.
Here is what I have:
- name: "Modifying ..."
local_action:
module: vsphere_guest
vcenter_hostname: "{{ vcenter.hostname }}"
username: "{{ vcenter_user[datacenter]['username'] }}"
password: "{{ vcenter_user[datacenter]['password'] }}"
guest: "{{ inventory_hostname }}"
# Looky looky heeya ...#
state: reconfigured
########################
vm_extra_config:
vcpu.hotadd: yes
mem.hotadd: yes
notes: "{{ datacenter }} {{ purpose |replace('_',' ') }}"
vm_disk:
disk1:
size_gb: 50
type: thin
datastore: "{{ vcenter.datastore }}"
disk2:
size_gb: 200
type: thin
datastore: "{{ vcenter.datastore }}"
vm_hardware:
memory_mb: "{{ vm.memory|int }}"
num_cpus: "{{ vm.cpus|int }}"
osid: "{{ os.id }}"
esxi:
datacenter: "{{ esxi.datacenter }}"
hostname: "{{ esxi.hostname }}"
So vCenter sees the reconfigure and there are no errors displayed.
There are also no errors on the console when I run the playbook.
It simply does not add the second disk.
So is there a way to add the disk or will I have to write a python script to do it?
Thanks.
The reconfigure_vm function in the vsphere_guest module only includes code for changing the RAM and the CPU. I don't see any code for changing the other hardware, so adding a disk is only possible while creating a new VM at the moment.
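For what it is worth, newer Ansible releases ship the community.vmware.vmware_guest_disk module, which can attach a disk to an existing VM. A minimal sketch reusing the variables from the question; the option names are as I recall them from the collection docs (verify against your installed version), and the SCSI controller/unit numbers are illustrative:
- name: Add a second disk to an existing VM
  community.vmware.vmware_guest_disk:
    hostname: "{{ vcenter.hostname }}"
    username: "{{ vcenter_user[datacenter]['username'] }}"
    password: "{{ vcenter_user[datacenter]['password'] }}"
    datacenter: "{{ esxi.datacenter }}"
    name: "{{ inventory_hostname }}"
    disk:
      - size_gb: 200
        type: thin
        datastore: "{{ vcenter.datastore }}"
        state: present
        scsi_controller: 0       # illustrative placement
        unit_number: 1           # illustrative placement
  delegate_to: localhost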