Ansible - force a variable/fact to be undefined

I'm trying to run a playbook multiple times in a loop which creates AWS route53 records.
My task to create the route53 record looks like this:
- name: Create Public DNS record
  route53:
    profile: "{{ route53_profile_id }}"
    command: "{{ dns_command }}"
    zone: "{{ dns_zone }}"
    record: "{{ dns_record_name }}.{{ dns_zone }}"
    type: "{{ dns_type }}"
    value: "{{ dns_value }}"
    overwrite: "{{ dns_overwrite }}"
    ttl: "{{ dns_ttl }}"
    health_check: "{{ healthcheck.health_check.id | default(omit) }}"
    failover: "{{ dns_setting.failover | default(omit) }}"
    weight: "{{ dns_setting.weight | default(omit) }}"
    region: "{{ region | default(omit) }}"
    identifier: "{{ identifier | default(omit) }}"
My problem is that the health check isn't defined on every iteration.
Creation of the health check looks like this:
- name: Create healthcheck with IP address for EC2 instance
  route53_health_check:
    state: "{{ healthcheck.state | default(healthcheck_defaults.state) }}"
    profile: "{{ route53_profile_id }}"
    region: "{{ vpc.region }}"
    ip_address: "{{ dns_value }}"
    type: "{{ healthcheck.type | default(healthcheck_defaults.type) }}"
    resource_path: "{{ healthcheck.resource_path | default(omit) }}"
    port: "{{ healthcheck.port | default(omit) }}"
    security_token: "{{ healthcheck.security_token | default(omit) }}"
    validate_certs: "{{ healthcheck.validate_certs | default(omit) }}"
    string_match: "{{ healthcheck.string_match | default(omit) }}"
    request_interval: "{{ healthcheck.request_interval | default(healthcheck_defaults.request_interval) }}"
    failure_threshold: "{{ healthcheck.failure_threshold | default(healthcheck_defaults.failure_threshold) }}"
  register: healthcheck
  when:
    - dns_type == "A"
    - dns_setting.healthcheck is defined
If the loop runs 5 times, it may only be defined in one iteration. If the health check task runs, the 'healthcheck' variable contains the details of the health check, e.g. its ID. If it does not run on a given iteration, the 'healthcheck' variable contains the following:
"healthcheck": {
"changed": false,
"skip_reason": "Conditional check failed",
"skipped": true
}
In my route53 task, the health check is omitted when the 'healthcheck' variable is undefined. However, if it is defined, Ansible attempts to dereference healthcheck.health_check.id, which doesn't exist on a skipped result.
If I try to set the health check to a default value when it isn't in use, e.g. {}, it is still defined and my route53 task fails.
Is there a way to force a variable or fact to be undefined? Something like:
- name: Undefine health check
  set_fact:
    healthcheck: undef

Try something like this:
- name: Create Public DNS record
  route53:
    # ... cut ...
    health_check: "{{ healthcheck | skipped | ternary(omit, (healthcheck.health_check | default({})).id) }}"
    # ... cut ...
This will pass omit if healthcheck was skipped, and healthcheck.health_check.id otherwise.
In my experience, default does not work properly on nested dicts two or more levels deep (i.e. it works with mydict.myitem | default('ok') but fails with mydict.mysubdict.myitem | default('ok')), so I used a hack and defaulted the sub-dict to {} first to safely access id: (healthcheck.health_check | default({})).id
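For reference, a minimal sketch of the same idea using the test form, which newer Ansible releases prefer over the deprecated skipped filter (this variant is my addition, not part of the original answer):

- name: Create Public DNS record
  route53:
    # ... cut ...
    health_check: "{{ omit if healthcheck is skipped else (healthcheck.health_check | default({})).id }}"
    # ... cut ...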

Related

Ansible "ec2_asg" module keeps changing the desired/min/max capacity settings

For cost reasons, our ASGs in the QA environment run with desired/min/max capacity set to "1". That's not the case for Production, but since we use the same code for QA and Prod deployments (minus a few variables, of course), this is causing problems with the QA automation jobs.
- name: create autoscale groups original_lc
  ec2_asg:
    name: "{{ app_name }}"
    target_group_arns: "{{ alb_target_group_facts.target_groups[0].target_group_arn }}"
    launch_config_name: "{{ launch_config.name }}"
    min_size: 1
    max_size: 1
    desired_capacity: 1
    region: "{{ region }}"
    vpc_zone_identifier: "{{ subnets | join(',') }}"
    health_check_type: "{{ health_check }}"
    replace_all_instances: yes
    wait_for_instances: false
    replace_batch_size: '{{ rollover_size }}'
    lc_check: false
    default_cooldown: "{{ default_cooldown }}"
    health_check_period: "{{ health_check_period }}"
    notification_topic: "{{ redeem_notification_group }}"
    tags:
      - Name: "{{ app_name }}"
      - Application: "{{ tag_Application }}"
      - Product_Owner: "{{ tag_Product_Owner }}"
      - Resource_Owner: "{{ tag_Resource_Owner }}"
      - Role: "{{ tag_Role }}"
      - Service_Category: "{{ tag_Service_Category }}"
  register: asg_original_lc
On the first run, the "ec2_asg" module creates the group properly, with the correct desired/min/max settings.
But when we run the job a second time to update the same ASG, it changes desired/min/max to "2" in AWS. We don't want that. We just want it to rotate out that one instance in the group. Is there a way to achieve that?
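One plausible explanation (an assumption on my part, not confirmed here): with replace_all_instances: yes, the module temporarily raises the group's min/desired/max by replace_batch_size so replacement instances can launch, and with wait_for_instances: false the play returns before the module scales the group back down, leaving the capacities at "2". A sketch of the variation to try:

- name: create autoscale groups original_lc
  ec2_asg:
    # ... same parameters as above ...
    replace_all_instances: yes
    wait_for_instances: true  # wait out the rollover so the module can restore size 1
    replace_batch_size: 1
  register: asg_original_lc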

Is it possible to loop into two different lists in the same playbook (Ansible)?

I'm writing an Ansible playbook and I want to loop over two different lists.
I know that I can use with_items to loop over a list, but can I use with_items twice in the same playbook?
Here is what I want to do:
- name: Deploy the network in fabric 1 and fabric 2
  tags: [merged]
  role_network:
    config:
      - net_name: "{{ networkName }}"
        vrf_name: "{{ vrf }}"
        net_id: 30010
        net_template: "{{ networkTemplate }}"
        net_extension_template: "{{ networkExtensionTemplate }}"
        vlan_id: "{{ vlan }}"
        gw_ip_subnet: "{{ gw }}"
        attach: "{{ item }}"
        deploy: false
    fabric: "{{ item }}"
    state: merged
  with_items:
    - "{{ attachs }}"
    - "{{ fabric }}"
  register: networks
So for the first call, I want to use the playbook with fabric[0] and attachs[0].
For the second call, I want to use the playbook with fabric[1] and attachs[1].
And so on...
Can someone help me please?
What you are looking to achieve is what with_together used to do, and it is now recommended to achieve with the zip filter.
So: loop: "{{ attachs | zip(fabric) | list }}",
where the element of the first list (attachs) is item.0 and the element of the second list (fabric) is item.1.
- name: Deploy the network in fabric 1 and fabric 2
  tags: [merged]
  role_network:
    config:
      - net_name: "{{ networkName }}"
        vrf_name: "{{ vrf }}"
        net_id: 30010
        net_template: "{{ networkTemplate }}"
        net_extension_template: "{{ networkExtensionTemplate }}"
        vlan_id: "{{ vlan }}"
        gw_ip_subnet: "{{ gw }}"
        attach: "{{ item.0 }}"
        deploy: false
    fabric: "{{ item.1 }}"
    state: merged
  loop: "{{ attachs | zip(fabric) | list }}"
  register: networks
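As an aside (my addition, with hypothetical values): zip stops at the end of the shorter list, so with the vars below the task runs twice, pairing attach-a with fabric-1 and then attach-b with fabric-2. If the lists can differ in length, the zip_longest filter pads the shorter one instead.

vars:
  attachs: [attach-a, attach-b]
  fabric: [fabric-1, fabric-2]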

ansible list files in a directory

Can someone explain to me why this doesn't work? I want to get a list of files within a directory and use it as the input for a loop.
---
tasks:
  - set_fact:
      capabilities: []
  - name: find CE_Base capabilities
    find:
      paths: /opt/netsec/ansible/orchestration/capabilities/CE_BASE
      patterns: '*.yml'
    register: CE_BASE_capabilities
  - name: debug_files
    debug:
      msg: "{{ item.path }}"
    with_items: "{{ CE_BASE_capabilities.files }}"
  - set_fact:
      thispath: "{{ item.path }}"
      capabilities: "{{ capabilities + [ thispath ] }}"
    with_items: "{{ CE_BASE_capabilities.files }}"
  - name: Include CE_BASE
    include_tasks: /opt/netsec/ansible/orchestration/process_capabilities_CE_BASE.yml
    loop: "{{ capabilities }}"
Edit:
This code is attempting to create a list called capabilities, which contains the files in a particular directory.
When I ran this code without trying to get the files automatically, it looked like this:
- hosts: localhost
  vars:
    CE_BASE_capabilities:
      - '/opt/netsec/ansible/orchestration/capabilities/CE_BASE/CE_BASE_1.yml'
      - '/opt/netsec/ansible/orchestration/capabilities/CE_BASE/CE_BASE_2.yml'
  tasks:
    - name: Include CE_BASE
      include_tasks: /opt/netsec/ansible/orchestration/process_capabilities_CE_BASE.yml
      loop: "{{ CE_BASE_capabilities }}"
Don't define thispath as a fact but as a local var on the set_fact task. Besides that, you don't need to initialize capabilities if you use the default filter.
- vars:
    thispath: "{{ item.path }}"
  set_fact:
    capabilities: "{{ capabilities | default([]) + [ thispath ] }}"
  with_items: "{{ CE_BASE_capabilities.files }}"
Moreover, you don't even need to loop. You can extract the info directly from the existing result:
- set_fact:
    capabilities: "{{ CE_BASE_capabilities.files | map(attribute='path') | list }}"
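And if the directory lives on the Ansible controller rather than on a remote host, the find task can be skipped entirely with a fileglob lookup (a sketch of an alternative, assuming the path is local to the controller):

- name: Include CE_BASE
  include_tasks: /opt/netsec/ansible/orchestration/process_capabilities_CE_BASE.yml
  loop: "{{ lookup('fileglob', '/opt/netsec/ansible/orchestration/capabilities/CE_BASE/*.yml', wantlist=True) }}"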

Ansible in AWS, list processing question using ec2_instance_info for several nodes

I am running several Ansible playbooks in AWS and I am having difficulty with a test YAML file. The purpose of the file is to query AWS for a list of servers using a filter and to set_fact the instance name, the instance ID, the instance size and the private IP.
The code I have returns data only for the first node in the list and repeats the debug line every 12 lines; all the other lines show no data. I am using ec2_instance_info to get the various data about the instances.
Here is the Ansible YAML file:
---
# This script gathers the Instance IDs et al.
- name: Get EC2 Info
  ec2_instance_info:
    region: '{{ aws_region }}'
    aws_access_key: "{{ lookup('ini', 'aws_access_key_id section=saml file=~/.aws/credentials') }}"
    aws_secret_key: "{{ lookup('ini', 'aws_secret_access_key section=saml file=~/.aws/credentials') }}"
    security_token: "{{ lookup('ini', 'aws_session_token section=saml file=~/.aws/credentials') }}"
    filters:
      "tag:Name": "test-envMan*"
  register: Instance_ID
- name: Get Instance ID
  debug:
    msg: "{{ item.0 }} | {{ item.1 }} | {{ item.2 }} | {{ item.3 }}"
  with_together:
    - "{{ Instance_ID.instances | map(attribute='tags.Name') | list }}"
    - "{{ Instance_ID.instances[0].instance_id }}"
    - "{{ Instance_ID.instances[1].instance_type }}"
    - "{{ Instance_ID.instances[2].private_ip_address }}"
- name: Gather and Save info
  set_fact:
    Tag_Name: "{{ Instance_ID.instances | map(attribute='tags.Name') | list }}"
    Instance_ID: "{{ Instance_ID.instances[0].instance_id }}"
    Instance_Size: "{{ Instance_ID.instances[1].instance_type }}"
    Instance_PrivIP: "{{ Instance_ID.instances[2].private_ip_address }}"
The output shows 12 lines of Ansible "ok" output for each server, the first of which includes the debug output of the expected fields for the first node.
So: 1 line of "ok" log output, then the debug line, then 11 more lines of "ok" output for the same node; then 1 line of "ok" output for the second node, then the debug line for the first node again, and so on.
I need to discover what I am doing incorrectly and how to make it behave.
Any comments, suggestions or pointers are appreciated.
Thanks.
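For what it's worth, a sketch of one likely fix (my reading, not an answer from the original thread): the indexed references Instance_ID.instances[0], [1] and [2] each pick a single fixed instance instead of building per-instance lists, so the lists handed to with_together are mismatched. Looping over the instances list once and reading every attribute from item keeps each node's fields together:

- name: Get Instance ID
  debug:
    msg: "{{ item.tags.Name }} | {{ item.instance_id }} | {{ item.instance_type }} | {{ item.private_ip_address }}"
  loop: "{{ Instance_ID.instances }}"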

Assign ansible vars based on AWS tags

I'm trying to figure out a way to assign variables in Ansible based on tags I have in AWS. I was experimenting with ec2_remote_tags, but it returns a lot more information than I need. It seems like there should be an easier way to do this and I'm just not thinking of it.
For example, if I have a tag called function whose value api lands the host in the tag_function_api group via dynamic inventory, I want to assign a variable function with the value api. Any ideas on an efficient way to do this?
I've managed to make a dict of tags with lists of values:
- hosts: localhost
  tasks:
    - ec2_remote_facts:
        region: eu-west-1
      register: ec2_facts
    # get all possible tag names
    - set_fact: tags="{{ item.keys() }}"
      with_items: "{{ ec2_facts.instances | map(attribute='tags') | list }}"
      register: tmp_tags
    # get flattened list of tags (for some reason lookup() returns a string, so we use with_)
    - assert: that=true
      with_flattened: "{{ tmp_tags.results | map(attribute='ansible_facts.tags') | list }}"
      register: tmp_tags
    # get unique tag names
    - set_fact: tags="{{ tmp_tags.results | map(attribute='item') | list | unique }}"
    - set_fact: my_tags="{{ {} }}"
    # get all possible values for a given tag
    - set_fact:
        my_tags: "{{ my_tags | combine( {''+item: ec2_facts.instances | map(attribute='tags.'+item) | select('defined') | list | unique} ) }}"
      with_items: "{{ tags }}"
    - debug: var=my_tags
If you are using Ansible's ec2.py dynamic inventory script it makes all tags available as host variables in the form ec2_tag_<tag name> = <tag value>. It also adds all EC2 hosts to the group ec2.
So if your EC2 instance has a tag AwesomeVariable = "Greatness" and you want that value assigned to the Ansible host variable stupendous you can do this:
- name: Register variables based on tags
  set_fact:
    stupendous: "{{ ec2_tag_AwesomeVariable }}"
  when: "'ec2' in group_names"
After this runs you can use the variable stupendous for your EC2 hosts and it has the value set for the AwesomeVariable tag.
I was able to get this to work based on some more information I found here: https://groups.google.com/forum/#!topic/ansible-project/ES2CjMPps3M
Here is the code that worked for us:
- name: Retrieve all tags on an instance
  ec2_tag:
    region: '{{ ec2_region }}'
    resource: '{{ ec2_id }}'
    state: list
    aws_access_key: "{{ ANSIBLE_IAM_KEY }}"
    aws_secret_key: "{{ ANSIBLE_IAM_SECRET }}"
  register: ec2_facts

- name: register variables based on tag
  set_fact:
    tt_function: "{{ ec2_facts.tags.Function }}"
    tt_release: "{{ ec2_facts.tags.Release }}"
    tt_client: "{{ ec2_facts.tags.Client }}"
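On newer Ansible releases, where state: list has been removed from ec2_tag, the same lookup can be done with the ec2_tag_info module from the amazon.aws/community.aws collections. A minimal sketch (my addition, assuming the same credential setup as above):

- name: Retrieve all tags on an instance
  ec2_tag_info:
    region: '{{ ec2_region }}'
    resource: '{{ ec2_id }}'
    aws_access_key: "{{ ANSIBLE_IAM_KEY }}"
    aws_secret_key: "{{ ANSIBLE_IAM_SECRET }}"
  register: ec2_facts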