Ansible ec2_* modules - filters option

Is there a way to define multiple values for the filters option on some of the ec2_* modules?
For example, the following play terminates all instances with the tag:Name value Testing. How would I terminate all servers whose tag:Name is Testing or Staging?
ec2_instance:
  region: "{{ region }}"
  profile: "{{ lookup( 'env', 'AWS_PROFILE' ) }}"
  state: absent
  filters:
    tag:Name: Testing

Each filter's value can be a list:
filters:
  'tag:Name':
    - Testing
    - Staging
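For example, a sketch of the full task from the question, adjusted to terminate both sets of instances (same parameters as above):

ec2_instance:
  region: "{{ region }}"
  profile: "{{ lookup('env', 'AWS_PROFILE') }}"
  state: absent
  filters:
    'tag:Name':
      - Testing
      - Staging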

Related

Is there any analogy in Ansible to AWS intrinsic function Fn::Cidr?

I can obtain a list of VPC subnets using Ansible playbook:
tasks:
  - name: Gathering VPC info ...
    amazon.aws.ec2_vpc_subnet_info:
      region: "eu-east-1"
      filters:
        vpc-id: vpc-433434432aad778ad
    register: output

  - name: Register new var
    ansible.builtin.set_fact:
      cidr_list: "{{ cidr_list | default([]) + [item.cidr_block] }}"
    loop: "{{ output.subnets }}"

  - name: Debugger...
    ansible.builtin.debug:
      msg: "{{ cidr_list }}"
What I want now is to calculate all the IPv4 subnets, given the size of each subnet and the initial VPC CIDR (this can be done successfully with AWS Fn::Cidr):

  "Fn::Cidr" : ["10.0.0.0/16", 15, 29 ]

which creates a list of 15 subnets, each with a /29 mask. My goal is then to compare the two lists and, if an unused CIDR appears in the Fn::Cidr list, use that one.
Is there an Ansible module (or filter) that accomplishes the same task as Fn::Cidr?
The ipsubnet filter will do what you want, but it may require a loop, because I don't think it is designed out of the box to produce 15 subnets at a time.
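For example, a minimal sketch (assuming the ansible.utils collection, which provides ipsubnet, and the netaddr Python library are installed, and reusing the cidr_list gathered above):

  - name: Build 15 candidate /29 subnets from the VPC CIDR
    ansible.builtin.set_fact:
      candidate_cidrs: "{{ candidate_cidrs | default([]) + ['10.0.0.0/16' | ansible.utils.ipsubnet(29, item)] }}"
    loop: "{{ range(0, 15) | list }}"

  - name: Keep only the candidates not already used by existing subnets
    ansible.builtin.set_fact:
      free_cidrs: "{{ candidate_cidrs | difference(cidr_list) }}"

ipsubnet(29, item) returns the item-th /29 subnet of the given network (0-based), and the difference filter drops the CIDRs that are already in use.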

Create a new array with SecurityGroupIds instead of SecurityGroupNames with Ansible

I am relatively new to Ansible and I am struggling to understand how to perform the following scenario:
I have an array with AWS security group names looking like this
['Security-Group-Name1', 'SecurityGroup-Name2', 'SecurityGroup-Name3']
However, what I want is an array of SecurityGroupIds. In Ansible I have ec2_group_info as an option to retrieve information about a security group. So far so good ...
Now comes my question. I need to loop through the above array using ec2_group_info, set the name of the security group I need, and append the retrieved id to a new array, so in the end I have something like this:
['Security-Group-Id1', 'SecurityGroup-Id2', 'SecurityGroup-Id3']
I know I need to use a loop with sort of a dynamic index. But it is not really clear to me how to do this in Ansible.
I am aware of the loops section of the latest Ansible docs, but I find it more than confusing...
https://docs.ansible.com/ansible/latest/user_guide/playbooks_loops.html
Edit:
This is the current code which works as needed:
- name: Installing pip if not existing on host
  pip:
    name: boto3

- name: Get SecurityGroupId information
  ec2_group_info:
    filters:
      group_name: ['SG-One', 'SG-Two']
      vpc_id: 'vpc-id'
  register: my_groups

- set_fact:
    my_group_ids: '{{ my_groups.security_groups | map(attribute="group_id") | list }}'

- debug:
    msg: "{{ my_groups }}"

- debug:
    msg: "{{ my_group_ids }}"
This is the outcome:
TASK [Gathering Facts] ***************************************************
ok: [localhost]
TASK [machine-provisioning : Installing pip if not existing on host] ************
ok: [localhost]
TASK [machine-provisioning : Get SecurityGroupId information] *************************
ok: [localhost]
TASK [machine-provisioning : set_fact] *********************************
ok: [localhost]
TASK [machine-provisioning : debug] ***********************************************
ok: [localhost] => {
    "msg": [
        "sg-00000000",
        "sg-11111111"
    ]
}
On that linked page about loops, you'll notice the use of register:, which is how you capture the result of the ec2_group_info: lookup. You then use the map Jinja filter to extract the group IDs from the resulting list, as map(attribute="group_id"); you have to feed the output of map into the list filter, because map and a few other filters are Python generators and need a terminal action to materialize their data. set_fact: is how Ansible does "assignment".
- ec2_group_info:
    filters:
      group_name: '{{ the_group_names }}'
      vpc_id: '{{ my_vpc_id }}'
  register: my_groups

- set_fact:
    my_group_ids: '{{ my_groups.security_groups | map(attribute="group_id") | list }}'
yields:
ok: [localhost] => {"ansible_facts": {"my_group_ids": ["sg-0c5c277ed1edafb54", "sg-7597a123"]}, "changed": false}
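If you also need to know which id belongs to which name, a small sketch (built from the same my_groups result) using the documented zip/dict pattern to produce a name-to-id map:

- set_fact:
    sg_name_to_id: "{{ dict(my_groups.security_groups | map(attribute='group_name') | zip(my_groups.security_groups | map(attribute='group_id'))) }}"

sg_name_to_id is just an illustrative variable name; it would yield something like {'SG-One': 'sg-00000000', 'SG-Two': 'sg-11111111'}.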

Parsing variables in ansible inventory in python

I'm trying to parse ansible variables using python specified in an inventory file like below:
[webservers]
foo.example.com type=news
bar.example.com type=sports
[dbservers]
mongodb.local type=mongo region=us
mysql.local type=mysql region=eu
I want to be able to parse type=news for host foo.example.com in webservers and type=mongo region=us for host mongodb.local under dbservers. Any help with this is greatly appreciated
The play below
- name: List type=news hosts in the group webservers
  debug:
    msg: "{{ hostvars[item].inventory_hostname }}"
  loop: "{{ groups['webservers'] }}"
  when: hostvars[item].type == "news"

- name: List type=mongo and region=us hosts in the group dbservers
  debug:
    msg: "{{ hostvars[item].inventory_hostname }}"
  loop: "{{ groups['dbservers'] }}"
  when:
    - hostvars[item].type == "mongo"
    - hostvars[item].region == "us"
gives:
"msg": "foo.example.com"
"msg": "mongodb.local"
If the playbook is run on the host foo.example.com, you can get "type = news" simply by referencing "{{ type }}". If you want to use it in a when condition, simply reference type.
If the playbook is run on the host mongodb.local, then the value of "type" will automatically be "mongo", and "region" will automatically be "us".
The values of the variables, if they are defined in the hosts file as you showed, are automatically resolved on the specified hosts.
Thus the playbook can be executed on all hosts, and if you print the value of "type", for example:

- debug:
    msg: "{{ type }}"

each host will give you the unique values defined in the hosts file.
I'm not sure that I understood the question correctly, but if it meant that on the foo.example.com host you need a list of servers from the "webservers" group that have "type = news", then the answer is already given above.
Rather than re-inventing the wheel, I suggest you have a look at how Ansible itself parses ini files to turn them into an inventory object.
You could also easily get this info in JSON format with a very simple playbook (as suggested by @Vladimir Botka; a sketch of such a playbook follows the YAML inventory below), or rewrite your inventory in YAML, which would be much easier to parse with any external tool.
inventory.yaml
---
all:
  children:
    webservers:
      hosts:
        foo.example.com:
          type: news
        bar.example.com:
          type: sports
    dbservers:
      hosts:
        mongodb.local:
          type: mongo
          region: us
        mysql.local:
          type: mysql
          region: eu
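And a minimal sketch of the "very simple playbook" route mentioned above, which prints each host's variables so they can be read back as JSON (variable names taken from the inventory in the question):

- name: Dump per-host inventory variables
  hosts: webservers:dbservers
  gather_facts: false
  tasks:
    - ansible.builtin.debug:
        msg: "{{ inventory_hostname }}: type={{ type | default('') }} region={{ region | default('') }}"

Running it with ansible-playbook and parsing the output (or using a JSON stdout callback) gives the per-host values without writing a custom ini parser.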

Limit hosts using Workflow Template

I'm using Ansible AWX (Tower) and have a template workflow that executes several templates one after the other, based on if the previous execution was successful.
I noticed I can limit to a specific host when running a single template. I'd like to apply this to a workflow, and my guess is I would have to use the survey option to achieve this; however, I'm not sure how.
I have tried to see if I can override the "hosts" value and that failed like I expected it to.
How can I go about having it ask me at the beginning of the workflow for the hostname/ip and not for every single template inside the workflow?
You have the set_stats option.
Let's suppose you have the following inventory:
10.100.1.1
10.100.1.3
10.100.1.6
Your inventory is called MyOfficeInventory. The first rule is that you need to use this same inventory across all your Templates in order to work with the host selected in the first one.
I want to ping only my 10.100.1.6 machine, so in the Template I choose MyOfficeInventory and limit to 10.100.1.6.
If we do:
---
- name: Ping
  hosts: all
  gather_facts: False
  connection: local
  tasks:
    - name: Ping
      ping:
We get:
TASK [Ping] ********************************************************************
ok: [10.100.10.6]
Cool! So from MyOfficeInventory only my selected host was pinged. Now, in my workflow, the next Template also has MyOfficeInventory selected (this is the rule mentioned above). If I ping from it, I will ping all of the hosts unless I limit again, so let's do the magic:
In your first Template do:
- name: add devices with connectivity to the "working_hosts" group
  group_by:
    key: working_hosts

- name: "Artifact URL of test results to Tower Workflows"
  set_stats:
    data:
      myinventory: "{{ groups['working_hosts'] }}"
  run_once: True
Be careful, because for your playbook,
groups['all']
means:
"groups['all']": [
"10.100.10.1",
"10.100.10.3",
"10.100.10.6"
]
And with your new working_hosts group, you get only your current host:
"groups['working_hosts']": [
"10.100.10.6"
]
So now you have your brand new myinventory inventory.
Use it like this in the rest of your Playbooks assigned to your Templates:
- name: Ping
  hosts: "{{ myinventory }}"
  gather_facts: False
  tasks:
    - name: Ping
      ping:
Your inventory variable will be transferred and you will get:
ok: [10.100.10.6]
One step further. Do you want to select your host from a Survey?
Create one with your hostname as the input and keep your first Playbook as:
- name: Ping
  hosts: "{{ mysurveyhost }}"
  gather_facts: False
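A minimal sketch of that first Playbook (mysurveyhost is the illustrative survey variable name), which also publishes the answer to the rest of the workflow with the same set_stats mechanism shown earlier:

- name: Ping the surveyed host and pass it on to the workflow
  hosts: "{{ mysurveyhost }}"
  gather_facts: False
  tasks:
    - name: Ping
      ping:

    - name: Publish the surveyed host for later templates
      set_stats:
        data:
          myinventory: "{{ mysurveyhost }}"
      run_once: True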

Ansible returns wrong hosts in dynamic inventory (private ip collision?)

I have two instances on different VPCs which have the same private address.
ci-vpc:
  172.18.50.180:
    tags:
      Environment: ci
      Role: aRole

test-vpc:
  172.18.50.180:
    tags:
      Environment: test
      Role: web
I am running the following playbook:
- name: "print account specific variables"
hosts: "tag_Environment_ci:&tag_Role_web"
tasks:
- name: "print account specific variables for account {{ account }}"
debug:
msg:
- 'ec2_tag_Name': "{{ ec2_tag_Name }}"
'ec2_tag_Role': "{{ ec2_tag_Role }}"
'ec2_private_ip_address': "{{ ec2_private_ip_address }}"
'ec2_tag_Environment': "{{ ec2_tag_Environment }}"
Since I am asking for both role web and environment ci, none of these instances should be picked, but nevertheless the result that I am getting is:
ok: [172.18.50.180] => {
    "changed": false,
    "msg": [
        {
            "ec2_private_ip_address": "172.18.50.180",
            "ec2_tag_Environment": "test",
            "ec2_tag_Name": "test-web-1",
            "ec2_tag_Role": "web"
        }
    ]
}
Obviously this instance does not meet the requirements under hosts...
It seems like ec2.py searched for the Environment tag and found ci for 172.18.50.180, then searched separately for the Role tag and found another match under 172.18.50.180, and simply marked that instance as ok, even though these are two different instances in different VPCs.
I've tried changing vpc_destination_variable in ec2.ini to id, but then I get an error when Ansible tries to connect to these instances, because it cannot connect to the instance id...
fatal: [i-XXX]: UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname i-XXX: Name or service not known\r\n",
    "unreachable": true
}
Is there another option that will work under vpc_destination_variable? Any known solution for such a collision?
tl;dr: This is exactly what hostname_variable in ec2.ini is for, as documented:
# This allows you to override the inventory_name with an ec2 variable, instead
# of using the destination_variable above. Addressing (aka ansible_ssh_host)
# will still use destination_variable. Tags should be written as 'tag_TAGNAME'.
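For the simple case a single variable is enough, e.g. a sketch of an ec2.ini entry (the tag name is just an example):

hostname_variable = tag_Name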
Unfortunately I missed it and only found it after looking around in ec2.py.
Longer answer, with additional options for hostnames
After finding out about hostname_variable, I hit another problem: it can receive only one variable. In my case I had some instances with the same private IP on one hand, and some with the same tags on the other (AWS auto scaling groups, same tags on all hosts), so I needed a way to differentiate between them.
I've created a gist with this option. My change is in line 848. It allows you to use multiple comma-separated variables in hostname_variable, e.g.:
hostname_variable = tag_Name,private_ip_address