Ansible AWS create and manipulate EC2

Hello, I use this playbook to create an EC2 instance.
I used this documentation to write my playbook:
https://docs.ansible.com/ansible/latest/collections/amazon/aws/ec2_module.html#parameter-instance_tags
### Example: launch an instance ###
### ansible-playbook --extra-vars "hosts=ansible-test" ###
- hosts: localhost
  ## Init variables ##
  vars:
    keypair: "AWS-KEYS-WEBSERVER001"
    instance_type: t2.micro
    hosts: "{{ hosts }}"
    groups: "Web"
    image: "ami-0ea4a063871686f37"
  tasks:
    - name: startup new instance
      community.aws.ec2_instance:
        key_name: "{{ keypair }}"
        security_group: "Web"
        instance_type: "{{ instance_type }}"
        name: "fermela"
        image_id: "{{ image }}"
        wait: true
        region: "eu-west-3"
        network:
          assign_public_ip: true
        vpc_subnet_id: "subnet-0c82e6027833af6cc"
      register: ec2

    - debug:
        var: ec2
My playbook works, and the debug task gives me output like:
ok: [localhost] => {
    "ec2": {
        "changed": false,
        "changes": [],
        "failed": false,
        "instance_ids": [
            "i-0ecb077aafd7fda1c"
        ],
        "instances": [
            {
                "ami_launch_index": 0,
                "architecture": "x86_64",
                "block_device_mappings": [
                    {
                        "device_name": "/dev/xvda",
                        "ebs": {
                            "attach_time": "2021-02-23T00:34:53+00:00",
                            "delete_on_termination": true,
                            "status": "attached",
                            "volume_id": "vol-0ad6a503c6cfa7f97"
                        }
                    }
                ],
I would like to manipulate this output to display only the public IP.
Can someone help me please?
I already tried debug with var: ec2.public_ip but it doesn't work.

ec2_instance returns a list of instances; in this case you have just one instance. Try as below:
- debug:
    var: ec2.instances[0].public_ip_address
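If the play ever launches more than one instance, a sketch using Jinja2's map filter (assuming the same ec2 register, and that each instance actually has a public_ip_address assigned) would collect every address at once:

```yaml
- debug:
    msg: "{{ ec2.instances | map(attribute='public_ip_address') | list }}"
```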

You can't do this, as in your output there is no public IP address.
Try this: use ansible_facts:
- debug:
    var: ansible_facts['ansible_ec2_public_ipv4']
Only the variables shown in the debug output after registering can be accessed.
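The ansible_ec2_public_ipv4 fact comes from the EC2 instance metadata service, so this approach only works when the play targets the new instance itself, not localhost. A sketch (assuming the amazon.aws collection is installed and an inventory host for the new instance exists):

```yaml
- hosts: new_ec2_host  # hypothetical inventory name for the launched instance
  tasks:
    # Queries the instance metadata service for EC2 facts
    - amazon.aws.ec2_metadata_facts:

    - debug:
        var: ansible_facts.ansible_ec2_public_ipv4
```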

Related

Ansible and AWS Subnets

I am relatively new to working with Ansible Core / Tower and I am at a complete loss as to what is causing the following issues. I have spent the past two days reading everything I could find on the topic and I am still stuck, looking for help.
Here is what I currently have set up (among other Ansible playbooks, roles, and tasks to create a brand new VPC).
Below are the tasks that I am using to create a set of new subnets, one per availability zone, and get the results back from what is created. These tasks all work perfectly, as verified through the AWS Console.
### Create the Internet-facing DMZ subnets ###
- name: Create Subnet(s) in VPC - DMZ
  ec2_vpc_subnet:
    state: present
    vpc_id: "{{ new_vpc_info['vpcs'][0]['id'] }}"
    region: "{{ vpc_region }}"
    az: "{{ item.az }}"
    cidr: "{{ item.subnet }}"
    resource_tags:
      Name: "{{ item.name }}"
      Role: "{{ role_tag }}"
      Team: "{{ team_tag }}"
      Product Area: "{{ product_area_tag }}"
      Portfolio: "{{ portfolio_tag }}"
  with_items: "{{ dmz_subnet_az }}"

- name: Get Subnet Info - DMZ
  ec2_vpc_subnet_facts:
    region: "{{ vpc_region }}"
    filters:
      "tag:Name": "{{ item.name }}"
  with_items: "{{ dmz_subnet_az }}"
  register: new_dmz_subnets

- debug:
    var: new_dmz_subnets
The output of the "debug" command is provided below, truncated to remove the rest of the subnets and redacted so I do not get yelled at, which matches up to what is in the AWS Console.
{
    "changed": false,
    "_ansible_verbose_always": true,
    "new_dmz_subnets": {
        "msg": "All items completed",
        "changed": false,
        "results": [
            {
                "_ansible_parsed": true,
                "subnets": [
                    {
                        "tags": {
                            "Product Area": "Engineering Tools",
                            "Portfolio": "Shared Platform and Operations",
                            "Role": "splunk-proof-of-concept",
                            "Name": "DMZ_Subnet_A",
                            "Team": "Engineering Tools"
                        },
                        "subnet_id": "subnet-XXXX",
                        "assign_ipv6_address_on_creation": false,
                        "default_for_az": false,
                        "state": "available",
                        "ipv6_cidr_block_association_set": [],
                        "availability_zone": "us-east-1a",
                        "vpc_id": "vpc-XXXX",
                        "cidr_block": "x.x.x.x/24",
                        "available_ip_address_count": 251,
                        "id": "subnet-XXXX",
                        "map_public_ip_on_launch": false
                    }
                ],
                "changed": false,
                "_ansible_item_label": {
                    "subnet": "x.x.x.x/24",
                    "az": "us-east-1a",
                    "name": "DMZ_Subnet_A"
                },
                "item": {
                    "subnet": "x.x.x.x/24",
                    "az": "us-east-1a",
                    "name": "DMZ_Subnet_A"
                },
                "_ansible_item_result": true,
                "failed": false,
                "invocation": {
                    "module_args": {
                        "profile": null,
                        "aws_secret_key": null,
                        "aws_access_key": null,
                        "security_token": null,
                        "region": "us-east-1",
                        "filters": {
                            "tag:Name": "DMZ_Subnet_A"
                        },
                        "ec2_url": null,
                        "subnet_ids": [],
                        "validate_certs": true
                    }
                },
                "_ansible_ignore_errors": null,
                "_ansible_no_log": false
            },
            {
                "_ansible_parsed": true,
                "subnets": [
                    {
                        "tags": {
                            "Product Area": "Engineering Tools",
                            "Portfolio": "Shared Platform and Operations",
                            "Role": "splunk-proof-of-concept",
                            "Name": "DMZ_Subnet_B",
                            "Team": "Engineering Tools"
                        },
                        "subnet_id": "subnet-XXXX",
                        "assign_ipv6_address_on_creation": false,
                        "default_for_az": false,
                        "state": "available",
                        "ipv6_cidr_block_association_set": [],
                        "availability_zone": "us-east-1b",
                        "vpc_id": "vpc-XXXX",
                        "cidr_block": "x.x.x.x/24",
                        "available_ip_address_count": 251,
                        "id": "subnet-XXXX",
                        "map_public_ip_on_launch": false
                    }
                ],
                "changed": false,
                "_ansible_item_label": {
                    "subnet": "x.x.x.x/24",
                    "az": "us-east-1b",
                    "name": "DMZ_Subnet_B"
                },
                "item": {
                    "subnet": "x.x.x.x/24",
                    "az": "us-east-1b",
                    "name": "DMZ_Subnet_B"
                },
                "_ansible_item_result": true,
                "failed": false,
                "invocation": {
                    "module_args": {
                        "profile": null,
                        "aws_secret_key": null,
                        "aws_access_key": null,
                        "security_token": null,
                        "region": "us-east-1",
                        "filters": {
                            "tag:Name": "DMZ_Subnet_B"
                        },
                        "ec2_url": null,
                        "subnet_ids": [],
                        "validate_certs": true
                    }
                },
                "_ansible_ignore_errors": null,
                "_ansible_no_log": false
            },
            ......
        ]
    },
    "_ansible_no_log": false
}
Now onto the tasks that I am having issues getting working, below is my most recent attempt, which may be completely in left field due to me trying everything I found to get it working. I am attempting to get a list of the "subnet_id" from the registered "new_dmz_subnets" variable, then concatenating it with a "name" that is set in a vars file, and finally using that information to create a NAT Gateway within each of the subnets.
### Create the NAT Gateway in VPC ###
- name: Set DMZ Subnet facts
  set_fact:
    subnet_id_items:
      subnet_id: '{{ item.subnets | map(attribute="subnet_id") | list }}'
  with_items: "{{ new_dmz_subnets }}"
  register: subnet_id_list

- name: Set Name and DMZ Subnet loop facts
  set_fact:
    name_subnet_items:
      name: "{{ nat_gateway.name }}"
      subnet_id: "{{ item.subnet_id }}"
  loop: "{{ subnet_id_list }}"
  register: name_subnet_list

- debug:
    var: name_subnet_list

- name: Create NAT Gateway, allocate new EIP, in VPC
  ec2_vpc_nat_gateway:
    state: present
    subnet_id: "{{ item.subnet_id }}"
    region: "{{ vpc_region }}"
    wait: yes
    if_exist_do_not_create: true
    tags:
      Name: "{{ item.name }}"
      Role: "{{ role_tag }}"
      Team: "{{ team_tag }}"
      Product Area: "{{ product_area_tag }}"
      Portfolio: "{{ portfolio_tag }}"
  with_items: "{{ name_subnet_list }}"
  register: new_nat_gateway

- debug:
    var: new_nat_gateway
When I ran this setup, I got the following fatal error message, which is pretty much the same across every variation I have attempted.
fatal: [localhost]: FAILED! => {
    "msg": "The task includes an option with an undefined variable. The error was: 'ansible.utils.unsafe_proxy.AnsibleUnsafeText object' has no attribute 'subnets'\n\nThe error appears to have been in '/var/lib/awx/projects/_6__erik_andresen_git/ansible/splunk_poc_playbook/roles/create_networking_role/tasks/create_gateways_task.yml': line 21, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n### Starting working on this Task ###\n- name: Set DMZ Subnet facts\n ^ here\n"
}
Please let me know if I can provide any additional details and thanks in advance for the help!!!
-- Erik
I came across a setup that actually works! It may not be the best way to do it and I am still open to suggestions, but it at least works.
Here is the code for the "ec2_vpc_subnet" module, collecting the Subnet IDs for later use in the playbook.
### Create the Internet-facing DMZ subnets ###
- name: Create Subnet(s) in VPC - DMZ
  ec2_vpc_subnet:
    state: present
    vpc_id: "{{ vpc_id }}"
    region: "{{ vpc_region }}"
    az: "{{ item.az }}"
    cidr: "{{ item.subnet }}"
    resource_tags:
      Name: "{{ item.name }}"
      Role: "{{ role_tag }}"
      Team: "{{ team_tag }}"
      Product Area: "{{ product_area_tag }}"
      Portfolio: "{{ portfolio_tag }}"
      Created By: "{{ created_by }}"
  with_items: "{{ dmz_subnet_az }}"
  register: new_dmz_subnets

- name: Set facts for Subnet - DMZ
  set_fact:
    subnet_dmz_id: "{{ subnet_dmz_id | default({}) | combine({ item.subnet.tags.Name: item.subnet.id }) }}"
  loop: "{{ new_dmz_subnets.results }}"

- debug:
    var: subnet_dmz_id
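The combine filter in the set_fact above folds each loop result into one name-to-id dict. For anyone unsure what it produces, here is a small pure-Python sketch of the same fold (the sample subnet IDs are invented for illustration):

```python
def subnet_ids_by_name(results):
    """Mimic: subnet_dmz_id | default({}) | combine({item.subnet.tags.Name: item.subnet.id})"""
    subnet_dmz_id = {}
    for item in results:
        # each loop item is one ec2_vpc_subnet result carrying a 'subnet' dict
        subnet_dmz_id.update({item["subnet"]["tags"]["Name"]: item["subnet"]["id"]})
    return subnet_dmz_id

# sample data shaped like new_dmz_subnets.results (IDs invented)
results = [
    {"subnet": {"id": "subnet-aaaa", "tags": {"Name": "DMZ_Subnet_A"}}},
    {"subnet": {"id": "subnet-bbbb", "tags": {"Name": "DMZ_Subnet_B"}}},
]
print(subnet_ids_by_name(results))
# {'DMZ_Subnet_A': 'subnet-aaaa', 'DMZ_Subnet_B': 'subnet-bbbb'}
```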
And here is the use of the Subnet IDs in the "ec2_vpc_nat_gateway" module to create a NAT Gateway within each Availability Zone.
### Create the NAT Gateway in VPC ###
- name: Create NAT Gateway, allocate new EIP, in VPC
  ec2_vpc_nat_gateway:
    state: present
    # NAT Gateways being deployed in DMZ subnets
    subnet_id: "{{ subnet_dmz_id[item.subnet_name] }}"
    region: "{{ vpc_region }}"
    wait: yes
    if_exist_do_not_create: true
    # Tags not supported in the "ec2_vpc_nat_gateway" module
    # https://github.com/ansible/ansible/issues/44339
    #tags:
    #  Name: "{{ item.name }}"
    #  Role: "{{ role_tag }}"
    #  Team: "{{ team_tag }}"
    #  Product Area: "{{ product_area_tag }}"
    #  Portfolio: "{{ portfolio_tag }}"
    #  Created By: "{{ created_by }}"
  with_items: "{{ nat_gateway }}"
  register: new_nat_gateway

- debug:
    var: new_nat_gateway

AWS AMI Cleanup w/Ansible iterate through results array

I have a previous task that creates weekly backups, labeling them with the server name followed by a date/time tag. The goal of this job is to go in behind it and clean up the old AMI backups, leaving only the last 3. The ec2_ami_find task works fine, but it could also return empty results for some servers and I'd like the deregister task to handle that.
The error I'm getting is pretty generic:
fatal: [127.0.0.1]: FAILED! => {
    "failed": true,
    "msg": "The conditional check 'item.ec2_ami_find.exists' failed. The error was: error while evaluating conditional (item.ec2_ami_find.exists): 'dict object' has no attribute 'ec2_ami_find'\n\nThe error appears to have been in '/root/ansible/ec2-backups-purge/roles/first_acct/tasks/main.yml': line 25, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Deregister old backups\n ^ here\n"
}
The playbook task reads as follows:
---
- name: Find old backups
  tags: always
  ec2_ami_find:
    owner: self
    aws_access_key: "{{ access_key }}"
    aws_secret_key: "{{ secret_key }}"
    region: "{{ aws_region }}"
    ami_tags:
      Name: "{{ item }}-weekly-*"
    sort: name
    sort_order: descending
    sort_start: 3
  with_items:
    - server-01
    - server-02
    - server-win-01
    - downloads
  register: stale_amis

- name: Deregister old backups
  tags: always
  ec2_ami:
    aws_access_key: "{{ access_key }}"
    aws_secret_key: "{{ secret_key }}"
    region: "{{ aws_region }}"
    image_id: "{{ item.ami_id }}"
    delete_snapshot: True
    state: absent
  with_items:
    - "{{ stale_amis.results }}"
A snippet of one of the returned results:
"results": [
    {
        "ami_id": "ami-zzzzzzz",
        "architecture": "x86_64",
        "block_device_mapping": {
            "/dev/xvda": {
                "delete_on_termination": true,
                "encrypted": false,
                "size": 200,
                "snapshot_id": "snap-xxxxxxxxxxxxx",
                "volume_type": "gp2"
            }
        },
        "creationDate": "2017-08-01T15:26:11.000Z",
        "description": "Weekly backup via Ansible",
        "hypervisor": "xen",
        "is_public": false,
        "location": "111111111111/server-01.example.com-20170801152611Z",
        "name": "server-01.example.com-20170801152611Z",
        "owner_id": "111111111111",
        "platform": null,
        "root_device_name": "/dev/xvda",
        "root_device_type": "ebs",
        "state": "available",
        "tags": {
            "Name": "server-01-weekly-20170801152611Z",
            "Type": "weekly"
        },
        "virtualization_type": "hvm"
    },
I doubt your attempt:
with_items:
  - "{{ stale_amis.results }}"
because ec2_ami_find puts its results into its own results field. So the first AMI for the first server will be stale_amis.results[0].results[0].ami_id.
I advise reducing the original stale_amis to just the required list and looping over that. For example, you can use the json_query filter:
- ec2_ami:
    aws_access_key: "{{ access_key }}"
    aws_secret_key: "{{ secret_key }}"
    region: "{{ aws_region }}"
    image_id: "{{ item }}"
    delete_snapshot: True
    state: absent
  with_items: "{{ stale_amis | json_query('results[].results[].ami_id') }}"
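The json_query filter evaluates a JMESPath expression; results[].results[] flattens the outer per-server loop and the inner per-AMI list into one list of IDs. A pure-Python sketch of the same flattening (sample AMI IDs invented), which also shows why servers with empty results are handled gracefully:

```python
def stale_ami_ids(stale_amis):
    # equivalent of json_query('results[].results[].ami_id')
    return [ami["ami_id"]
            for server in stale_amis["results"]
            for ami in server["results"]]

# sample register shaped like the ec2_ami_find output (IDs invented)
stale_amis = {"results": [
    {"results": [{"ami_id": "ami-111"}, {"ami_id": "ami-222"}]},
    {"results": []},  # a server with no old AMIs contributes nothing
    {"results": [{"ami_id": "ami-333"}]},
]}
print(stale_ami_ids(stale_amis))
# ['ami-111', 'ami-222', 'ami-333']
```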

Ansible instance not appearing on AWS console

So I'm using Ansible on my MBP to try to create a key pair and create/provision EC2 instances. The playbook runs fine with no errors, but when I check the AWS console there is no new key and no new instance... A ping to the supposedly created public IP times out, so I am thinking something failed. Ansible definitely hit AWS, since if I disable the AWS access key then Ansible errors out; and since not using the Ansible-created key in the second task also fails, a key must have been created, just not uploaded to AWS?
Can you spot anything I did wrong?
Playbook yaml content:
- name: Create a sandbox instance
  hosts: localhost
  gather_facts: False
  vars:
    instance_type: t2.micro
    image: ami-d1315fb1
    region: us-west-1
  tasks:
    - name: Generate key
      ec2_key:
        name: ansible_key
        region: "{{ region }}"
        aws_access_key: #my_key
        aws_secret_key: #my_key
        state: present

    - name: Launch instance
      ec2:
        key_name: ansible_key
        group: default
        instance_type: "{{ instance_type }}"
        image: "{{ image }}"
        wait: true
        region: "{{ region }}"
        aws_access_key: #my_key
        aws_secret_key: #my_key
      register: ec2

    - name: Print all ec2 variables
      debug: var=ec2
The playbook runs fine, with the output being:
PLAY [Create a sandbox instance] ***********************************************
TASK [Generate key] ************************************************************
ok: [localhost]
TASK [Launch instance] *********************************************************
changed: [localhost]
TASK [Print all ec2 variables] *************************************************
ok: [localhost] => {
    "ec2": {
        "changed": true,
        "instance_ids": [
            "i-0898f09f8d3798961"
        ],
        "instances": [
            {
                "ami_launch_index": "0",
                "architecture": "x86_64",
                "block_device_mapping": {
                    "/dev/sda1": {
                        "delete_on_termination": true,
                        "status": "attached",
                        "volume_id": "vol-04e9c4c4f5d85e60d"
                    }
                },
                "dns_name": "ec2-54-215-253-115.us-west1.compute.amazonaws.com",
                "ebs_optimized": false,
                "groups": {
                    "sg-778b5711": "default"
                },
                "hypervisor": "xen",
                "id": "i-0898f09f8d3798961",
                "image_id": "ami-d1315fb1",
                "instance_type": "t2.micro",
                "kernel": null,
                "key_name": "ansible_key",
                "launch_time": "2017-08-16T16:57:09.000Z",
                "placement": "us-west-1b",
                "private_dns_name": "ip-172-31-29-166.us-west1.compute.internal",
                "private_ip": "172.31.29.166",
                "public_dns_name": "ec2-54-215-253-115.us-west1.compute.amazonaws.com",
                "public_ip": "54.215.253.115",
                "ramdisk": null,
                "region": "us-west-1",
                "root_device_name": "/dev/sda1",
                "root_device_type": "ebs",
                "state": "running",
                "state_code": 16,
                "tags": {},
                "tenancy": "default",
                "virtualization_type": "hvm"
            }
        ],
        "tagged_instances": []
    }
}
PLAY RECAP *********************************************************************
localhost : ok=3 changed=1 unreachable=0 failed=0
Here are a few things to check:
- Make sure you have selected the N. California (us-west-1) region in the console.
- To store the private part of the key inside .ssh under your username, do the following steps:
- name: Create an EC2 key
  ec2_key:
    name: "ansible_key"
    region: "us-west-1"
    aws_access_key: #my_key
    aws_secret_key: #my_key
  register: ec2_key

- name: save private key
  copy:
    content: "{{ ec2_key.key.private_key }}"
    dest: "/Users/{{ lookup('env', 'USER') }}/.ssh/aws-private.pem"
    mode: 0600
  when: ec2_key.changed
Note: Run this playbook from scratch to create the new key and save it into your ~/.ssh directory.
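Once the key is saved, later plays can point at it. A sketch (the launched group name and the ec2 register from a launch task are assumptions, not part of the answer above):

```yaml
- name: Add the new instance to an in-memory group that uses the saved key
  add_host:
    hostname: "{{ ec2.instances[0].public_ip }}"  # assumes a registered 'ec2' launch result
    groupname: launched
    ansible_ssh_private_key_file: "/Users/{{ lookup('env', 'USER') }}/.ssh/aws-private.pem"
```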

Creating n new instances in AWS EC2 VPC and then configuring them

I'm having a really hard time doing what seems like a fairly standard task, so I'm hoping somebody can help me. I've googled this like crazy and most of the examples are not in a VPC or use deprecated structure that makes them wrong or unusable in my use case.
Here are my goals:
I want to launch a whole mess of new instances in my VPC (the code below has 3 but it could be a hundred)
I want to wait for those instances to come alive
I then want to configure those instances (ssh into them, change hostname, enable some services, etc.)
Now I could probably do this in 2 tasks. I could create the instances in one playbook, wait for them to settle down, then run a second playbook to configure them. That's probably what I'm going to do now because I want to get moving, but there has to be a one-shot answer to this.
Here's what I have so far for a playbook
---
- hosts: localhost
  connection: local
  gather_facts: False
  tasks:
    - name: Provision Lunch
      with_items:
        - hostname: eggroll1
        - hostname: eggroll2
        - hostname: eggroll3
      ec2:
        region: us-east-1
        key_name: eggfooyong
        vpc_subnet_id: subnet-8675309
        instance_type: t2.micro
        image: ami-8675309
        wait: true
        group_id: sg-8675309
        exact_count: 1
        count_tag:
          Name: "{{ item.hostname }}"
        instance_tags:
          Name: "{{ item.hostname }}"
          role: "supper"
          ansibleowned: "True"
      register: ec2

    - name: Wait for SSH to come up
      wait_for: host={{ item.private_ip }} port=22 delay=60 timeout=900 state=started
      with_items: '{{ ec2.instances }}'

    - name: Update hostname on instances
      hostname: name={{ item.private_ip }}
      with_items: '{{ ec2.instances }}'
And that doesn't work. What I get is:
TASK [Wait for SSH to come up] *************************************************
[DEPRECATION WARNING]: Skipping task due to undefined Error, in the future this will be a fatal error.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting
deprecation_warnings=False in ansible.cfg.
TASK [Update hostname on instances] ********************************************
[DEPRECATION WARNING]: Skipping task due to undefined Error, in the future this will be a fatal error.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting
deprecation_warnings=False in ansible.cfg.
Which makes me sad. This is my latest incarnation of that playbook, but I've tried rewriting it using every example I can find on the internet. Most of them have with_items written in a different way, but Ansible tells me that way is deprecated, and then fails.
So far ansible has been fun and easy, but this is making me want to toss my laptop across the street.
Any suggestions? Should I be using register and with_items at all? Would I be better off using something like this:
add_host: hostname={{item.public_ip}} groupname=deploy
instead? I'm wide open to a rewrite here. I'm going to go write this up in 2 playbooks and would love to get suggestions.
Thanks!
****EDIT****
Now it's just starting to feel broken or seriously changed. I've googled dozens of examples and they all are written the same way and they all fail with the same error. This is my simple playbook now:
---
- hosts: localhost
  connection: local
  gather_facts: False
  vars:
    builderstart: 93
    builderend: 94
  tasks:
    - name: Provision Lunch
      ec2:
        region: us-east-1
        key_name: dakey
        vpc_subnet_id: subnet-8675309
        instance_type: t2.micro
        image: ami-8675309
        wait: True
        group_id: sg-OU812
        exact_count: 1
        count_tag:
          Name: "{{ item }}"
        instance_tags:
          Name: "{{ item }}"
          role: "dostuff"
          extracheese: "True"
      register: ec2
      with_sequence: start="{{ builderstart }}" end="{{ builderend }}" format=builder%03d

    - name: the newies
      debug: msg="{{ item }}"
      with_items: "{{ ec2.instances }}"
It really couldn't be more straightforward. No matter how I write it, no matter how I vary it, I get the same basic error:
[DEPRECATION WARNING]: Skipping task due to undefined Error, in the
future this will be a fatal error.: 'dict object' has no attribute
'instances'.
So it looks like it's the with_items: "{{ ec2.instances }}" line that's causing the error.
I've used debug to print out ec2 and that error looks accurate. It looks like the structure changed to me. It looks like ec2 now contains a dictionary with results as a key to another dictionary object and that instances is a key in that dictionary. But I can't find a sane way to access the data.
For what it's worth, I've tried accessing this in 2.0.1, 2.0.2, and 2.2 and I get the same problem in every case.
Are the rest of you using 1.9 or something? I can't find an example anywhere that works. It's very frustrating.
Thanks again for any help.
Don't do it like this:
- name: Provision Lunch
  with_items:
    - hostname: eggroll1
    - hostname: eggroll2
    - hostname: eggroll3
  ec2:
    region: us-east-1
Because by using it like that, you flush all the instance info from ec2 out of your item.
You receive the following output:
TASK [Launch instance] *********************************************************
changed: [localhost] => (item={u'hostname': u'eggroll1'})
changed: [localhost] => (item={u'hostname': u'eggroll2'})
but item should be like this:
changed: [localhost] => (item={u'kernel': None, u'root_device_type': u'ebs', u'private_dns_name': u'ip-172-31-29-85.ec2.internal', u'public_ip': u'54.208.138.217', u'private_ip': u'172.31.29.85', u'id': u'i-003b63636e7ffc27c', u'ebs_optimized': False, u'state': u'running', u'virtualization_type': u'hvm', u'architecture': u'x86_64', u'ramdisk': None, u'block_device_mapping': {u'/dev/sda1': {u'status': u'attached', u'delete_on_termination': True, u'volume_id': u'vol-37581295'}}, u'key_name': u'eggfooyong', u'image_id': u'ami-fce3c696', u'tenancy': u'default', u'groups': {u'sg-aabbcc34': u'ssh'}, u'public_dns_name': u'ec2-54-208-138-217.compute-1.amazonaws.com', u'state_code': 16, u'tags': {u'ansibleowned': u'True', u'role': u'supper'}, u'placement': u'us-east-1d', u'ami_launch_index': u'1', u'dns_name': u'ec2-54-208-138-217.compute-1.amazonaws.com', u'region': u'us-east-1', u'launch_time': u'2016-04-19T08:19:16.000Z', u'instance_type': u't2.micro', u'root_device_name': u'/dev/sda1', u'hypervisor': u'xen'})
Try to use following code
- name: Create a sandbox instance
  hosts: localhost
  gather_facts: False
  vars:
    keypair: eggfooyong
    instance_type: t2.micro
    security_group: ssh
    image: ami-8675309
    region: us-east-1
    subnet: subnet-8675309
    instance_names:
      - eggroll1
      - eggroll2
  tasks:
    - name: Launch instance
      ec2:
        key_name: "{{ keypair }}"
        group: "{{ security_group }}"
        instance_type: "{{ instance_type }}"
        image: "{{ image }}"
        wait: true
        region: "{{ region }}"
        vpc_subnet_id: "{{ subnet }}"
        assign_public_ip: no
        count: "{{ instance_names | length }}"
      register: ec2

    - name: tag instances
      ec2_tag:
        resource: '{{ item.0.id }}'
        region: '{{ region }}'
        tags:
          Name: '{{ item.1 }}'
          role: "supper"
          ansibleowned: "True"
      with_together:
        - '{{ ec2.instances }}'
        - '{{ instance_names }}'

    - name: Wait for SSH to come up
      wait_for: host={{ item.private_ip }} port=22 delay=60 timeout=320 state=started
      with_items: '{{ ec2.instances }}'
This assumes that your Ansible host is located inside the VPC.
To achieve this goal, I have written a really small filter plugin get_ec2_info.
Create a directory named filter_plugins
Create a plugin file get_ec2_info.py with the following content:
from jinja2.utils import soft_unicode

class FilterModule(object):
    def filters(self):
        return {
            'get_ec2_info': get_ec2_info,
        }

def get_ec2_info(list, ec2_key):
    ec2_info = []
    for item in list:
        for ec2 in item['instances']:
            ec2_info.append(ec2[ec2_key])
    return ec2_info
Then you can use this in your playbook:
---
- hosts: localhost
  connection: local
  gather_facts: False
  tasks:
    - name: Provision Lunch
      ec2:
        region: us-east-1
        key_name: eggfooyong
        vpc_subnet_id: subnet-8675309
        instance_type: t2.micro
        image: ami-8675309
        wait: true
        group_id: sg-8675309
        exact_count: 1
        count_tag:
          Name: "{{ item.hostname }}"
        instance_tags:
          Name: "{{ item.hostname }}"
          role: "supper"
          ansibleowned: "True"
      register: ec2
      with_items:
        - hostname: eggroll1
        - hostname: eggroll2
        - hostname: eggroll3

    - name: Create SSH Group to login dynamically to EC2 Instance(s)
      add_host:
        hostname: "{{ item }}"
        groupname: my_ec2_servers
      with_items: "{{ ec2.results | get_ec2_info('public_ip') }}"

    - name: Wait for SSH to come up on EC2 Instance(s)
      wait_for:
        host: "{{ item }}"
        port: 22
        state: started
      with_items: "{{ ec2.results | get_ec2_info('public_ip') }}"

# CALL THE DYNAMIC GROUP IN THE SAME PLAYBOOK
- hosts: my_ec2_servers
  become: yes
  remote_user: ubuntu
  gather_facts: yes
  tasks:
    - name: DO YOUR TASKS HERE
EXTRA INFORMATION:
using Ansible 2.0.1.0
assuming you are spinning up Ubuntu instances; if not, change the value in remote_user: ubuntu
assuming the SSH key is properly configured
Please consult these github repos for more help:
ansible-aws-role-1
ansible-aws-role-2
I think this would be helpful for debugging.
https://www.middlewareinventory.com/blog/ansible-dict-object-has-no-attribute-stdout-or-stderr-how-to-resolve/
The ec2 register is a dict type, and it has a key results.
The results key has many elements, including dicts and lists, like below:
{
    "msg": {
        "results": [
            {
                "invocation": {
                },
                "instances": [],
                "changed": false,
                "tagged_instances": [
                    {
                    }
                ],
                "instance_ids": null,
                "failed": false,
                "item": [
                ],
                "ansible_loop_var": "item"
            }
        ],
        "msg": "All items completed",
        "changed": false
    },
    "_ansible_verbose_always": true,
    "_ansible_no_log": false,
    "changed": false
}
So you can get the desired data using dot notation; for instance, item.changed, which holds the boolean value false.
- debug:
    msg: "{{ item.changed }}"
  loop: "{{ ec2.results }}"

ansible add host to route53

I am using Ansible to provision servers on EC2. After creating the server, I would like to create a host entry in a Route53 zone.
---
- hosts: all
  connection: local
  tasks:
    - name: create ec2 instance
      action:
        module: ec2
        zone: "{{ zone }}"
        image: "{{ image }}"
        instance_type: "{{ instance_type }}"
        region: "{{ region }}"
        vpc_subnet_id: "{{ subnet }}"
        group: "{{ security_group }}"
        key_name: "{{ sshkey }}"
        instance_tags:
          Name: "{{ inventory_hostname }}"
          Environment: "{{ Environment }}"
          Date: "{{ Date }}"
          Noderole: "{{ NodeRole }}"
          ConfigurationGroup: "{{ ConfigurationGroup }}"
          Backups: "{{ Backups }}"
        count_tag:
          Name: "{{ inventory_hostname }}"
        exact_count: 1

    - name: Ensure DNS entry exists
      action:
        module: route53
        command: create
        overwrite: "yes"
        record: "{{ inventory_hostname }}.{{ server_zone }}"
        type: A
        zone: "{{ server_zone }}"
        value: "{{ item.private_ip }}"
      with_items: "ec2.instances"
The attributes "inventory_hostname" and "server_zone" are defined in the inventory files for the hosts, so they work when the EC2 instance is created.
[kshk:~/testing/ansible-ec2] master* ± ansible-playbook -i inventory/development/devcm_q/inventory.ini create-ec2-instance.yml --limit dcm-jmp-09 -v
PLAY [all] ********************************************************************
GATHERING FACTS ***************************************************************
ok: [dcm-jmp-09]
TASK: [create ec2 instance] ***************************************************
changed: [dcm-jmp-09] => {"changed": true, "instance_ids": ["i-7c9e89f1"], "instances": [{"ami_launch_index": "0", "architecture": "x86_64", "dns_name": "", "ebs_optimized": false, "groups": {"sg-0bf7d96f": "dev-jumpbox"}, "hypervisor": "xen", "id": "i-7c9e89f1", "image_id": "ami-33734044", "instance_type": "t2.micro", "kernel": null, "key_name": "bootstrap", "launch_time": "2016-02-21T04:28:38.000Z", "placement": "eu-west-1c", "private_dns_name": "ip-172-31-8-55.eu-west-1.compute.internal", "private_ip": "172.31.8.55", "public_dns_name": "", "public_ip": null, "ramdisk": null, "region": "eu-west-1", "root_device_name": "/dev/sda1", "root_device_type": "ebs", "state": "pending", "state_code": 0, "tags": {}, "tenancy": "default", "virtualization_type": "hvm"}], "tagged_instances": [{"ami_launch_index": "0", "architecture": "x86_64", "dns_name": "", "ebs_optimized": false, "groups": {"sg-0bf7d96f": "dev-jumpbox"}, "hypervisor": "xen", "id": "i-7c9e89f1", "image_id": "ami-33734044", "instance_type": "t2.micro", "kernel": null, "key_name": "bootstrap", "launch_time": "2016-02-21T04:28:38.000Z", "placement": "eu-west-1c", "private_dns_name": "ip-172-31-8-55.eu-west-1.compute.internal", "private_ip": "172.31.8.55", "public_dns_name": "", "public_ip": null, "ramdisk": null, "region": "eu-west-1", "root_device_name": "/dev/sda1", "root_device_type": "ebs", "state": "pending", "state_code": 0, "tags": {}, "tenancy": "default", "virtualization_type": "hvm"}]}
TASK: [Ensure DNS entry exists] ***********************************************
fatal: [dcm-jmp-09] => One or more undefined variables: 'unicode object' has no attribute 'private_ip'
FATAL: all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit #/home/kshk/create-ec2-instance.retry
dcm-jmp-09
However, when the playbook is run, it throws up the error "no attribute 'private_ip'".
Any ideas?
You are not registering ec2. How do you expect ec2.instances to contain private_ip?
- name: create ec2 instance
  action:
    module: ec2
    zone: "{{ zone }}"
    .....
    exact_count: 1
  register: ec2

- name: Ensure DNS entry exists
  action:
    module: route53
    ....
    zone: "{{ server_zone }}"
    value: "{{ item.private_ip }}"
  with_items: ec2.instances
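Putting the correction together, a minimal sketch of the full pair of tasks might look like this (the module parameters are copied from the question; this is illustrative, not tested against a live account):

```yaml
- name: create ec2 instance
  ec2:
    image: "{{ image }}"
    instance_type: "{{ instance_type }}"
    region: "{{ region }}"
    vpc_subnet_id: "{{ subnet }}"
    group: "{{ security_group }}"
    key_name: "{{ sshkey }}"
    instance_tags:
      Name: "{{ inventory_hostname }}"
    count_tag:
      Name: "{{ inventory_hostname }}"
    exact_count: 1
  register: ec2  # without this, ec2.instances is undefined

- name: Ensure DNS entry exists
  route53:
    command: create
    overwrite: "yes"
    record: "{{ inventory_hostname }}.{{ server_zone }}"
    type: A
    zone: "{{ server_zone }}"
    value: "{{ item.private_ip }}"
  with_items: "{{ ec2.instances }}"  # templated, not a bare quoted string
```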