Ansible and AWS Subnets - amazon-web-services

I am relatively new to working with Ansible Core / Tower, and I am at a complete loss as to what is causing the following issues. I have spent the past two days reading everything I could find on the topic and I am still stuck, so I am looking for help.
Here is what I currently have set up (among other Ansible playbooks, roles, and tasks that create a brand new VPC).
Below are the tasks that I am using to create a set of new subnets, one per availability zone, and to get the results back from what is created. These tasks all work perfectly, as verified through the AWS Console.
### Create the Internet-facing DMZ subnets ###
- name: Create Subnet(s) in VPC - DMZ
  ec2_vpc_subnet:
    state: present
    vpc_id: "{{ new_vpc_info['vpcs'][0]['id'] }}"
    region: "{{ vpc_region }}"
    az: "{{ item.az }}"
    cidr: "{{ item.subnet }}"
    resource_tags:
      Name: "{{ item.name }}"
      Role: "{{ role_tag }}"
      Team: "{{ team_tag }}"
      Product Area: "{{ product_area_tag }}"
      Portfolio: "{{ portfolio_tag }}"
  with_items: "{{ dmz_subnet_az }}"

- name: Get Subnet Info - DMZ
  ec2_vpc_subnet_facts:
    region: "{{ vpc_region }}"
    filters:
      "tag:Name": "{{ item.name }}"
  with_items: "{{ dmz_subnet_az }}"
  register: new_dmz_subnets

- debug:
    var: new_dmz_subnets
The output of the "debug" command is provided below, truncated to remove the rest of the subnets and redacted so I do not get yelled at; it matches up with what is in the AWS Console.
{
    "changed": false,
    "_ansible_verbose_always": true,
    "new_dmz_subnets": {
        "msg": "All items completed",
        "changed": false,
        "results": [
            {
                "_ansible_parsed": true,
                "subnets": [
                    {
                        "tags": {
                            "Product Area": "Engineering Tools",
                            "Portfolio": "Shared Platform and Operations",
                            "Role": "splunk-proof-of-concept",
                            "Name": "DMZ_Subnet_A",
                            "Team": "Engineering Tools"
                        },
                        "subnet_id": "subnet-XXXX",
                        "assign_ipv6_address_on_creation": false,
                        "default_for_az": false,
                        "state": "available",
                        "ipv6_cidr_block_association_set": [],
                        "availability_zone": "us-east-1a",
                        "vpc_id": "vpc-XXXX",
                        "cidr_block": "x.x.x.x/24",
                        "available_ip_address_count": 251,
                        "id": "subnet-XXXX",
                        "map_public_ip_on_launch": false
                    }
                ],
                "changed": false,
                "_ansible_item_label": {
                    "subnet": "x.x.x.x/24",
                    "az": "us-east-1a",
                    "name": "DMZ_Subnet_A"
                },
                "item": {
                    "subnet": "x.x.x.x/24",
                    "az": "us-east-1a",
                    "name": "DMZ_Subnet_A"
                },
                "_ansible_item_result": true,
                "failed": false,
                "invocation": {
                    "module_args": {
                        "profile": null,
                        "aws_secret_key": null,
                        "aws_access_key": null,
                        "security_token": null,
                        "region": "us-east-1",
                        "filters": {
                            "tag:Name": "DMZ_Subnet_A"
                        },
                        "ec2_url": null,
                        "subnet_ids": [],
                        "validate_certs": true
                    }
                },
                "_ansible_ignore_errors": null,
                "_ansible_no_log": false
            },
            {
                "_ansible_parsed": true,
                "subnets": [
                    {
                        "tags": {
                            "Product Area": "Engineering Tools",
                            "Portfolio": "Shared Platform and Operations",
                            "Role": "splunk-proof-of-concept",
                            "Name": "DMZ_Subnet_B",
                            "Team": "Engineering Tools"
                        },
                        "subnet_id": "subnet-XXXX",
                        "assign_ipv6_address_on_creation": false,
                        "default_for_az": false,
                        "state": "available",
                        "ipv6_cidr_block_association_set": [],
                        "availability_zone": "us-east-1b",
                        "vpc_id": "vpc-XXXX",
                        "cidr_block": "x.x.x.x/24",
                        "available_ip_address_count": 251,
                        "id": "subnet-XXXX",
                        "map_public_ip_on_launch": false
                    }
                ],
                "changed": false,
                "_ansible_item_label": {
                    "subnet": "x.x.x.x/24",
                    "az": "us-east-1b",
                    "name": "DMZ_Subnet_B"
                },
                "item": {
                    "subnet": "x.x.x.x/24",
                    "az": "us-east-1b",
                    "name": "DMZ_Subnet_B"
                },
                "_ansible_item_result": true,
                "failed": false,
                "invocation": {
                    "module_args": {
                        "profile": null,
                        "aws_secret_key": null,
                        "aws_access_key": null,
                        "security_token": null,
                        "region": "us-east-1",
                        "filters": {
                            "tag:Name": "DMZ_Subnet_B"
                        },
                        "ec2_url": null,
                        "subnet_ids": [],
                        "validate_certs": true
                    }
                },
                "_ansible_ignore_errors": null,
                "_ansible_no_log": false
            },
            ......
        ]
    },
    "_ansible_no_log": false
}
Now onto the tasks that I am having trouble getting to work. Below is my most recent attempt, which may be completely in left field because I have been trying everything I could find. I am attempting to get a list of the "subnet_id" values from the registered "new_dmz_subnets" variable, concatenate each with a "name" that is set in a vars file, and finally use that information to create a NAT Gateway within each of the subnets.
### Create the NAT Gateway in VPC ###
- name: Set DMZ Subnet facts
  set_fact:
    subnet_id_items:
      subnet_id: '{{ item.subnets | map(attribute="subnet_id") | list }}'
  with_items: "{{ new_dmz_subnets }}"
  register: subnet_id_list

- name: Set Name and DMZ Subnet loop facts
  set_fact:
    name_subnet_items:
      name: "{{ nat_gateway.name }}"
      subnet_id: "{{ item.subnet_id }}"
  loop: "{{ subnet_id_list }}"
  register: name_subnet_list

- debug:
    var: name_subnet_list

- name: Create NAT Gateway, allocate new EIP, in VPC
  ec2_vpc_nat_gateway:
    state: present
    subnet_id: "{{ item.subnet_id }}"
    region: "{{ vpc_region }}"
    wait: yes
    if_exist_do_not_create: true
    tags:
      Name: "{{ item.name }}"
      Role: "{{ role_tag }}"
      Team: "{{ team_tag }}"
      Product Area: "{{ product_area_tag }}"
      Portfolio: "{{ portfolio_tag }}"
  with_items: "{{ name_subnet_list }}"
  register: new_nat_gateway

- debug:
    var: new_nat_gateway
When I ran this setup, I got the following fatal error message, which is pretty much the same across every variation I have attempted.
fatal: [localhost]: FAILED! => {
    "msg": "The task includes an option with an undefined variable. The error was: 'ansible.utils.unsafe_proxy.AnsibleUnsafeText object' has no attribute 'subnets'\n\nThe error appears to have been in '/var/lib/awx/projects/_6__erik_andresen_git/ansible/splunk_poc_playbook/roles/create_networking_role/tasks/create_gateways_task.yml': line 21, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n### Starting working on this Task ###\n- name: Set DMZ Subnet facts\n  ^ here\n"
}
Please let me know if I can provide any additional details and thanks in advance for the help!!!
-- Erik

I came across a setup that actually works! It may not be the best way to do it, and I am still open to suggestions, but it at least works.
Here is the code using the "ec2_vpc_subnet" module and collecting the Subnet IDs for later use in the playbook.
### Create the Internet-facing DMZ subnets ###
- name: Create Subnet(s) in VPC - DMZ
  ec2_vpc_subnet:
    state: present
    vpc_id: "{{ vpc_id }}"
    region: "{{ vpc_region }}"
    az: "{{ item.az }}"
    cidr: "{{ item.subnet }}"
    resource_tags:
      Name: "{{ item.name }}"
      Role: "{{ role_tag }}"
      Team: "{{ team_tag }}"
      Product Area: "{{ product_area_tag }}"
      Portfolio: "{{ portfolio_tag }}"
      Created By: "{{ created_by }}"
  with_items: "{{ dmz_subnet_az }}"
  register: new_dmz_subnets

- name: Set facts for Subnet - DMZ
  set_fact:
    subnet_dmz_id: "{{ subnet_dmz_id | default({}) | combine({ item.subnet.tags.Name: item.subnet.id }) }}"
  loop: "{{ new_dmz_subnets.results }}"

- debug:
    var: subnet_dmz_id
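With the (redacted) values from above, subnet_dmz_id comes out as a simple name-to-ID map, roughly:

subnet_dmz_id:
  DMZ_Subnet_A: subnet-XXXX
  DMZ_Subnet_B: subnet-XXXX

which is what makes the lookup by tag name in the next set of tasks possible.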
And here is the use of the Subnet IDs in the "ec2_vpc_nat_gateway" module to create a NAT Gateway within each Availability Zone.
### Create the NAT Gateway in VPC ###
- name: Create NAT Gateway, allocate new EIP, in VPC
  ec2_vpc_nat_gateway:
    state: present
    # NAT Gateways being deployed in DMZ subnets
    subnet_id: "{{ subnet_dmz_id[item.subnet_name] }}"
    region: "{{ vpc_region }}"
    wait: yes
    if_exist_do_not_create: true
    # Tags not supported in the "ec2_vpc_nat_gateway" module
    # https://github.com/ansible/ansible/issues/44339
    #tags:
    #  Name: "{{ item.name }}"
    #  Role: "{{ role_tag }}"
    #  Team: "{{ team_tag }}"
    #  Product Area: "{{ product_area_tag }}"
    #  Portfolio: "{{ portfolio_tag }}"
    #  Created By: "{{ created_by }}"
  with_items: "{{ nat_gateway }}"
  register: new_nat_gateway

- debug:
    var: new_nat_gateway
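For reference, the loop above assumes a nat_gateway list in the vars file that pairs each gateway with the tag name of its target subnet; the actual variable is not shown in the post, so the values here are illustrative:

nat_gateway:
  - name: NAT_Gateway_A        # illustrative value, not from the post
    subnet_name: DMZ_Subnet_A
  - name: NAT_Gateway_B        # illustrative value, not from the post
    subnet_name: DMZ_Subnet_B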

Related

Ansible AWS create and manipulate EC2

Hello, I use this playbook to create an EC2 instance.
I used this documentation to write my playbook:
https://docs.ansible.com/ansible/latest/collections/amazon/aws/ec2_module.html#parameter-instance_tags
### Example to launch an instance ###
### ansible-playbook --extra-vars "hosts=ansible-test" ###
- hosts: localhost
  ## Init variables ##
  vars:
    keypair: "AWS-KEYS-WEBSERVER001"
    instance_type: t2.micro
    hosts: "{{ hosts }}"
    groups: "Web"
    image: "ami-0ea4a063871686f37"
  tasks:
    - name: startup new instance
      community.aws.ec2_instance:
        key_name: "{{ keypair }}"
        security_group: "Web"
        instance_type: "{{ instance_type }}"
        name: "fermela"
        image_id: "{{ image }}"
        wait: true
        region: "eu-west-3"
        network:
          assign_public_ip: true
        vpc_subnet_id: "subnet-0c82e6027833af6cc"
      register: ec2
    - debug:
        var: ec2
My playbook works, and the debug gives me output like:
ok: [localhost] => {
    "ec2": {
        "changed": false,
        "changes": [],
        "failed": false,
        "instance_ids": [
            "i-0ecb077aafd7fda1c"
        ],
        "instances": [
            {
                "ami_launch_index": 0,
                "architecture": "x86_64",
                "block_device_mappings": [
                    {
                        "device_name": "/dev/xvda",
                        "ebs": {
                            "attach_time": "2021-02-23T00:34:53+00:00",
                            "delete_on_termination": true,
                            "status": "attached",
                            "volume_id": "vol-0ad6a503c6cfa7f97"
                        }
                    }
                ],
I would like to manipulate this output to display only the public IP. Can someone help me, please?
I already tried debug with var: ec2.public_ip but it doesn't work.
ec2_instance returns a list of instances; in this case you have just one instance. Try as below:
- debug:
    var: ec2.instances[0].public_ip_address
You can't do this, as in your output there is no public IP address.
Try this: use ansible_facts after registering:
- debug:
    var: ansible_facts['ansible_ec2_public_ipv4']
Only the variables that are shown in the debug output after registering can be accessed.
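If the instance does end up with a public address, one way to pull out just the addresses from the registered variable is a map over the instances list (a minimal sketch; public_ip_address is only present once AWS has actually assigned the address):

- debug:
    msg: "{{ ec2.instances | map(attribute='public_ip_address') | list }}"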

Add one element to a list of dictionary ansible / jinja

I'm trying to add an element to a list of dictionaries.
_gitlab_runner_config:
  server:
    url: "https://gitlab.mydomain.com"
    api_token: "XXXXXXXXXXXXXXXXXX"
    registration_token: "YYYYYYYYYYYYYYYY"
  global:
    listen_address: ":9200"
    concurent: 5
    check_interval: 15
  session_server:
    listen_address: "0.0.0.0:8093"
    advertise_address: "{{ ansible_fqdn }}:8093"
    session_timeout: 600
  runners:
    - description: "Test runner 1"
      token: ""
      tags:
        - test1
      locked: False
      active: False
      run_untagged: False
      access_level: "not_protected"
      maximum_timeout: "3600"
      executor: "docker"
      executor_config:
        tls_verify: false
        image: "test-image"
        pull_policy: "always"
        volumes:
        cpus:
In another task, I register the token value. So, I want to set the value of _gitlab_runner_config.runners.LIST_INDEX.token.
I have tried:
- name: "Save runner token"
set_fact:
_gitlab_runner_config: "{{ _gitlab_runner_config|combine({'runners': {runner_index: {'token': _gitlab_server_registered.runner.token}}} ) }}"
but it override the list.
_gitlab_runner_config.runners is a list, which implies there might be more items in it. If all items in the list shall be updated with the same token, e.g. mytoken, the play below does the job:
vars:
  mytoken: token000
tasks:
  - set_fact:
      config_updated: "{{ {'runners': _gitlab_runner_config.runners|
                           map('combine', {'token': mytoken})|
                           list} }}"
  - set_fact:
      _gitlab_runner_config: "{{ _gitlab_runner_config|
                                 combine(config_updated) }}"
If there might be a different token for each item of the list, the list must be updated in a loop. For example, given the list of tokens mytokens, the play below
vars:
  mytokens:
    - {'token': 'token000'}
    - {'token': 'token001'}
    - {'token': 'token002'}
tasks:
  - set_fact:
      runners: "{{ runners|default([]) +
                   [item|combine(mytokens[ansible_loop.index0])] }}"
    loop: "{{ _gitlab_runner_config.runners }}"
    loop_control:
      extended: yes
  - set_fact:
      config_updated: "{{ {'runners': runners} }}"
  - set_fact:
      _gitlab_runner_config: "{{ _gitlab_runner_config|
                                 combine(config_updated) }}"
  - debug:
      var: _gitlab_runner_config
gives
"_gitlab_runner_config": {
"global": {
"check_interval": 15,
"concurent": 5,
"listen_address": ":9200"
},
"runners": [
{
"access_level": "not_protected",
"active": false,
"description": "Test runner 1",
"executor": "docker",
"executor_config": {
"cpus": "",
"image": "test-image",
"pull_policy": "always",
"tls_verify": false,
"volumes": ""
},
"locked": false,
"maximum_timeout": "3600",
"run_untagged": false,
"tags": [
"test1"
],
"token": "token000"
}
],
"server": {
"api_token": "XXXXXXXXXXXXXXXXXX",
"registration_token": "YYYYYYYYYYYYYYYY",
"url": "https://gitlab.mydomain.com"
},
"session_server": {
"advertise_address": "srv.example.com:8093",
"listen_address": "0.0.0.0:8093",
"session_timeout": 600
}
}
Thanks for your help and answer, @vladimir-botka.
But my problem is more complex; my fault, I didn't give all the details in my last post.
I have a dict _gitlab_runner_config which contains a list of runners, _gitlab_runner_config.runners. I already loop over this list to register each runner, and I get a token in response (each runner will have a different token). I want to insert this token into the token field. The whole dict _gitlab_runner_config will be used for templating a config file.
The dict:
_gitlab_runner_config:
  server:
    url: "https://gitlab.mydomain.com"
    api_token: "XXXXXXXXXXXXXXXXXX"
    registration_token: "YYYYYYYYYYYYYYYY"
  global:
    listen_address: ":9200"
    concurent: 5
    check_interval: 15
  session_server:
    listen_address: "0.0.0.0:8093"
    advertise_address: "{{ ansible_fqdn }}:8093"
    session_timeout: 600
  runners:
    - description: "Test runner 1"
      token: ""
      tags:
        - test1
      locked: False
      active: False
      run_untagged: False
      access_level: "not_protected"
      maximum_timeout: "3600"
      executor: "docker"
      executor_config:
        tls_verify: false
        image: "test-image"
        pull_policy: "always"
        volumes:
        cpus:
    - description: "Test runner 2"
      token: ""
      tags:
        - test2
      locked: False
      active: False
      run_untagged: False
      access_level: "not_protected"
      maximum_timeout: "3600"
      executor: "docker"
      executor_config:
        tls_verify: false
        image: "test-image"
        pull_policy: "always"
        volumes:
        cpus:
The task which registers each runner with a loop:
- name: "Registered runners"
include_tasks: register.yml
loop: "{{ _gitlab_runner_config.runners }}"
loop_control:
index_var: runner_index
register.yml:
- name: "Register runner on gitlab server"
  gitlab_runner:
    api_url: "{{ _gitlab_runner_config.server.url }}"
    api_token: "{{ _gitlab_runner_config.server.api_token }}"
    registration_token: "{{ _gitlab_runner_config.server.registration_token }}"
    description: "[{{ ansible_fqdn }}] {{ item.description }}"
    state: "present"
    active: "{{ item.active }}"
    tag_list: "{{ item.tags }}"
    run_untagged: "{{ item.run_untagged }}"
    maximum_timeout: "{{ item.maximum_timeout }}"
    access_level: "{{ item.access_level }}"
    locked: "{{ item.locked }}"
    validate_certs: "no"
  register: _gitlab_server_registered

- name: Debug
  debug:
    msg: "Token to merge for runner id {{ runner_index }}: {{ _gitlab_server_registered.runner.token }}"

AWS AMI Cleanup w/Ansible iterate through results array

I have a previous task that creates weekly backups, labeling them with the server name followed by a date/time tag. The goal of this job is to go in behind it and clean up the old AMI backups, leaving only the last 3. The ec2_ami_find task works fine, but it could also return empty results for some servers and I'd like the deregister task to handle that.
The error I'm getting is pretty generic:
fatal: [127.0.0.1]: FAILED! => {
    "failed": true,
    "msg": "The conditional check 'item.ec2_ami_find.exists' failed. The error was: error while evaluating conditional (item.ec2_ami_find.exists): 'dict object' has no attribute 'ec2_ami_find'\n\nThe error appears to have been in '/root/ansible/ec2-backups-purge/roles/first_acct/tasks/main.yml': line 25, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Deregister old backups\n  ^ here\n"
}
The playbook task reads as follows:
---
- name: Find old backups
  tags: always
  ec2_ami_find:
    owner: self
    aws_access_key: "{{ access_key }}"
    aws_secret_key: "{{ secret_key }}"
    region: "{{ aws_region }}"
    ami_tags:
      Name: "{{ item }}-weekly-*"
    sort: name
    sort_order: descending
    sort_start: 3
  with_items:
    - server-01
    - server-02
    - server-win-01
    - downloads
  register: stale_amis

- name: Deregister old backups
  tags: always
  ec2_ami:
    aws_access_key: "{{ access_key }}"
    aws_secret_key: "{{ secret_key }}"
    region: "{{ aws_region }}"
    image_id: "{{ item.ami_id }}"
    delete_snapshot: True
    state: absent
  with_items:
    - "{{ stale_amis.results }}"
A snippet of one of the results returned:
"results": [
{
"ami_id": "ami-zzzzzzz",
"architecture": "x86_64",
"block_device_mapping": {
"/dev/xvda": {
"delete_on_termination": true,
"encrypted": false,
"size": 200,
"snapshot_id": "snap-xxxxxxxxxxxxx",
"volume_type": "gp2"
}
},
"creationDate": "2017-08-01T15:26:11.000Z",
"description": "Weekly backup via Ansible",
"hypervisor": "xen",
"is_public": false,
"location": "111111111111/server-01.example.com-20170801152611Z",
"name": "server-01.example.com-20170801152611Z",
"owner_id": "111111111111",
"platform": null,
"root_device_name": "/dev/xvda",
"root_device_type": "ebs",
"state": "available",
"tags": {
"Name": "server-01-weekly-20170801152611Z",
"Type": "weekly"
},
"virtualization_type": "hvm"
},
I doubt your attempt:

with_items:
  - "{{ stale_amis.results }}"

because ec2_ami_find puts its results into its own results field, so the first AMI for the first server will be stale_amis.results[0].results[0].ami_id.
I advise reducing the original stale_amis to the required list and looping over that. For example, you can use the json_query filter:
- ec2_ami:
    aws_access_key: "{{ access_key }}"
    aws_secret_key: "{{ secret_key }}"
    region: "{{ aws_region }}"
    image_id: "{{ item }}"
    delete_snapshot: True
    state: absent
  with_items: "{{ stale_amis | json_query('results[].results[].ami_id') }}"

Ansible instance not appearing on AWS console

So I'm using Ansible on my MBP to try to create a key pair and create/provision EC2 instances. The playbook runs fine with no errors, but when I check the AWS console there is no new key and no new instance... A ping to the supposedly created public IP times out, so I am thinking something failed. Ansible definitely hit AWS, since if I disable the AWS access key then Ansible errors out; and not using the Ansible-created key in the second task also fails, so a key must have been created, just not uploaded to AWS?
Can you spot anything I did wrong?
Playbook yaml content:
- name: Create a sandbox instance
  hosts: localhost
  gather_facts: False
  vars:
    instance_type: t2.micro
    image: ami-d1315fb1
    region: us-west-1
  tasks:
    - name: Generate key
      ec2_key:
        name: ansible_key
        region: "{{ region }}"
        aws_access_key: #my_key
        aws_secret_key: #my_key
        state: present
    - name: Launch instance
      ec2:
        key_name: ansible_key
        group: default
        instance_type: "{{ instance_type }}"
        image: "{{ image }}"
        wait: true
        region: "{{ region }}"
        aws_access_key: #my_key
        aws_secret_key: #my_key
      register: ec2
    - name: Print all ec2 variables
      debug:
        var: ec2
Playbook runs fine with output being:
PLAY [Create a sandbox instance] ***********************************************

TASK [Generate key] ************************************************************
ok: [localhost]

TASK [Launch instance] *********************************************************
changed: [localhost]

TASK [Print all ec2 variables] *************************************************
ok: [localhost] => {
    "ec2": {
        "changed": true,
        "instance_ids": [
            "i-0898f09f8d3798961"
        ],
        "instances": [
            {
                "ami_launch_index": "0",
                "architecture": "x86_64",
                "block_device_mapping": {
                    "/dev/sda1": {
                        "delete_on_termination": true,
                        "status": "attached",
                        "volume_id": "vol-04e9c4c4f5d85e60d"
                    }
                },
                "dns_name": "ec2-54-215-253-115.us-west1.compute.amazonaws.com",
                "ebs_optimized": false,
                "groups": {
                    "sg-778b5711": "default"
                },
                "hypervisor": "xen",
                "id": "i-0898f09f8d3798961",
                "image_id": "ami-d1315fb1",
                "instance_type": "t2.micro",
                "kernel": null,
                "key_name": "ansible_key",
                "launch_time": "2017-08-16T16:57:09.000Z",
                "placement": "us-west-1b",
                "private_dns_name": "ip-172-31-29-166.us-west1.compute.internal",
                "private_ip": "172.31.29.166",
                "public_dns_name": "ec2-54-215-253-115.us-west1.compute.amazonaws.com",
                "public_ip": "54.215.253.115",
                "ramdisk": null,
                "region": "us-west-1",
                "root_device_name": "/dev/sda1",
                "root_device_type": "ebs",
                "state": "running",
                "state_code": 16,
                "tags": {},
                "tenancy": "default",
                "virtualization_type": "hvm"
            }
        ],
        "tagged_instances": []
    }
}

PLAY RECAP *********************************************************************
localhost                  : ok=3    changed=1    unreachable=0    failed=0
Here are a few things:
- First, make sure you have selected the N. California (us-west-1) region in the console.
- Second, to store the private part of the key inside .ssh under your username, do the following steps:
- name: Create an EC2 key
  ec2_key:
    name: "ansible_key"
    region: "us-west-1"
    aws_access_key: #my_key
    aws_secret_key: #my_key
  register: ec2_key

- name: save private key
  copy:
    content: "{{ ec2_key.key.private_key }}"
    dest: "/Users/{{ lookup('env', 'USER') }}/.ssh/aws-private.pem"
    mode: 0600
  when: ec2_key.changed
Note: Run this playbook from scratch to create a new key and save it into your ~/.ssh directory; the private key material is only returned when the key is first created, which is why the copy task is guarded by ec2_key.changed.
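Once the private key is saved, you can verify the key pair works by connecting to an instance launched with it; the login user depends on the AMI (ec2-user is typical for Amazon Linux), and the public IP below is the one from the output above:

ssh -i ~/.ssh/aws-private.pem ec2-user@54.215.253.115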

ansible add host to route53

I am using Ansible to provision servers on EC2. After creating the server, I would like to create a host entry in a Route 53 zone.
---
- hosts: all
  connection: local
  tasks:
    - name: create ec2 instance
      action:
        module: ec2
        zone: "{{ zone }}"
        image: "{{ image }}"
        instance_type: "{{ instance_type }}"
        region: "{{ region }}"
        vpc_subnet_id: "{{ subnet }}"
        group: "{{ security_group }}"
        key_name: "{{ sshkey }}"
        instance_tags:
          Name: "{{ inventory_hostname }}"
          Environment: "{{ Environment }}"
          Date: "{{ Date }}"
          Noderole: "{{ NodeRole }}"
          ConfigurationGroup: "{{ ConfigurationGroup }}"
          Backups: "{{ Backups }}"
        count_tag:
          Name: "{{ inventory_hostname }}"
        exact_count: 1
    - name: Ensure DNS entry exists
      action:
        module: route53
        command: create
        overwrite: "yes"
        record: "{{ inventory_hostname }}.{{ server_zone }}"
        type: A
        zone: "{{ server_zone }}"
        value: "{{ item.private_ip }}"
      with_items: "ec2.instances"
The attributes inventory_hostname and server_zone are defined in the inventory files for the hosts, so they work when the EC2 instance is created.
[kshk:~/testing/ansible-ec2] master* ± ansible-playbook -i inventory/development/devcm_q/inventory.ini create-ec2-instance.yml --limit dcm-jmp-09 -v
PLAY [all] ********************************************************************
GATHERING FACTS ***************************************************************
ok: [dcm-jmp-09]
TASK: [create ec2 instance] ***************************************************
changed: [dcm-jmp-09] => {"changed": true, "instance_ids": ["i-7c9e89f1"], "instances": [{"ami_launch_index": "0", "architecture": "x86_64", "dns_name": "", "ebs_optimized": false, "groups": {"sg-0bf7d96f": "dev-jumpbox"}, "hypervisor": "xen", "id": "i-7c9e89f1", "image_id": "ami-33734044", "instance_type": "t2.micro", "kernel": null, "key_name": "bootstrap", "launch_time": "2016-02-21T04:28:38.000Z", "placement": "eu-west-1c", "private_dns_name": "ip-172-31-8-55.eu-west-1.compute.internal", "private_ip": "172.31.8.55", "public_dns_name": "", "public_ip": null, "ramdisk": null, "region": "eu-west-1", "root_device_name": "/dev/sda1", "root_device_type": "ebs", "state": "pending", "state_code": 0, "tags": {}, "tenancy": "default", "virtualization_type": "hvm"}], "tagged_instances": [{"ami_launch_index": "0", "architecture": "x86_64", "dns_name": "", "ebs_optimized": false, "groups": {"sg-0bf7d96f": "dev-jumpbox"}, "hypervisor": "xen", "id": "i-7c9e89f1", "image_id": "ami-33734044", "instance_type": "t2.micro", "kernel": null, "key_name": "bootstrap", "launch_time": "2016-02-21T04:28:38.000Z", "placement": "eu-west-1c", "private_dns_name": "ip-172-31-8-55.eu-west-1.compute.internal", "private_ip": "172.31.8.55", "public_dns_name": "", "public_ip": null, "ramdisk": null, "region": "eu-west-1", "root_device_name": "/dev/sda1", "root_device_type": "ebs", "state": "pending", "state_code": 0, "tags": {}, "tenancy": "default", "virtualization_type": "hvm"}]}
TASK: [Ensure DNS entry exists] ***********************************************
fatal: [dcm-jmp-09] => One or more undefined variables: 'unicode object' has no attribute 'private_ip'
FATAL: all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit #/home/kshk/create-ec2-instance.retry
dcm-jmp-09
However, when the playbook is run, it throws the error "no attribute 'private_ip'".
Any ideas?
You are not registering ec2. How do you expect ec2.instances to contain private_ip?
- name: create ec2 instance
  action:
    module: ec2
    zone: "{{ zone }}"
    .....
    exact_count: 1
  register: ec2

- name: Ensure DNS entry exists
  action:
    module: route53
    ....
    zone: "{{ server_zone }}"
    value: "{{ item.private_ip }}"
  with_items: ec2.instances
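As a side note, bare variable names in with_items (like ec2.instances above) worked in older Ansible but were later deprecated and removed; on current versions, write the loop with full Jinja2 templating:

with_items: "{{ ec2.instances }}"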