Create EC2 instance with Ansible using AWS credentials - amazon-web-services

I followed these 3 guides:
http://docs.ansible.com/ansible/guide_aws.html
http://docs.ansible.com/ansible/ec2_module.html
https://gist.github.com/tristanfisher/e5a306144a637dc739e7
and I wrote this Ansible play
---
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - include_vars: aws_credentials.yml
    - name: Creating EC2 Ubuntu instance
      ec2:
        instance_type: t1.micro
        image: ami-86e0ffe7
        region: us-west-2
        key_name: my-aws-key
        zone: us-west-2a
        vpc_subnet_id: subnet-04199d61
        group_id: sg-cf6736aa
        assign_public_ip: yes
        count: 1
        wait: true
        volumes:
          - device_name: /dev/sda1
            volume_type: gp2
            volume_size: 10
        instance_tags:
          Name: ansible-test
          Project: test
          Ansible: manageable
      register: ec2
Then I run ansible-playbook create-ec2.yml -v --private-key ~/.ssh/my-key --vault-password-file ~/.password/to_ansible_vault
and I get this message:
PLAY [localhost] ***************************************************************
TASK [include_vars] ************************************************************
ok: [localhost] => {"ansible_facts": {"ec2_access_key": "decrypted_acces_key_XXXXX", "ec2_secret_key": "decrypted_secret_key_XXXXX"}, "changed": false}
TASK [Creating EC2 Ubuntu instance] ********************************************
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "No handler was ready to authenticate. 1 handlers were checked. ['HmacAuthV4Handler'] Check your credentials"}
NO MORE HOSTS LEFT *************************************************************
[WARNING]: Could not create retry file 'create-ec2.retry'. [Errno 2] No such file or directory: ''
PLAY RECAP *********************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1
When I ran ansible-vault view aws_credentials.yml --vault-password-file ~/.password/to_ansible_vault, I got the readable content of the encrypted aws_credentials.yml,
something like this:
---
ec2_access_key: "XXXXX"
ec2_secret_key: "XXXXX"
It also doesn't work when I use a plain (unencrypted) aws_credentials.yml. Only when I export my credentials as environment variables does it work without any failure.
Could somebody help me: how can I write a playbook that creates an EC2 instance with credentials stored in an encrypted file?

I think you should supply your keys directly to the ec2 module in this case.
Try this:
- name: Creating EC2 Ubuntu instance
  ec2:
    aws_access_key: "{{ ec2_access_key }}"
    aws_secret_key: "{{ ec2_secret_key }}"
    instance_type: t1.micro
    image: ami-86e0ffe7
    region: us-west-2
    ...
The module code suggests that it only checks the module's arguments and environment variables, not host variables.
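Another option is a minimal sketch like the following, assuming the vaulted aws_credentials.yml still defines ec2_access_key and ec2_secret_key: export them into the task's environment under the variable names boto already reads (AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY), so the secrets stay out of the module arguments but still come from the encrypted file.
- include_vars: aws_credentials.yml
- name: Creating EC2 Ubuntu instance (credentials passed via environment)
  ec2:
    instance_type: t1.micro
    image: ami-86e0ffe7
    region: us-west-2
    key_name: my-aws-key
    vpc_subnet_id: subnet-04199d61
    group_id: sg-cf6736aa
    assign_public_ip: yes
    count: 1
    wait: true
  environment:
    AWS_ACCESS_KEY_ID: "{{ ec2_access_key }}"        # boto picks these up
    AWS_SECRET_ACCESS_KEY: "{{ ec2_secret_key }}"
  register: ec2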

You can also export your AWS API keys as OS environment variables, e.g.:
export AWS_ACCESS_KEY=XXXXXXX
In that case, in the Ansible scenario you need to set:
- name: Creating EC2 Ubuntu instance
  ec2:
    aws_access_key: "{{ lookup('env', 'AWS_ACCESS_KEY') }}"
    aws_secret_key: "{{ lookup('env', 'AWS_SECRET_KEY') }}"
    instance_type: t1.micro
    image: ami-86e0ffe7
    region: us-west-2

Related

Create and set up GCP VMs with Ansible, ssh Permission denied (publickey)

Before executing the playbook I created a service account and gave it the "Compute Admin", "OS Login Admin" and "Service Account User" permissions. Then I downloaded the JSON key to my machine. The service account state is "active".
On my machine I wrote a playbook to set up one GCP VM, install Apache and copy a dummy webpage to it.
- name: Create Compute Engine instances
  hosts: localhost
  gather_facts: no
  vars:
    gcp_project: ansible-xxxxxx
    gcp_cred_kind: serviceaccount
    gcp_cred_file: ~/ansible-key.json
    zone: "us-central1-a"
    region: "us-central1"
    machine_type: "n1-standard-1"
    image: "projects/ubuntu-os-cloud/global/images/family/ubuntu-1604-lts"
  tasks:
    - name: Create an IP address for instance
      gcp_compute_address:
        name: "{{ zone }}-ip"
        region: "{{ region }}"
        project: "{{ gcp_project }}"
        service_account_file: "{{ gcp_cred_file }}"
        auth_kind: "{{ gcp_cred_kind }}"
      register: gce_ip
    - name: Bring up the instance in the zone.
      gcp_compute_instance:
        name: "{{ zone }}"
        machine_type: "{{ machine_type }}"
        disks:
          - auto_delete: true
            boot: true
            initialize_params:
              source_image: "{{ image }}"
        network_interfaces:
          - access_configs:
              - name: External NAT
                nat_ip: "{{ gce_ip }}"
                type: ONE_TO_ONE_NAT
        tags:
          items:
            - http-server
            - https-server
        zone: "{{ zone }}"
        project: "{{ gcp_project }}"
        service_account_file: "{{ gcp_cred_file }}"
        auth_kind: "{{ gcp_cred_kind }}"
      register: gce
...after instantiating the VM I connect to it via ssh...
  post_tasks:
    - name: Wait for SSH for instance
      wait_for: delay=5 sleep=5 host={{ gce_ip.address }} port=22 state=started timeout=100
    - name: Save host data for first zone
      add_host: hostname={{ gce_ip.address }} groupname=gce_instances_ips
The ansible-playbook run never gets past this step.
To call it I use ansible-playbook main.yaml --user sa_123456789, and the given error is either a
fatal: [130.211.225.130]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: sa_104318085248975873144@130.211.225.130: Permission denied (publickey).", "unreachable": true}
or a simple timeout
fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 105, "msg": "Timeout when waiting for 130.211.225.130:22"}
In the metadata of GCE I also set enable-oslogin to TRUE.
The VM is created without any problem and is accessible through the GCP console (GUI). If I try to access it via ssh with privately generated keys, the machine seems to be unreachable.
Does anyone have experience with this type of error?
This error usually occurs when there is no valid public/private key pair generated and set up.
Try any of the following approaches:
Create/edit the ansible.cfg file in your playbook directory and add a line with the full path of your private key:
[defaults]
private_key_file = /Users/username/.ssh/private_key
This sets the private key globally for all hosts in your playbook.
Add the private key to your playbook using the following lines:
vars:
  ansible_ssh_private_key_file: "/home/ansible/.ssh/id_rsa"
You can also define the private key to use directly in command line:
ansible-playbook -vvvv --private-key=/Users/you/.ssh/your_key playbookname.yml
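If you go the add_host route from the question, you can also attach the SSH user and key directly to the dynamically added host. A sketch of that task is below; the user name is taken from the question's command line and the key path is an assumption, so adjust both to your setup:
- name: Save host data for first zone
  add_host:
    hostname: "{{ gce_ip.address }}"
    groupname: gce_instances_ips
    ansible_user: sa_123456789                                   # assumed OS Login user
    ansible_ssh_private_key_file: ~/.ssh/google_compute_engine   # assumed key path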

AWS provision with ansible

I am getting an error when I try to provision an EC2 instance. This is how I set up my environment.
I put my AWS credentials in ~/.boto
cat /etc/ansible/hosts
[local]
localhost
cat /etc/ansible/ec2-vars/testserver.yml
ec2_keypair: "ansible"
ec2_security_group: "sg-*******"
ec2_instance_type: "t2.micro"
ec2_image: "ami-********"
ec2_subnet_ids: ['subnet-*******','subnet-REDACTED','subnet-REDACTED']
ec2_region: "us-east-1"
ec2_tag_Name: "testserver"
ec2_tag_Type: "testserver"
ec2_tag_Environment: "development"
ec2_volume_size: 8
cat /etc/ansible/provision-ec2.yml
---
- hosts: localhost
  connection: local
  gather_facts: false
  user: root
  pre_tasks:
    - include_vars: ec2_vars/{{type}}.yml
  roles:
    - provision-ec2
cat /etc/ansible/roles/provision-ec2/tasks/main.yml
---
- name: Provision EC2 Box
  local_action:
    module: ec2
    key_name: "{{ ec2_keypair }}"
    group_id: "{{ ec2_security_group }}"
    instance_type: "{{ ec2_instance_type }}"
    image: "{{ ec2_image }}"
    vpc_subnet_id: "{{ ec2_subnet_ids|random }}"
    region: "{{ ec2_region }}"
    instance_tags: '{"Name":"{{ec2_tag_Name}}","Type":" {{ec2_tag_Type}}","Environment":"{{ec2_tag_Environment}}"}'
    assign_public_ip: yes
    wait: true
    count: 1
    volumes:
      - device_name: /dev/sda1
        device_type: gp2
        volume_size: "{{ ec2_volume_size }}"
        delete_on_termination: true
  register: ec2
- debug: var=item
  with_items: ec2.instances
- add_host: name={{ item.public_ip }} >
    groups=tag_Type_{{ec2_tag_Type}},tag_Environment_{{ec2_tag_Environment}}
    ec2_region={{ec2_region}}
    ec2_tag_Name={{ec2_tag_Name}}
    ec2_tag_Type={{ec2_tag_Type}}
    ec2_tag_Environment={{ec2_tag_Environment}}
    ec2_ip_address={{item.public_ip}}
  with_items: ec2.instances
- name: Wait for the instances to boot by checking the ssh port
  wait_for: host={{item.public_ip}} port=22 delay=60 timeout=320 state=started
  with_items: ec2.instances
Now I run the following command and this is what I get.
[root@ip-**-**-*** ansible]# ansible-playbook -vv -i localhost, -e "type=testservers" provision-ec2.yml
Using /etc/ansible/ansible.cfg as config file
PLAYBOOK: provision-ec2.yml ****************************************************
1 plays in provision-ec2.yml
PLAY [localhost] ***************************************************************
TASK [include_vars] ************************************************************
task path: /etc/ansible/provision-ec2.yml:7
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "file": "/etc/ansible/ec2_vars/testservers.yml", "msg": "Source file not found."}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit @provision-ec2.retry
PLAY RECAP *********************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1
Please help.
New error:
TASK [provision-ec2 : Provision EC2 Box] ***************************************
task path: /etc/ansible/roles/provision-ec2/tasks/main.yml:2
fatal: [localhost -> localhost]: FAILED! => {"changed": false, "failed": true, "msg": "No handler was ready to authenticate. 1 handlers were checked. ['HmacAuthV4Handler'] Check your credentials"}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit @provision-ec2.retry
PLAY RECAP *********************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1
You are mixing underscores and hyphens. The vars file lives in a hyphenated directory:
cat /etc/ansible/ec2-vars/testserver.yml
but the playbook includes it from an underscored one:
include_vars: ec2_vars/{{type}}.yml
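A minimal sketch of the corrected pre_task follows (note also that the file shown is testserver.yml, so the -e value would need to be testserver rather than testservers):
pre_tasks:
  - include_vars: /etc/ansible/ec2-vars/{{ type }}.yml
For the new "No handler was ready to authenticate" error, the first question on this page covers the same situation: pass aws_access_key/aws_secret_key to the ec2 module explicitly, or make sure the boto credentials are visible to the user the play actually runs as.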

Creating n new instances in AWS EC2 VPC and then configuring them

I'm having a really hard time doing what seems like a fairly standard task, so I'm hoping somebody can help me. I've googled this like crazy and most of the examples are not in a VPC or use a deprecated structure that makes them wrong or unusable in my use case.
Here are my goals:
I want to launch a whole mess of new instances in my VPC (the same code below has 3 but it could be a hundred)
I want to wait for those instances to come alive
I then want to configure those instances (ssh into them, change hostname, enable some services, etc. etc.)
Now I could probably do this in 2 steps: create the instances in one playbook, wait for them to settle down, then run a 2nd playbook to configure them. That's probably what I'm going to do now because I want to get moving, but there has to be a one-shot answer to this.
Here's what I have so far for a playbook
---
- hosts: localhost
  connection: local
  gather_facts: False
  tasks:
    - name: Provision Lunch
      with_items:
        - hostname: eggroll1
        - hostname: eggroll2
        - hostname: eggroll3
      ec2:
        region: us-east-1
        key_name: eggfooyong
        vpc_subnet_id: subnet-8675309
        instance_type: t2.micro
        image: ami-8675309
        wait: true
        group_id: sg-8675309
        exact_count: 1
        count_tag:
          Name: "{{ item.hostname }}"
        instance_tags:
          Name: "{{ item.hostname }}"
          role: "supper"
          ansibleowned: "True"
      register: ec2
    - name: Wait for SSH to come up
      wait_for: host={{ item.private_ip }} port=22 delay=60 timeout=900 state=started
      with_items: '{{ec2.instances}}'
    - name: Update hostname on instances
      hostname: name={{ item.private_ip }}
      with_items: '{{ec2.instances}}'
And that doesn't work. What I get is:
TASK [Wait for SSH to come up] *************************************************
[DEPRECATION WARNING]: Skipping task due to undefined Error, in the future this will be a fatal error.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting
deprecation_warnings=False in ansible.cfg.
TASK [Update hostname on instances] ********************************************
[DEPRECATION WARNING]: Skipping task due to undefined Error, in the future this will be a fatal error.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting
deprecation_warnings=False in ansible.cfg.
Which makes me sad. This is my latest incarnation of that playbook, but I've tried to rewrite it using every example I can find on the internet. Most of them have with_items written in a different way, but Ansible tells me that way is deprecated, and then fails.
So far ansible has been fun and easy, but this is making me want to toss my laptop across the street.
Any suggestions? Should I be using register and with_items at all? Would I be better off using something like this:
add_host: hostname={{item.public_ip}} groupname=deploy
instead? I'm wide open to a rewrite here. I'm going to go write this up in 2 playbooks and would love to get suggestions.
Thanks!
****EDIT****
Now it's just starting to feel broken or seriously changed. I've googled dozens of examples and they all are written the same way and they all fail with the same error. This is my simple playbook now:
---
- hosts: localhost
  connection: local
  gather_facts: False
  vars:
    builderstart: 93
    builderend: 94
  tasks:
    - name: Provision Lunch
      ec2:
        region: us-east-1
        key_name: dakey
        vpc_subnet_id: subnet-8675309
        instance_type: t2.micro
        image: ami-8675309
        wait: True
        group_id: sg-OU812
        exact_count: 1
        count_tag:
          Name: "{{ item }}"
        instance_tags:
          Name: "{{ item }}"
          role: "dostuff"
          extracheese: "True"
      register: ec2
      with_sequence: start="{{builderstart}}" end="{{builderend}}" format=builder%03d
    - name: the newies
      debug: msg="{{ item }}"
      with_items: "{{ ec2.instances }}"
It really couldn't be more straightforward. No matter how I write it, no matter how I vary it, I get the same basic error:
[DEPRECATION WARNING]: Skipping task due to undefined Error, in the
future this will be a fatal error.: 'dict object' has no attribute
'instances'.
So it looks like it's the with_items: "{{ ec2.instances }}" line that's causing the error.
I've used debug to print out ec2 and that error looks accurate. It looks like the structure changed to me. It looks like ec2 now contains a dictionary with results as a key to another dictionary object and that instances is a key in that dictionary. But I can't find a sane way to access the data.
For what it's worth, I've tried accessing this in 2.0.1, 2.0.2, and 2.2 and I get the same problem in every case.
Are the rest of you using 1.9 or something? I can't find an example anywhere that works. It's very frustrating.
Thanks again for any help.
Don't do it like this:
- name: Provision Lunch
  with_items:
    - hostname: eggroll1
    - hostname: eggroll2
    - hostname: eggroll3
  ec2:
    region: us-east-1
Because by using it that way you lose all the instance info from ec2 in your item.
You are receiving the following output:
TASK [Launch instance] *********************************************************
changed: [localhost] => (item={u'hostname': u'eggroll1'})
changed: [localhost] => (item={u'hostname': u'eggroll2'})
but item should be like this:
changed: [localhost] => (item={u'kernel': None, u'root_device_type': u'ebs', u'private_dns_name': u'ip-172-31-29-85.ec2.internal', u'public_ip': u'54.208.138.217', u'private_ip': u'172.31.29.85', u'id': u'i-003b63636e7ffc27c', u'ebs_optimized': False, u'state': u'running', u'virtualization_type': u'hvm', u'architecture': u'x86_64', u'ramdisk': None, u'block_device_mapping': {u'/dev/sda1': {u'status': u'attached', u'delete_on_termination': True, u'volume_id': u'vol-37581295'}}, u'key_name': u'eggfooyong', u'image_id': u'ami-fce3c696', u'tenancy': u'default', u'groups': {u'sg-aabbcc34': u'ssh'}, u'public_dns_name': u'ec2-54-208-138-217.compute-1.amazonaws.com', u'state_code': 16, u'tags': {u'ansibleowned': u'True', u'role': u'supper'}, u'placement': u'us-east-1d', u'ami_launch_index': u'1', u'dns_name': u'ec2-54-208-138-217.compute-1.amazonaws.com', u'region': u'us-east-1', u'launch_time': u'2016-04-19T08:19:16.000Z', u'instance_type': u't2.micro', u'root_device_name': u'/dev/sda1', u'hypervisor': u'xen'})
Try to use the following code:
- name: Create a sandbox instance
  hosts: localhost
  gather_facts: False
  vars:
    keypair: eggfooyong
    instance_type: t2.micro
    security_group: ssh
    image: ami-8675309
    region: us-east-1
    subnet: subnet-8675309
    instance_names:
      - eggroll1
      - eggroll2
  tasks:
    - name: Launch instance
      ec2:
        key_name: "{{ keypair }}"
        group: "{{ security_group }}"
        instance_type: "{{ instance_type }}"
        image: "{{ image }}"
        wait: true
        region: "{{ region }}"
        vpc_subnet_id: "{{ subnet }}"
        assign_public_ip: no
        count: "{{ instance_names | length }}"
      register: ec2
    - name: tag instances
      ec2_tag:
        resource: '{{ item.0.id }}'
        region: '{{ region }}'
        tags:
          Name: '{{ item.1 }}'
          role: "supper"
          ansibleowned: "True"
      with_together:
        - '{{ ec2.instances }}'
        - '{{ instance_names }}'
    - name: Wait for SSH to come up
      wait_for: host={{ item.private_ip }} port=22 delay=60 timeout=320 state=started
      with_items: '{{ ec2.instances }}'
This assumes that your Ansible host is located inside the VPC (hence assign_public_ip: no and waiting on the private IP).
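If the control host is not inside the VPC, a small variant of the last task (a sketch, assuming you also switch the launch task above to assign_public_ip: yes so the instances actually get public addresses) would wait on the public IP instead:
    - name: Wait for SSH to come up
      wait_for: host={{ item.public_ip }} port=22 delay=60 timeout=320 state=started
      with_items: '{{ ec2.instances }}'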
To achieve this goal, I have written a really small filter plugin get_ec2_info.
Create a directory named filter_plugins.
Create a plugin file get_ec2_info.py in it with the following content:
from jinja2.utils import soft_unicode


class FilterModule(object):
    def filters(self):
        return {
            'get_ec2_info': get_ec2_info,
        }


def get_ec2_info(list, ec2_key):
    # Walk every registered loop result and collect the requested key
    # (e.g. 'public_ip') from each launched instance.
    ec2_info = []
    for item in list:
        for ec2 in item['instances']:
            ec2_info.append(ec2[ec2_key])
    return ec2_info
Then you can use this in your playbook:
---
- hosts: localhost
  connection: local
  gather_facts: False
  tasks:
    - name: Provision Lunch
      ec2:
        region: us-east-1
        key_name: eggfooyong
        vpc_subnet_id: subnet-8675309
        instance_type: t2.micro
        image: ami-8675309
        wait: true
        group_id: sg-8675309
        exact_count: 1
        count_tag:
          Name: "{{ item.hostname }}"
        instance_tags:
          Name: "{{ item.hostname }}"
          role: "supper"
          ansibleowned: "True"
      register: ec2
      with_items:
        - hostname: eggroll1
        - hostname: eggroll2
        - hostname: eggroll3
    - name: Create SSH Group to login dynamically to EC2 Instance(s)
      add_host:
        hostname: "{{ item }}"
        groupname: my_ec2_servers
      with_items: "{{ ec2.results | get_ec2_info('public_ip') }}"
    - name: Wait for SSH to come up on EC2 Instance(s)
      wait_for:
        host: "{{ item }}"
        port: 22
        state: started
      with_items: "{{ ec2.results | get_ec2_info('public_ip') }}"

# CALL THE DYNAMIC GROUP IN THE SAME PLAYBOOK
- hosts: my_ec2_servers
  become: yes
  remote_user: ubuntu
  gather_facts: yes
  tasks:
    - name: DO YOUR TASKS HERE
EXTRA INFORMATION:
using ansible 2.0.1.0
assuming you are spinning up ubuntu instances; if not, then change the value of remote_user: ubuntu
assuming the ssh key is properly configured
Please consult these github repos for more help:
ansible-aws-role-1
ansible-aws-role-2
I think this would be helpful for debugging.
https://www.middlewareinventory.com/blog/ansible-dict-object-has-no-attribute-stdout-or-stderr-how-to-resolve/
The registered ec2 variable is a dict type, and it has a key results.
The results key holds many elements, including dicts and lists, like below:
{
    "msg": {
        "results": [
            {
                "invocation": {
                },
                "instances": [],
                "changed": false,
                "tagged_instances": [
                    {
                    }
                ],
                "instance_ids": null,
                "failed": false,
                "item": [
                ],
                "ansible_loop_var": "item"
            }
        ],
        "msg": "All items completed",
        "changed": false
    },
    "_ansible_verbose_always": true,
    "_ansible_no_log": false,
    "changed": false
}
So you can get at the desired data using dot notation, for instance item.changed, which holds the boolean value false.
- debug:
    msg: "{{ item.changed }}"
  loop: "{{ ec2.results }}"

How to add and mount volumes for EC2 instance with Ansible

I am trying to learn Ansible with all my AWS stuff. So the first task I want to do is create a basic EC2 instance with mounted volumes.
I wrote the playbook according to the Ansible docs, but it doesn't really work. My playbook:
# The play operates on the local (Ansible control) machine.
- name: Create a basic EC2 instance v.1.1.0 2015-10-14
  hosts: localhost
  connection: local
  gather_facts: false

  # Vars.
  vars:
    hostname: Test_By_Ansible
    keypair: MyKey
    instance_type: t2.micro
    security_group: my security group
    image: ami-d05e75b8       # Ubuntu Server 14.04 LTS (HVM)
    region: us-east-1         # US East (N. Virginia)
    vpc_subnet_id: subnet-b387e763
    sudo: True
    locale: ru_RU.UTF-8

  # Launch instance. Register the output.
  tasks:
    - name: Launch instance
      ec2:
        key_name: "{{ keypair }}"
        group: "{{ security_group }}"
        instance_type: "{{ instance_type }}"
        image: "{{ image }}"
        region: "{{ region }}"
        vpc_subnet_id: "{{ vpc_subnet_id }}"
        assign_public_ip: yes
        wait: true
        wait_timeout: 500
        count: 1              # number of instances to launch
        instance_tags:
          Name: "{{ hostname }}"
          os: Ubuntu
          type: WebService
      register: ec2

    # Create and attach a volumes.
    - name: Create and attach a volumes
      ec2_vol:
        instance: "{{ item.id }}"
        name: my_existing_volume_Name_tag
        volume_size: 1        # in GB
        volume_type: gp2
        device_name: /dev/sdf
        with_items: ec2.instances
      register: ec2_vol

    # Configure mount points.
    - name: Configure mount points - mount device by name
      mount: name=/system src=/dev/sda1 fstype=ext4 opts='defaults nofail 0 2' state=present
      mount: name=/data src=/dev/xvdf fstype=ext4 opts='defaults nofail 0 2' state=present
But this playbook crashes on the volume task with this error:
fatal: [localhost] => One or more undefined variables: 'item' is undefined
How can I resolve this?
You seem to have copy/pasted a lot of stuff all at once, and rather than needing a specific bit of information that SO can help you with, you need to go off and learn the basics of Ansible so you can think through all the individual bits that don't match up in this playbook.
Let's look at the specific error that you're hitting - item is undefined. It's triggered here:
# Create and attach a volumes.
- name: Create and attach a volumes
  ec2_vol:
    instance: "{{ item.id }}"
    name: my_existing_volume_Name_tag
    volume_size: 1        # in GB
    volume_type: gp2
    device_name: /dev/sdf
    with_items: ec2.instances
  register: ec2_vol
This task is meant to be looping through every item in a list, and in this case the list is ec2.instances. It isn't, because with_items should be de-indented so it sits level with register.
If you had a list of instances (which you don't, as far as I can see), it'd use the id of each one in that {{ item.id }} line... but then probably throw an error, because I don't think they'd all be allowed to have the same name.
Go forth and study, and you can figure out this kind of detail.
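For reference, a sketch of that section with the loop and register moved to the task level, the shared Name tag dropped (per the naming concern above), and the two mount: lines split into separate tasks using the mount module's comma-separated opts; all of these adjustments are assumptions layered on the original playbook:
- name: Create and attach a volume per launched instance
  ec2_vol:
    instance: "{{ item.id }}"
    volume_size: 1        # in GB
    volume_type: gp2
    device_name: /dev/sdf
  with_items: "{{ ec2.instances }}"
  register: ec2_vol

- name: Configure mount point for the root device
  mount: name=/system src=/dev/sda1 fstype=ext4 opts='defaults,nofail' state=present

- name: Configure mount point for the attached volume
  mount: name=/data src=/dev/xvdf fstype=ext4 opts='defaults,nofail' state=present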

Ansible Dynamic Inventory fails to get the latest ec2 information

I am using the ec2.py dynamic inventory for provisioning with Ansible.
I have placed ec2.py under /etc/ansible/hosts and marked it executable.
I also have the ec2.ini file in /etc/ansible/hosts.
[ec2]
regions = us-west-2
regions_exclude = us-gov-west-1,cn-north-1
destination_variable = public_dns_name
vpc_destination_variable = ip_address
route53 = False
all_instances = True
all_rds_instances = False
cache_path = ~/.ansible/tmp
cache_max_age = 0
nested_groups = False
group_by_instance_id = True
group_by_region = True
group_by_availability_zone = True
group_by_ami_id = True
group_by_instance_type = True
group_by_key_pair = True
group_by_vpc_id = True
group_by_security_group = True
group_by_tag_keys = True
group_by_tag_none = True
group_by_route53_names = True
group_by_rds_engine = True
group_by_rds_parameter_group = True
Above is my ec2.ini file
---
- hosts: localhost
  connection: local
  gather_facts: yes
  vars_files:
    - ../group_vars/dev_vpc
    - ../group_vars/dev_sg
    - ../hosts_vars/ec2_info
  vars:
    instance_type: t2.micro
  tasks:
    - name: Provisioning EC2 instance
      local_action:
        module: ec2
        region: "{{ region }}"
        key_name: "{{ key }}"
        instance_type: "{{ instance_type }}"
        image: "{{ ami_id }}"
        wait: yes
        group_id: ["{{ sg_npm }}", "{{sg_ssh}}"]
        vpc_subnet_id: "{{ PublicSubnet }}"
        source_dest_check: false
        instance_tags: '{"Name": "EC2", "Environment": "Development"}'
      register: ec2
    - name: associate new EIP for the instance
      local_action:
        module: ec2_eip
        region: "{{ region }}"
        instance_id: "{{ item.id }}"
      with_items: ec2.instances
    - name: Waiting for NPM Server to come-up
      local_action:
        module: wait_for
        host: "{{ ec2 }}"
        state: started
        delay: 5
        timeout: 200

- include: ec2-configure.yml
Now the configure script (ec2-configure.yml) is as follows:
- name: Configure EC2 server
  hosts: tag_Name_EC2
  user: ec2-user
  sudo: True
  gather_facts: True
  tasks:
    - name: Install nodejs related packages
      yum: name={{ item }} enablerepo=epel state=present
      with_items:
        - nodejs
        - npm
However, when the configure script is called, the second play results in no hosts found.
If I execute ec2-configure.yml on its own and the EC2 server is up and running, then it is able to find it and configure it.
I added the wait_for to make sure that the instance is in the running state before ec2-configure.yml is called.
Would appreciate it if anyone could point out my error. Thanks
After researching I came to know that the dynamic inventory doesn't refresh in the middle of a playbook run; it is only re-read when you execute a playbook separately.
However, I was able to resolve the issue by using the add_host module.
- name: Add Server to inventory
  local_action: add_host hostname={{ item.public_ip }} groupname=webserver
  with_items: webserver.instances
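For completeness, a sketch of a follow-up play that targets the ad-hoc group in the same run (reusing the configure tasks from the question; the group name webserver comes from the add_host task above):
- name: Configure the freshly added servers
  hosts: webserver
  user: ec2-user
  sudo: True
  gather_facts: True
  tasks:
    - name: Install nodejs related packages
      yum: name={{ item }} enablerepo=epel state=present
      with_items:
        - nodejs
        - npm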
With Ansible 2.0+, you can refresh the dynamic inventory in the middle of the playbook with a task like this:
- meta: refresh_inventory
To extend this a bit: if you are having problems with the cache in your playbook, then you can use it like this:
- name: Refresh the ec2.py cache
  shell: "./inventory/ec2.py --refresh-cache"
  changed_when: no

- name: Refresh inventory
  meta: refresh_inventory
where ./inventory is the path to your dynamic inventory; please adjust it accordingly.
Hope this helps.
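Tied back to the question's layout, a rough sketch of the ordering (the placement and the ec2.py path are assumptions on my part) would be to refresh right after the provisioning play's tasks and before the playbook-level include:
    - name: Refresh the ec2.py cache
      shell: "/etc/ansible/hosts/ec2.py --refresh-cache"
      changed_when: no

    - name: Refresh inventory so the tag_Name_EC2 group exists for the next play
      meta: refresh_inventory

- include: ec2-configure.yml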
Configure EC2 server play can't find any hosts from EC2 dynamic inventory because the new instance was added in the first play of the playbook - during the same execution. Group tag_Name_EC2 didn't exist in the inventory when the inventory was read and thus can't be found.
When you run the same playbook again Configure EC2 server should find the group.
We have used the following workaround to guide users in this kind of situation.
First, provision the instance:
tasks:
  - name: Provisioning EC2 instance
    local_action:
      module: ec2
      ...
    register: ec2
Then add a new play before ec2-configure.yml. The play uses the ec2 variable that was registered in Provisioning EC2 instance, and will fail and exit the playbook if any instances were launched:
- name: Stop and request a re-run if any instances were launched
  hosts: localhost
  gather_facts: no
  tasks:
    - name: Stop if instances were launched
      fail: msg="Re-run the playbook to load group variables from EC2 dynamic inventory for the just launched instances!"
      when: ec2.changed

- include: ec2-configure.yml
You can also refresh the cache:
ec2.py --refresh-cache
Or, if you are using it as the Ansible hosts file:
/etc/ansible/hosts --refresh-cache