I see the Ansible EC2 module's capability to provision / start / stop / terminate instances. However, is there a way to look up / query instance details such as the private IP, public IP, etc.?
My use case is to obtain the public IP (not the Elastic IP), which changes across stop/start cycles, and to update the Route53 DNS records accordingly.
Any ideas?
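Roughly, the end state I am after looks like the sketch below, where example.com and the record name are placeholders; the part I am missing is how to look up the public IP for a running instance in the first place:

- name: Update the A record with the instance's current public IP
  route53:
    state: present
    zone: example.com                      # placeholder hosted zone
    record: myserver.example.com           # placeholder record
    type: A
    ttl: 300
    value: "{{ myinstance.public_ip }}"    # the value I don't know how to fetch
    overwrite: yes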
Did you set wait: True? It makes the module wait for the instance to reach the running state. I never had issues with the following; I was able to read the public IP from the registered result. If you still have issues, use wait_for until the IP is available, or post your script here.
- name: Start the instance if not running
  ec2:
    instance_ids: myinstanceid
    region: us-east-1
    state: running
    wait: True
  register: myinst
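For example, a minimal check of what came back (sketch):

- name: Show the public IP from the registered result
  debug:
    msg: "{{ myinst.instances[0].public_ip }}"

The ec2 module returns the affected instances under instances, each entry carrying public_ip and private_ip fields.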
You can register the new boxes into an ec2 variable, wait for them to get private and public IPs, and then access them like so:
- name: provision new boxes
  hosts: localhost
  gather_facts: False
  tasks:
    - name: Provision a set of instances
      ec2:
        group: "{{ aws_security_group }}"
        instance_type: "{{ aws_instance_type }}"
        image: "{{ aws_ami_id }}"
        region: "{{ aws_region }}"
        vpc_subnet_id: "{{ aws_vpc_subnet_id }}"
        key_name: "{{ aws_key_name }}"
        wait: true
        count: "{{ num_machines }}"
        instance_tags: "{{ tags }}"
      register: ec2
    - name: Add all instance public IPs to public group
      add_host: hostname={{ item.public_ip }} groups=new_public_ips
      with_items: "{{ ec2.instances }}"
    - name: Add all instance private IPs to private group
      add_host: hostname={{ item.private_ip }} groups=new_private_ips
      with_items: "{{ ec2.instances }}"
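A later play in the same playbook can then target those in-memory groups directly, for example (a sketch; the remote user is an assumption about your AMI):

- name: configure the new boxes over their public IPs
  hosts: new_public_ips
  remote_user: ec2-user      # assumption: adjust for your AMI
  tasks:
    - name: verify connectivity
      ping: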
I am following the Ansible documentation here: https://docs.ansible.com/ansible/latest/modules/ec2_eip_module.html in order to provision an EC2 instance with a new Elastic IP. The parameter release_on_disassociation is set to yes, but after the disassociation the Elastic IP is not released.
First, I created the instances with their Elastic IPs:
- name: provision new instances with ec2
  ec2:
    keypair: mykey
    instance_type: c1.medium
    image: ami-40603AD1
    wait: yes
    group: webserver
    count: 3
  register: ec2

- name: associate new elastic IPs with each of the instances
  ec2_eip:
    device_id: "{{ item }}"
    release_on_disassociation: yes
  loop: "{{ ec2.instance_ids }}"
Afterwards, the elastic IP is disassociated:
- name: Gather EC2 facts
  ec2_instance_facts:
    region: "{{ region }}"
    filters:
      "tag:Type": "{{ server_type }}"
  register: ec2

- name: disassociate an elastic IP with a device
  ec2_eip:
    device_id: '{{ item.instance_id }}'
    ip: '{{ item.public_ip_address }}'
    state: absent
  when: item.public_ip_address is defined
  with_items: "{{ ec2.instances }}"
ansible --version
ansible 2.8.4
Python Version is 3.7.4
I thought perhaps release_on_disassociation was an AWS feature, but even if it is, it doesn't matter for your case, because the module does not examine that parameter during state: present actions. It only consults it during state: absent.
So I believe you need to move that parameter from the top ec2_eip down to the bottom one:
- name: disassociate an elastic IP with a device
  ec2_eip:
    device_id: '{{ item.instance_id }}'
    ip: '{{ item.public_ip_address }}'
    release_on_disassociation: yes
    state: absent
  when: item.public_ip_address is defined
  with_items: "{{ ec2.instances }}"
I have some (i.e. 3) existing Elastic IPs created in AWS earlier. I am trying to provision 3 AWS EC2 instances and associate those IPs with the newly created instances. I need to use those existing Elastic IPs because they are whitelisted with my external partner for payment processing. I am not sure how to do that. I have the playbook below to create the EC2 instances:
- name: Provision a set of instances
  ec2:
    key_name: "{{ key_name }}"
    group_id: "{{ security_group }}"
    instance_type: "{{ instance_type }}"
    image: "{{ ami_id }}"
    wait: true
    exact_count: "{{ instances }}"
    count_tag:
      Name: Demo
    instance_tags:
      Name: "{{ application }}"
    region: "{{ region }}"
  register: ec2_instances

- name: Store EC2 instance IPs to provision
  add_host:
    hostname: "{{ item.public_ip }}"
    groupname: ec2_instance_ips
  with_items: "{{ ec2_instances.tagged_instances }}"
The second task gets the instances ready for configuration.
I just need to associate the EIPs with those instances next.
Thanks,
Philip
Here you go, pulled from one of my roles.
- name: associate with our instance
  ec2_eip:
    reuse_existing_ip_allowed: true
    instance_id: "{{ ec2_instance }}"
    public_ip: "{{ eip }}"
    state: present
    region: "{{ ec2_region | default('us-east-1') }}"
    in_vpc: true
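Since you have three pre-existing addresses, one way to pair them with the new instances (a sketch; existing_eips is a hypothetical variable holding your whitelisted IPs) is to walk the two lists in lockstep:

vars:
  existing_eips:
    - 52.0.0.1     # placeholders for your whitelisted addresses
    - 52.0.0.2
    - 52.0.0.3

tasks:
  - name: associate each existing EIP with one new instance
    ec2_eip:
      instance_id: "{{ item.0.id }}"
      public_ip: "{{ item.1 }}"
      state: present
      in_vpc: true
      region: "{{ region }}"
    with_together:
      - "{{ ec2_instances.tagged_instances }}"
      - "{{ existing_eips }}"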
I am trying to spin up a number of EC2 instances using Ansible in different availability zones, and hence different subnets. What I am confused about is how to pass the right subnet for the right zone.
Assume I am passing my subnet variables as:
subnet_id_a: "subnet-9c3e38f8"
subnet_id_b: "subnet-88d171ff"
These subnets are in different AZs, and I need to create some n instances spread across those AZs.
I am trying to use:
- name: Create ES Master Node instances
  ec2:
    key_name: "{{ aws_key_name }}"
    instance_type: "{{ aws_instance_type }}"
    image: "{{ aws_ami }}"
    wait: yes
    wait_timeout: 500
    count: "{{ master_instance_count }}"
    instance_tags:
      Name: "{{ master_tag_name }}"
    volumes:
      - device_name: /dev/sda1
        volume_type: gp2
        volume_size: 100
    vpc_subnet_id: "{{ subnet_id }}"
    zone: "{{ aws_region }}{{ item.0 }}"
    region: "{{ aws_region }}"
    group: "{{ aws_sec_group_name }}"
  with_items:
    - [ 'a', 'b' ]
  register: ec2_details
But I am not sure how to pass the corresponding subnets so that each instance gets spun up in a different AZ. Please help.
You can refactor the variable like this:
subnet_ids:
  a: subnet-9c3e38f8
  b: subnet-88d171ff
And in your task:
...
vpc_subnet_id: "{{ subnet_ids[item] }}"
...
with_items: [a, b]
I suppose zone is not necessary, because a subnet is already bound to a specific AZ.
And you don't need to nest your loop list like - [ 'a', 'b' ]; use a flat [a, b] so you can reference item instead of item.0.
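Put together, the relevant parts of the task would look like this sketch (other parameters stay as in your original):

- name: Create ES Master Node instances
  ec2:
    key_name: "{{ aws_key_name }}"
    instance_type: "{{ aws_instance_type }}"
    image: "{{ aws_ami }}"
    wait: yes
    count: "{{ master_instance_count }}"
    vpc_subnet_id: "{{ subnet_ids[item] }}"
    region: "{{ aws_region }}"
    group: "{{ aws_sec_group_name }}"
  with_items: [a, b]
  register: ec2_details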
I have written two roles with Ansible. The first role (i.e. provision) is executed locally on an instance that has the required IAM permissions to provision EC2 instances (see below):
- name: Provision "{{ count }}" ec2 instances in "{{ region }}"
  ec2:
    key_name: "{{ key_name }}"
    instance_type: "{{ instance_type }}"
    image: "{{ image }}"
    ...
    exact_count: "{{ count }}"
    count_tag: "{{ count_tag }}"
    instance_tags:
      ...
  register: ec2
I then add the private IP addresses to the hosts file.
- name: Add the newly created EC2 instances to the local host file
  local_action: lineinfile
                dest="./hosts"
                regexp={{ item.private_ip }}
                insertafter="[sit]"
                line={{ item.private_ip }}
  with_items: "{{ ec2.instances }}"
I wait for SSH to be available.
- name: Wait for SSH process to be available on "{{ sit }}"
  wait_for:
    host: "{{ item.private_ip }}"
    port: 22
    delay: 60
    timeout: 320
    state: started
  with_items: "{{ ec2.instances }}"
The second role (i.e. setupEnv) sets up environment variables on the 'sit' hosts, such as user/group directories. I attempt to run the roles sequentially (see the main.yml playbook below):
- hosts: local
  connection: local
  gather_facts: false
  user: svc_ansible_lab
  roles:
    - provision

- hosts: sit
  connection: ssh
  gather_facts: true
  user: ec2-user
  roles:
    - setupEnv
However, only the first role gets executed, on localhost. Ansible waits until SSH is available on the provisioned instances and then the process finishes without attempting the setupEnv role.
Is there a way I can make sure the second role is executed on the sit hosts once SSH is available?
The inventory file will not be automatically re-sourced in between the plays.
Instead of modifying the inventory file, use the add_host module and the in-memory inventory:
- name: Add the newly created EC2 instances to the in-memory inventory
  add_host:
    hostname: "{{ item.private_ip }}"
    groups: sit
  with_items: "{{ ec2.instances }}"
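With that in place, your second play (hosts: sit) will see the new machines in the same run without any inventory file changes.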
Alternatively, you might use the meta module with the refresh_inventory parameter to force Ansible to re-read the inventory file:
- meta: refresh_inventory
I'm trying to get Ansible to bring up new EC2 boxes for me with a volume size larger than the default ~8 GB. I've added the volumes option with volume_size specified, but when I run the play, the volumes option seems to be ignored and I still get a new box with ~8 GB. The relevant part of my playbook is as follows:
- name: provision new boxes
  hosts: localhost
  gather_facts: False
  tasks:
    - name: Provision a set of instances
      ec2:
        group: "{{ aws_security_group }}"
        instance_type: "{{ aws_instance_type }}"
        image: "{{ aws_ami_id }}"
        region: "{{ aws_region }}"
        vpc_subnet_id: "{{ aws_vpc_subnet_id }}"
        key_name: "{{ aws_key_name }}"
        wait: true
        count: "{{ num_machines }}"
        instance_tags: "{{ tags }}"
        volumes:
          - device_name: /dev/sda1
            volume_size: 15
      register: ec2
What am I doing wrong?
So, I just updated Ansible to 1.9.4 and now all of a sudden it works. So there's my answer: the code above is fine; Ansible was just broken.