Ansible - ec2_eni reuse ENI - amazon-web-services

I have a pretty basic Ansible playbook that creates an ENI:
---
- name: create ENIs
  hosts: localhost
  tasks:
    - name: create eni 1
      ec2_eni:
        subnet_id: subnet-xxxxxxx
        region: us-east-1
        description: my-eni
        state: present
What I am trying to do: when I rerun this playbook, it should not create a new ENI but instead verify that the ENI it previously created still exists.
I cannot pass a private IP address because I want to reuse the playbook across the multiple accounts we have.
Is it possible to do so?
This is the ENI Module I am using:
http://docs.ansible.com/ansible/ec2_eni_module.html

Assuming the ENI description is unique (a very important assumption):
tasks:
  - ec2_eni_facts:
      region: us-east-1
      filters:
        description: my-eni
    register: eni_facts

  - name: create eni 1 if not present
    ec2_eni:
      subnet_id: subnet-xxxxxxx
      region: us-east-1
      description: my-eni
      state: present
    when: not eni_facts.interfaces
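Note that on newer Ansible releases (2.9+ with the amazon.aws collection) the facts module was renamed. A sketch of the same lookup under the newer name (the registered result key may be network_interfaces rather than interfaces depending on your version, so check the module docs for your release):

```yaml
- amazon.aws.ec2_eni_info:
    region: us-east-1
    filters:
      description: my-eni
  register: eni_info
```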

Related

Ansible - play role even in case of previous failure

I am building an amazon AMI builder playbook. The idea is:
spawn up an EC2 instance
provision it
register an AMI
terminate the EC2 instance
I would like to terminate the EC2 instance in any case, even if a previous step failed.
My playbook currently looks like this (the spawned EC2 instance is dynamically added to the ec2_servers group in the aws_spawn_ec2 role):
---
- hosts: localhost
  connection: local
  gather_facts: False
  roles:
    - role: aws_spawn_ec2
      vars:
        ec2_host_group: ec2_servers

- hosts: ec2_servers
  roles:
    - role: provision_ec2

- hosts: localhost
  connection: local
  gather_facts: False
  roles:
    - role: aws_ami_register

- hosts: localhost
  connection: local
  gather_facts: False
  roles:
    - role: aws_terminate_ec2
I would like the last play to be run even if a previous play failed. Is there a (preferably clean) way of doing that?
[EDIT]
I tried @Z.Liu's answer and got the following error:
ERROR! 'delegate_to' is not a valid attribute for a IncludeRole
I then tried this:
- name: provision ec2
  include_role:
    name: provision_ec2
  apply:
    delegate_to: ec2_servers
But now I get this error:
TASK [provision ec2 : Check if reboot is required] **********************************************************************************************************
fatal: [localhost]: FAILED! => {"msg": "The conditional check 'reboot_required.stat.exists' failed. The error was: error while evaluating conditional (reboot_required.stat.exists): 'dict object' has no attribute 'stat'"}
I am on Ansible 2.9.10.
Thanks
Ansible method
You can leverage Ansible's delegate_to together with block/always:
delegate_to lets you run a task on another host.
always executes its tasks regardless of the results of the previous tasks.
- name: update AMI
  hosts: localhost
  tasks:
    - name: spawn new ec2 instance
      include_role:
        name: aws_spawn_ec2
      vars:
        ec2_host_group: ec2_servers

    - name: provision only if spawning ec2 succeeded
      block:
        - name: provision ec2
          include_role:
            name: provision_ec2
          delegate_to: ec2_servers

        - name: register aws AMI
          include_role:
            name: aws_ami_register
      always:
        - name: terminate ec2 instance regardless of the ami registration results
          include_role:
            name: aws_terminate_ec2
You can also use Packer; it makes building AMIs on AWS much easier.
https://www.packer.io/intro
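For reference, a minimal Packer template along these lines (the AMI ID, instance type, and playbook name are placeholders) provisions with Ansible and registers an AMI. Usefully for this question, Packer terminates the build instance itself, even when provisioning fails:

```json
{
  "builders": [{
    "type": "amazon-ebs",
    "region": "us-east-1",
    "source_ami": "ami-xxxxxxxx",
    "instance_type": "t2.micro",
    "ssh_username": "ubuntu",
    "ami_name": "my-app-{{timestamp}}"
  }],
  "provisioners": [{
    "type": "ansible",
    "playbook_file": "provision_ec2.yml"
  }]
}
```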

Running Cloudformation with Ansible, skips tasks

Please consider the following site.yml:
---
- name: Deployment Playbook
  hosts: localhost
  connection: local
  gather_facts: no
  environment:
    AWS_DEFAULT_REGION: "{{ lookup('env', 'AWS_DEFAULT_REGION') | default('us-east-1', true) }}"
  tasks:
    - import_tasks: tasks/network/vpc.yml
It runs tasks/network/vpc.yml, which deploys a VPC with a pair of public and private subnets, NAT gateways and routes, as defined below:
---
# VPC
- name: This deploys a VPC with a pair of public and private subnets spread across two Availability Zones. It deploys an Internet gateway, with a default route on the public subnets. It deploys a pair of NAT gateways (one in each zone), and default routes for them in the private subnets.
  cloudformation:
    stack_name: prod-vpc
    state: present
    region: us-east-1
    disable_rollback: true
    template: templates/infrastructure/network/vpc.yml
    template_parameters:
      EnvironmentName: "{{ environment_name }}"
      VpcCIDR: 10.40.0.0/16
      PublicSubnet1CIDR: 10.40.8.0/21
      PublicSubnet2CIDR: 10.40.16.0/21
      PrivateSubnet1CIDR: 10.40.24.0/21
      PrivateSubnet2CIDR: 10.40.32.0/21
    tags:
      Environment: "{{ env }}"
      Name: prod-vpc
      Stack: "{{ stack_name }}"
  when: vpc_stack is defined
  register: prod_vpc_stack
The given task should run a CloudFormation template, but it doesn't when I execute it:
$ ansible --version
ansible 2.4.2.0
config file = None
configured module search path = [u'/Users/gaurish/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/2.4.2.0_1/libexec/lib/python2.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 2.7.14 (default, Dec 10 2017, 14:22:32) [GCC 4.2.1 Compatible Apple LLVM 9.0.0 (clang-900.0.38)]
$ ansible-playbook site.yml
PLAY [Deployment Playbook] **********************************************************************************************************************
TASK [This deploys a VPC with a pair of public and private subnets spread across two Availability Zones. It deploys an Internet gateway, with a default route on the public subnets. It deploys a pair of NAT gateways (one in each zone), and default routes for them in the private subnets.] ***
skipping: [localhost]
PLAY RECAP **************************************************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=0
As you can see, Ansible is skipping the task for some reason. I just don't understand why. Does anyone know?
This looks trivial: the task runs only when vpc_stack is defined, but vpc_stack is not defined anywhere in the code you posted in the question, hence the task gets skipped.
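Assuming you do want the stack deployed, one way (the variable value here is illustrative; the condition only checks that the variable is defined) is to define vpc_stack when invoking the playbook:

```shell
ansible-playbook site.yml -e vpc_stack=true
```

Alternatively, drop the when: condition entirely if the task should always run.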

How to run playbook after creating ec2 with ansible

I have a playbook to create an EC2 instance, like this:
- name: Create an EC2 Instance
  hosts: localmachine
  connection: local
  vars_files:
    - vars/common.yml
  roles:
    - ec2
Now I also want to run another role inside that EC2 machine once it is created. I know how to wait for the EC2 instance to be created, but I don't know how to run a role on the newly created instance.
This is how I wait for it:
- name: Wait for instances to listen on port 22
  wait_for:
    state: started
    host: "{{ ec2_info.instances[0].private_ip }}"
    port: 22
  when: ec2_info|changed
but I want tasks after that which run different roles inside that EC2 instance.
There is a detailed AWS guide with a section on provisioning. The short answer is that you should register the results of your provisioning into a variable (it looks like you've already chosen ec2_info for this) and then add those into another group:
- name: Add all instance public IPs to host group
  add_host: hostname={{ item.public_ip }} groups=ec2hosts
  with_items: ec2_info.instances
You can then assign roles to that group as you would normally.
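For instance (the role name and remote user are hypothetical; adjust to your setup), a follow-up play in the same playbook can target the new group:

```yaml
- hosts: ec2hosts
  remote_user: ubuntu   # default user for your AMI; hypothetical
  roles:
    - my_app_role       # hypothetical role to run on the new instance
```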

start the stopped AWS instances using ansible playbook

I am trying to stop/start a particular group of instances listed in the hosts file under the group [target].
The following playbook works fine to stop the instances:
---
- hosts: target
  remote_user: ubuntu
  tasks:
    - name: Gather facts
      action: ec2_facts
    - name: Stop Instances
      local_action:
        module: ec2
        region: "{{ region }}"
        instance_ids: "{{ ansible_ec2_instance_id }}"
        state: stopped
But when I try to start these instances, this approach does not work: ec2_facts cannot SSH into the instances (since they are now stopped) to get the instance IDs.
---
- hosts: target
  remote_user: ubuntu
  tasks:
    - name: start instances
      local_action:
        module: ec2
        region: "{{ region }}"
        instance_ids: "{{ ansible_ec2_instance_id }}"
        state: running
I have already seen the documentation that uses a dynamic inventory file for hosts, as well as the approach of hard-coding the instance IDs. Instead, I want to start the instances whose IPs are listed in the target group of the hosts file.
Got the solution. The following Ansible task worked for me:
---
- name: Start instances
  hosts: localhost
  gather_facts: false
  connection: local
  vars:
    instance_ids:
      - 'i-XXXXXXXX'
    region: ap-southeast-1
  tasks:
    - name: Start the feature instances
      ec2:
        instance_ids: '{{ instance_ids }}'
        region: '{{ region }}'
        state: running
        wait: True
Here is the Blog post on How to start/stop ec2 instances with ansible
You have 2 options:
Option 1
Use AWS CLI to query the instance-id of a stopped instance using its IP or name. For example, to query the instance id for a given instance name:
- shell: aws ec2 describe-instances --filters 'Name=tag:Name,Values={{ inst_name }}' --output text --query 'Reservations[*].Instances[*].InstanceId'
  register: inst_id
Option 2
Upgrade Ansible to version 2.0 ("Over the Hills and Far Away") and use the new ec2_remote_facts module:
- ec2_remote_facts:
    filters:
      instance-state-name: stopped
You should add gather_facts: False to prevent Ansible from trying to SSH into the hosts since they're not running:
- hosts: target
  remote_user: ubuntu
  gather_facts: false
If you need to gather facts after the instances have started up then you can use the setup module to explicitly gather facts after they have booted up.
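A sketch of that pattern (host group and port values follow the question; the task names are illustrative): skip fact gathering at the start, wait for SSH once the instances are up, then gather facts explicitly with the setup module:

```yaml
- hosts: target
  remote_user: ubuntu
  gather_facts: false   # instances may still be stopped at play start
  tasks:
    - name: Wait for SSH once the instance is running
      local_action: wait_for host={{ inventory_hostname }} port=22 state=started
    - name: Gather facts now that the host is reachable
      setup:
```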
Edit: I just realized that the issue is that you're trying to access the ansible_ec2_instance_id fact, which you can't get because the instance is down. You might want to take a look at a custom module called ec2_lookup, which lets you fetch AWS instance IDs even when the instances are down. Using this you can get a list of the instances you're interested in and then start them up.

Best way to launch aws ec2 instances with ansible

I'm trying to create a small webapp infrastructure with Ansible on Amazon AWS, and I want to handle the whole process: launch the instance, configure services, etc. But I can't find a proper tool or module to deal with that from Ansible, mainly the EC2 launch.
Thanks a lot.
This is the short answer to your question; if you want a detailed, fully automated role, please let me know. Thanks.
Prerequisites:
Ansible
The Python boto library
AWS access and secret keys set up in your environment settings (best inside ~/.boto)
To create the EC2 instance(s):
In order to create the EC2 instance, modify these parameters under "vars" inside the ec2_launch.yml file:
region # where you want to launch the instance(s): USA, Australia, Ireland, etc.
count # number of instance(s) you want to create
Once you have set these parameters, run the following command:
ansible-playbook -i hosts ec2_launch.yml
Contents of hosts file:
[local]
localhost
[webserver]
Contents of ec2_launch.yml file:
---
- name: Provision an EC2 Instance
  hosts: local
  connection: local
  gather_facts: False
  tags: provisioning
  # Necessary variables for creating/provisioning the EC2 instance
  vars:
    instance_type: t1.micro
    security_group: webserver   # Change the security group name here
    image: ami-98aa1cf0         # Change the AMI, from which you want to launch the server
    region: us-east-1           # Change the region
    keypair: ansible            # Change the keypair name
    count: 1

  # Tasks that will be used to launch/create the EC2 instance
  tasks:
    - name: Create a security group
      local_action:
        module: ec2_group
        name: "{{ security_group }}"
        description: Security Group for webserver Servers
        region: "{{ region }}"
        rules:
          - proto: tcp
            type: ssh
            from_port: 22
            to_port: 22
            cidr_ip: 0.0.0.0/0
          - proto: tcp
            from_port: 80
            to_port: 80
            cidr_ip: 0.0.0.0/0
        rules_egress:
          - proto: all
            type: all
            cidr_ip: 0.0.0.0/0

    - name: Launch the new EC2 Instance
      local_action: ec2
                    group={{ security_group }}
                    instance_type={{ instance_type }}
                    image={{ image }}
                    wait=true
                    region={{ region }}
                    keypair={{ keypair }}
                    count={{ count }}
      register: ec2

    - name: Add the newly created EC2 instance(s) to the local host group (located inside the directory)
      local_action: lineinfile
                    dest="./hosts"
                    regexp={{ item.public_ip }}
                    insertafter="[webserver]" line={{ item.public_ip }}
      with_items: "{{ ec2.instances }}"

    - name: Wait for SSH to come up
      local_action: wait_for
                    host={{ item.public_ip }}
                    port=22
                    state=started
      with_items: "{{ ec2.instances }}"

    - name: Add tag to Instance(s)
      local_action: ec2_tag resource={{ item.id }} region={{ region }} state=present
      with_items: "{{ ec2.instances }}"
      args:
        tags:
          Name: webserver
As others have said, the cloud modules contain just about all the AWS provisioning support you'd need. That said, Ansible's paradigm makes the most sense once there's an existing SSH-able machine to target and connect to. The instantiation phase, by comparison, essentially asks you to target your local machine and call AWS API endpoints from there.
Like you, I wanted a single-shot command with a graceful transition from EC2 instantiation into configuration. There are suggestions on how to accomplish something like this in the documentation, but they rely on the add_host module to tweak Ansible's idea of the current host inventory, and even then I couldn't find a solution that didn't feel like I was working against, rather than with, the system.
In the end I opted for two distinct playbooks: a provision.yml that uses the ec2, ec2_group, ec2_vol, ec2_eip and route53 modules to ensure I have the "hardware" in place, and then configure.yml, more like a traditional Ansible site.yml, which is able to treat host inventory (static in my case, but dynamic will work well) as a given and do all that good declarative state transitioning.
Both playbooks are idempotent, but it's configure.yml that's meant to be rerun over and over in the long run.
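With that split (the filenames are the ones described above), a full rebuild is just the two playbooks run back to back, while day-to-day changes need only the second command:

```shell
ansible-playbook provision.yml   # ensure the AWS "hardware" exists
ansible-playbook configure.yml   # declarative configuration; safe to rerun alone
```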
The EC2 module was designed precisely for creating and destroying instances.
If you want the "best" way, it's hard to beat CloudFormation, which can be launched from Ansible.