I am working on an Ansible project in which I would like to add an existing EC2 instance, found by its Name tag, to my Auto Scaling group. I can already do this by launching from an AMI or by terminating the old instances, but I am simply looking for a way to attach existing instances to the Auto Scaling group like in the web management console, where I just right-click on the instance, select settings, and attach it to the Auto Scaling group. The code below is all in one file.
Find EC2 instances:
- hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - ec2_remote_facts:
        region: eu-central-1
        filters:
          "tag:Name": Ubuntu_From_AMI
      register: ec2found

    - name: Add found instances to group
      add_host: hostname="{{ item.public_ip_address }}" groups=ec2instances
      with_items: "{{ ec2found.instances }}"
Here is how I am adding the auto-scaling group:
- hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: Add auto-scaling groups.
      ec2_asg:
        name: magento_scaling_group
        load_balancers: 'LB_NAME'
        availability_zones: [ 'eu-central-1a', 'eu-central-1b', 'eu-central-1c' ]
        launch_config_name: "{{ lc.name }}"
        min_size: 0
        max_size: 5
        desired_capacity: 0
        vpc_zone_identifier: [ 'subnet-e712ad8c', 'subnet-e12e8dac', 'subnet-28e91a55' ]
        tags:
          - environment: production
            propagate_at_launch: no
Is it possible? Thank you.
Based on the current list of modules, it appears there is no such functionality. You'll need to create a new module, or just cheat and call the AWS CLI from a normal command task. If you go the route of creating a new module, please do consider submitting it as a PR to the Ansible project so others will benefit from your work.
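For example, a minimal sketch of the CLI route (this assumes the AWS CLI is installed and credentialed on the control machine, and reuses the ec2found result registered above; item.id is the instance ID field that ec2_remote_facts reports):

- name: Attach found instances to the auto-scaling group
  command: >
    aws autoscaling attach-instances
    --instance-ids {{ item.id }}
    --auto-scaling-group-name magento_scaling_group
    --region eu-central-1
  with_items: "{{ ec2found.instances }}"

attach-instances is the same operation the console performs when you attach a running instance to a group.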
The objective is to spin up multiple instances, which can be achieved using count, but I have been given a specific range of private IP addresses and want to assign them to the instances.
Below is my current playbook:
---
- name: Provision an EC2 Instance
  hosts: local
  connection: local
  gather_facts: False
  tags: provisioning

  # Necessary Variables for creating/provisioning the EC2 Instance
  vars:
    instance_type: t2.micro
    security_group: default    # Change the security group name here
    image: ami-a9d276c9        # Change the AMI, from which you want to launch the server
    region: us-west-2          # Change the Region
    keypair: ansible           # Change the keypair name
    ip_addresses:
      - 172.31.1.117/32
      - 172.31.1.118/32
    count: 2

  tasks:
    - name: Launch the new EC2 Instance
      local_action: ec2
                    group={{ security_group }}
                    instance_type={{ instance_type }}
                    image={{ image }}
                    wait=true
                    region={{ region }}
                    keypair={{ keypair }}
                    count={{ count }}
                    vpc_subnet_id=subnet-xxxxxxx
                    # private_ip={{ private_ip }}
      with_items: ip_addresses
      register: ec2

    - name: Wait for SSH to come up
      local_action: wait_for
                    host={{ item.public_ip }}
                    port=22
                    state=started
      with_items: ec2.instances

    - name: Add tag to Instance(s)
      local_action: ec2_tag resource={{ item.id }} region={{ region }} state=present
      with_items: ec2.instances
      args:
        tags:
          Name: ansible

    - name: Update system
      apt: update_cache=yes

    - name: Install Git
      apt:
        name: git
        state: present

    - name: Install Python2.7
      apt:
        name: python=2.7
        state: present

    - name: Install Java
      apt:
        name: openjdk-8-jdk
        state: present
Although this brings up the instances, it is not assigning the intended IP addresses, and I'm getting the following warning:
PLAY [Provision an EC2 Instance] ***********************************************
TASK [Launch the new EC2 Instance] *********************************************
changed: [localhost -> localhost] => (item=172.31.1.117/32)
changed: [localhost -> localhost] => (item=172.31.1.118/32)
[DEPRECATION WARNING]: Skipping task due to undefined attribute, in the future this will be a fatal error.. This feature will be removed in a future release. Deprecation warnings can be
disabled by setting deprecation_warnings=False in ansible.cfg.
Please suggest the best possible way to achieve this.
There are a few problems:
- You are giving count=2, so 2 instances will be launched for every item in the loop.
- Your IP addresses are wrong; you are giving a CIDR instead of an IP.
- You are not using the IP addresses anywhere in your code when launching the instances.

How to fix?
- Use plain IPs:
  ip_addresses:
    - 172.31.1.117
    - 172.31.1.118
- Don't specify count in the ec2 module.
- Loop through the list of IP addresses (there are 2 of them).
- Make sure you use the IP by referencing {{ item }}, like this:
  private_ip={{ item }}
A full corrected task is sketched below.
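Putting that together, the launch task could look something like this (a sketch based on the variables from your playbook; the subnet ID is still the placeholder from the question):

- name: Launch the new EC2 Instance
  local_action: ec2
                group={{ security_group }}
                instance_type={{ instance_type }}
                image={{ image }}
                wait=true
                region={{ region }}
                keypair={{ keypair }}
                vpc_subnet_id=subnet-xxxxxxx
                private_ip={{ item }}
  with_items: ip_addresses
  register: ec2

With two items in ip_addresses and no count, this launches exactly two instances, each with its intended private IP.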
I am attempting to install Apache on an EC2 instance through Ansible. My playbook looks like this:
# Configure and deploy Apache
- hosts: localhost
  connection: local
  remote_user: ec2-user
  gather_facts: false
  roles:
    - ec2_apache
    - apache
The 'ec2_apache' role provisions an EC2 instance, and the first task within apache/main.yml looks like this:
- name: confirm using the latest Apache server
  become: yes
  become_method: sudo
  yum:
    name: httpd
    state: latest
However, I am getting the following error:
"module_stderr": "sudo: a password is required\n"
I did take a look at: How to switch a user per task or set of tasks? but it did not seem to resolve my problem.
Because the configuration of the EC2 instance is in one role and the installation of Apache is in another, did I hork up the security in some way?
The issue you've got is that the playbook running both roles targets localhost, so your apache role is trying to run sudo yum install httpd locally rather than on the target EC2 instance.
As the ec2 module docs example shows you need to use the add_host module to add your new EC2 instance(s) to a group that you can then target with a further play.
So your playbook might look something like this:
# Configure and deploy Apache
- name: provision instance for Apache
hosts: localhost
connection: local
remote_user: ec2-user
gather_facts: false
roles:
- ec2_apache
- name: install Apache
hosts: launched
remote_user: ec2-user
roles:
- apache
And then, as per the example in the ec2 module docs, just do something like this in your ec2_apache role:
- name: Launch instance
  ec2:
    key_name: "{{ keypair }}"
    group: "{{ security_group }}"
    instance_type: "{{ instance_type }}"
    image: "{{ image }}"
    wait: true
    region: "{{ region }}"
    vpc_subnet_id: subnet-29e63245
    assign_public_ip: yes
  register: ec2

- name: Add new instance to host group
  add_host: hostname={{ item.public_ip }} groupname=launched
  with_items: ec2.instances

- name: Wait for SSH to come up
  wait_for: host={{ item.public_dns_name }} port=22 delay=60 timeout=320 state=started
  with_items: ec2.instances
As an aside, you can quickly see that your ec2_apache role is actually pretty generic: you could turn it into a generic ec2_provision role that all sorts of other things could use, helping you re-use your code.
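For instance, the renamed role could then be invoked with per-service parameters, something like this (a hypothetical sketch; the role name and parameters are made up to illustrate the idea):

- name: provision instance for anything
  hosts: localhost
  connection: local
  gather_facts: false
  roles:
    - role: ec2_provision          # hypothetical generic role name
      security_group: webservers   # hypothetical parameters consumed by the role
      instance_type: t2.micro
      image: ami-xxxxxxxx          # placeholder AMI ID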
This is what I did to install Apache. Based on ydaetskcoR's suggestion, all I added was connection: local, which fixed the following problem:
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password,keyboard-interactive).", "unreachable": true}
See the code below:
---
- name: Install Apache and other packages
  hosts: localhost
  become: yes
  connection: local
  gather_facts: false
  tasks:
    - name: Install a list of packages with a list variable
      yum:
        name: "{{ packages }}"
        state: latest
      vars:
        packages:
          - httpd
          - httpd-tools
          - nginx
      register: result
You also have to run the playbook as follows (-K stands for --ask-become-pass):
ansible-playbook -i hosts.ini startapache.yml -K -vvv
Are you sure you are using Ansible correctly, and are you providing a password for sudo on the remote host?
Just use --ask-become-pass when you execute the playbook. You should be prompted for the password.
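For example (assuming your playbook is named apache.yml; substitute your own filename):

ansible-playbook apache.yml --ask-become-pass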
I am spinning up multiple EC2 instances in AWS and installing Cassandra on those instances.
I got stuck at updating the IP addresses of those instances dynamically in the Cassandra files.
I tried using the set_fact module to pass variables between different plays, but it updates all the files with the IP address of the last machine built out of the three EC2 instances.
My use case is to update each file with the IP address of its own EC2 instance.
###########################################################
Here is my playbook which consists of two plays:
#### Play1 - to spin up 3 ec2 instances in AWS ##########
- name: Play1
  hosts: local
  connection: local
  gather_facts: True
  vars:
    key_location: "path to pem file location"
    server_name: dbservers
    private_ip: item.private_ip
  tasks:
    - name: create ec2 instance
      ec2:
        key_name: "{{ my_key_name }}"
        region: us-east-1
        instance_type: t2.micro
        image: ami-8fcee4e5
        wait: yes
        group: "{{ my_security_group_name }}"
        count: 3
        vpc_subnet_id: "{{ my_subnet_id }}"
        instance_tags:
          Name: devops-i-cassandra1-d-1c-common
          Stack: Ansible
          Owner: devops
      register: ec2

    - name: set facts  ## to capture the ip addresses of the ec2 instances, but only the last ip is being captured
      set_fact:
        one_fact={{ item.private_ip }}
      with_items: ec2.instances

    - name: debugging private ip value
      debug: var=one_fact

    - name: Add the newly created EC2 instance(s) to the dbservers group in the inventory file
      local_action: lineinfile
                    dest="/home/admin/hosts"
                    regexp={{ item.private_ip }}
                    insertafter="[dbservers]" line={{ item.private_ip }}
      with_items: ec2.instances

    - name: Create Host Group to login dynamically to EC2 Instance
      add_host:
        hostname={{ item.private_ip }}
        groupname={{ server_name }}
        ansible_ssh_private_key_file={{ key_location }}
        ansible_ssh_user=ec2-user
        ec2_id={{ item.id }}
      with_items: ec2.instances

    - name: Wait for SSH to come up
      local_action: wait_for
                    host={{ item.private_ip }}
                    port=22
                    delay=60
                    timeout=360
                    state=started
      with_items: ec2.instances
#################### Play2 - Installing and Configuring Cassandra on EC2 Instances
- name: Play2
  hosts: dbservers
  remote_user: ec2-user
  sudo: yes
  vars:
    private_ip: "{{ hostvars.localhost.one_fact }}"
  vars_files:
    - ["/home/admin/vars/var.yml"]
  tasks:
    - name: invoke a shell script to install cassandra
      script: /home/admin/cassandra.sh creates=/home/ec2-user/cassandra.sh
    - name: configure cassandra.yaml file
      template: src=/home/admin/cassandra.yaml dest=/etc/dse/cassandra/cassandra.yaml owner=ec2-user group=ec2-user mode=755
Thanks in advance.
With Ansible 2.0+, you can refresh the dynamic inventory in the middle of a playbook with a task like this:
- meta: refresh_inventory
To extend this a bit: if you are having problems with the cache in your playbook, you can use it like this:
- name: Refresh the ec2.py cache
  shell: "./inventory/ec2.py --refresh-cache"
  changed_when: no

- name: Refresh inventory
  meta: refresh_inventory
where ./inventory is the path to your dynamic inventory; please adjust it accordingly.
During the creation of your EC2 instances you added tags to them, which you can now use with the dynamic inventory to configure these instances. Your second play will be like this:
- name: Play2
  hosts: tag_Name_devops-i-cassandra1-d-1c-common
  remote_user: ec2-user
  sudo: yes
  tasks:
    - name: ---------
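To address the per-instance IP problem from the question, note that each host can render its own address from its gathered facts inside the cassandra.yaml template, instead of reading the single shared fact. A minimal template sketch (assuming fact gathering is left enabled, which is the default, and that listen_address and rpc_address are the fields you need to fill):

# cassandra.yaml.j2 - each target host renders its own private IP
listen_address: {{ ansible_default_ipv4.address }}
rpc_address: {{ ansible_default_ipv4.address }}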
Hope this will help you.
I am using the ec2.py dynamic inventory for provisioning with Ansible.
I have placed ec2.py in /etc/ansible/hosts and marked it executable.
I also have the ec2.ini file in /etc/ansible/hosts.
[ec2]
regions = us-west-2
regions_exclude = us-gov-west-1,cn-north-1
destination_variable = public_dns_name
vpc_destination_variable = ip_address
route53 = False
all_instances = True
all_rds_instances = False
cache_path = ~/.ansible/tmp
cache_max_age = 0
nested_groups = False
group_by_instance_id = True
group_by_region = True
group_by_availability_zone = True
group_by_ami_id = True
group_by_instance_type = True
group_by_key_pair = True
group_by_vpc_id = True
group_by_security_group = True
group_by_tag_keys = True
group_by_tag_none = True
group_by_route53_names = True
group_by_rds_engine = True
group_by_rds_parameter_group = True
Above is my ec2.ini file
---
- hosts: localhost
  connection: local
  gather_facts: yes
  vars_files:
    - ../group_vars/dev_vpc
    - ../group_vars/dev_sg
    - ../hosts_vars/ec2_info
  vars:
    instance_type: t2.micro
  tasks:
    - name: Provisioning EC2 instance
      local_action:
        module: ec2
        region: "{{ region }}"
        key_name: "{{ key }}"
        instance_type: "{{ instance_type }}"
        image: "{{ ami_id }}"
        wait: yes
        group_id: ["{{ sg_npm }}", "{{ sg_ssh }}"]
        vpc_subnet_id: "{{ PublicSubnet }}"
        source_dest_check: false
        instance_tags: '{"Name": "EC2", "Environment": "Development"}'
      register: ec2

    - name: associate new EIP for the instance
      local_action:
        module: ec2_eip
        region: "{{ region }}"
        instance_id: "{{ item.id }}"
      with_items: ec2.instances

    - name: Waiting for NPM Server to come-up
      local_action:
        module: wait_for
        host: "{{ ec2 }}"
        state: started
        delay: 5
        timeout: 200

- include: ec2-configure.yml
The configuration playbook, ec2-configure.yml, is as follows:
- name: Configure EC2 server
  hosts: tag_Name_EC2
  user: ec2-user
  sudo: True
  gather_facts: True
  tasks:
    - name: Install nodejs related packages
      yum: name={{ item }} enablerepo=epel state=present
      with_items:
        - nodejs
        - npm
However, when the configuration playbook is called, the second play results in no hosts found.
If I execute ec2-configure.yml on its own while the EC2 server is up and running, then it is able to find it and configure it.
I added the wait_for to make sure that the instance is in a running state before ec2-configure.yml is called.
I would appreciate it if anyone could point out my error. Thanks.
After researching, I came to learn that the dynamic inventory doesn't refresh between playbook calls; it only refreshes if you execute the playbook separately.
However, I was able to resolve the issue by using the add_host command.
- name: Add Server to inventory
  local_action: add_host hostname={{ item.public_ip }} groupname=webserver
  with_items: webserver.instances
The 'Configure EC2 server' play can't find any hosts from the EC2 dynamic inventory because the new instance was added in the first play of the playbook, during the same execution. The group tag_Name_EC2 didn't exist in the inventory when the inventory was read, and thus can't be found.
When you run the same playbook again, 'Configure EC2 server' should find the group.
We have used the following workaround to guide users in this kind of situation.
First, provision the instance:
tasks:
  - name: Provisioning EC2 instance
    local_action:
      module: ec2
      ...
    register: ec2
Then add a new play before ec2-configure.yml. The play uses the ec2 variable that was registered in 'Provisioning EC2 instance', and it will fail and exit the playbook if any instances were launched:
- name: Stop and request a re-run if any instances were launched
  hosts: localhost
  gather_facts: no
  tasks:
    - name: Stop if instances were launched
      fail: msg="Re-run the playbook to load group variables from EC2 dynamic inventory for the just launched instances!"
      when: ec2.changed

- include: ec2-configure.yml
You can also refresh the cache:
ec2.py --refresh-cache
Or, if you're using it as the Ansible hosts file:
/etc/ansible/hosts --refresh-cache
This is probably obvious, but how do you execute an operation against a set of servers in Ansible (this is with the EC2 plugin)?
I can create my instances:
---
- hosts: 127.0.0.1
  connection: local
  tasks:
    - name: Launch instances
      local_action:
        module: ec2
        region: us-west-1
        group: cassandra
        keypair: cassandra
        instance_type: t2.micro
        image: ami-4b6f650e
        count: 1
        wait: yes
      register: cass_ec2
And I can put the instances into a tag:
- name: Add tag to instances
local_action: ec2_tag resource={{ item.id }} region=us-west-1 state=present
with_items: cass_ec2.instances
args:
tags:
Name: cassandra
Now, let's say I want to run an operation on each server:
# This does not work - it runs the command on localhost
- name: TEST - touch file
  file: path=/test.txt state=touch
  with_items: cass_ec2.instances
How do I run the command against the remote instances just created?
For running against just the newly created servers, I use a temporary group name and a second play in the same playbook, something like the following:
- hosts: localhost
  tasks:
    - name: run your ec2 create a server code here
      ...
      register: cass_ec2

    - name: add host to inventory
      add_host: name={{ item.private_ip }} groups=newinstances
      with_items: cass_ec2.instances

- hosts: newinstances
  tasks:
    - name: do some fun stuff on the new instances here
Alternatively, if you have consistently tagged all your servers (with multiple tags if you also have to differentiate between production and development), you are using ec2.py as the dynamic inventory script, and you are running this against all the servers in a second playbook run, then you can easily do something like the following:
- hosts: tag_Name_cassandra
  tasks:
    - name: run your cassandra specific tasks here
Personally, I use a mode tag (tag_mode_production vs tag_mode_development) as well in the above, and force Ansible to only run on servers of a specific type (in your case Name=cassandra) in a specific mode (development). That looks like the following:
- hosts: tag_Name_cassandra:&tag_mode_development
Just make sure you specify the tag name and value correctly - it is case sensitive...
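For example, a complete play using that intersection might look like this (a sketch; the ping task is just a stand-in for your real Cassandra tasks):

- hosts: tag_Name_cassandra:&tag_mode_development
  remote_user: ec2-user
  tasks:
    - name: stand-in for cassandra-specific development tasks
      ping: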
Please use the following playbook pattern to perform both operations (launching EC2 instance(s) and performing certain tasks on them) in a single playbook run.
Here is a working playbook that performs these tasks; it assumes that you have the hosts file in the same directory from which you are running the playbook:
---
- name: Provision an EC2 Instance
  hosts: local
  connection: local
  gather_facts: False
  tags: provisioning

  # Necessary Variables for creating/provisioning the EC2 Instance
  vars:
    instance_type: t1.micro
    security_group: cassandra
    image: ami-4b6f650e
    region: us-west-1
    keypair: cassandra
    count: 1

  # Task that will be used to Launch/Create an EC2 Instance
  tasks:
    - name: Launch the new EC2 Instance
      local_action: ec2
                    group={{ security_group }}
                    instance_type={{ instance_type }}
                    image={{ image }}
                    wait=true
                    region={{ region }}
                    keypair={{ keypair }}
                    count={{ count }}
      register: ec2

    - name: Add the newly created EC2 instance(s) to the local host group (located inside the directory)
      local_action: lineinfile
                    dest="./hosts"
                    regexp={{ item.public_ip }}
                    insertafter="[cassandra]" line={{ item.public_ip }}
      with_items: ec2.instances

    - name: Wait for SSH to come up
      local_action: wait_for
                    host={{ item.public_ip }}
                    port=22
                    state=started
      with_items: ec2.instances

    - name: Add tag to Instance(s)
      local_action: ec2_tag resource={{ item.id }} region={{ region }} state=present
      with_items: ec2.instances
      args:
        tags:
          Name: cassandra

    - name: SSH to the EC2 Instance(s)
      add_host: hostname={{ item.public_ip }} groupname=cassandra
      with_items: ec2.instances

- name: Install these things on Newly created EC2 Instance(s)
  hosts: cassandra
  sudo: True
  remote_user: ubuntu   # Please change the username here, like root or ec2-user, as I am supposing that you are launching an Ubuntu instance
  gather_facts: True

  # Run these tasks
  tasks:
    - name: TEST - touch file
      file: path=/test.txt state=touch
Your hosts file should look like this:
[local]
localhost
[cassandra]
Now you can run this playbook like this:
ansible-playbook -i hosts ec2_launch.yml