I was just trying some things out with Ansible, but I'd really appreciate it if anyone could reproduce this or at least explain it to me.
I'm trying to deploy instances on AWS with Ansible (2.9.6), running from a virtual machine deployed with Vagrant on a Windows 10 host.
I wrote this playbook:
---
- name: Configuring the EC2 instance
  hosts: localhost
  connection: local
  vars:
    count: '{{ count }}'
    volumes:
      - device_name: /dev/sda1
        volume_size: '{{ volume_size }}'
  tasks:
    - name: Launch Instances
      ec2:
        instance_type: '{{ instance_type }}'
        image: '{{ ami }}'
        region: '{{ region }}'
        key_name: '{{ pem }}'
        count: '{{ count }}'
        group_id: '{{ sec_grp }}'
        wait: true
        volumes: '{{ volumes }}'
      register: ec2
    - name: Associating after allocating eip
      ec2_eip:
        in_vpc: yes
        reuse_existing_ip_allowed: yes
        state: present
        region: '{{ region }}'
        device_id: '{{ ec2.instance_ids[0] }}'
      register: elastic_ip
    - name: Adding tags to the instance
      local_action:
        module: ec2_tag
        region: '{{ region }}'
        resource: '{{ item.id }}'
        state: present
      with_items: '{{ ec2.instances }}'
      args:
        tags:
          Name: '{{ tag_name }}'
          Env: '{{ tag_env }}'
          Type: AppService
      register: tag
I execute the following command with --extra-vars:
ansible-playbook ec2.yml --extra-vars instance_type=t2.micro -e ami=ami-04d5cc9b88f9d1d39 -e region=eu-west-1 -e pem=keyname -e count=1 -e sec_grp=sg-xx -e volume_size=10 -e tag_name=prueba -e tag_env=dev -vvv
ami-04d5cc9b88f9d1d39 = Amazon Linux 2 AMI (HVM), SSD Volume Type for eu-west-1
The playbook finishes running successfully, but on my AWS console I see the instance boot and change to the running state (initializing), then suddenly change to terminating and finally to the stopped state.
On my terminal the playbook runs fine and I get:
PLAY RECAP **************************
localhost : ok=4 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
I spent the whole day trying to fix this, changing state: running and adding new tasks to make the instance explicitly running. In the end, the problem was the AMI: I changed it to an Ubuntu AMI (ami-035966e8adab4aaad) in the same region and it works fine; it's still running.
I have used this AMI (Amazon Linux 2) before with CloudFormation and Terraform and nothing like this ever happened; it always booted fine.
So, if anybody has any idea why this is happening, or if there is something I'm missing, I'd really like to know.
Take care!
I was able to reproduce this issue. I tried everything you mentioned: I copied the yml file you provided and executed it with the exact same values (except the key pair and security group). The EC2 instance goes into the running state and then stops.
To isolate the issue, I did the same thing in another region (ap-south-1) with the same Amazon Linux 2 AMI, and the same error was reproduced.
This happens because the Amazon Linux 2 AMI requires the root EBS volume to be mounted at /dev/xvda, not at /dev/sda1; /dev/sda1 is used when the AMI is Ubuntu.
Since the AMI and the mount path for the root EBS volume are not compatible, the EC2 instance goes into the stopped state after initializing.
Refer to this SO issue: AWS EC2 Instance stopping right after start using boto3
Update the volumes part of the yml and it should work fine.
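Concretely, for the Amazon Linux 2 AMI the vars section would look something like this (a sketch: only the device name changes, everything else stays as in your playbook):

vars:
  volumes:
    - device_name: /dev/xvda
      volume_size: '{{ volume_size }}'

With this change the root volume definition matches the AMI's expected root device, so the instance should stay in the running state.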
I have used Ansible to create 1 AWS EC2 instance using the examples in the Ansible ec2 documentation. I can successfully create the instance with a tag. Then I temporarily add it to my local inventory group using add_host.
After doing this, I am having trouble when I try to configure the newly created instance. In my Ansible play, I would like to specify the instance by its tag name. eg. hosts: <tag_name_here>, but I am getting an error.
Here is what I have done so far:
My directory layout is
inventory/
staging/
hosts
group_vars/
all/
all.yml
site.yml
My inventory/staging/hosts file is
[local]
localhost ansible_connection=local ansible_python_interpreter=/home/localuser/ansible_ec2/.venv/bin/python
My inventory/staging/group_vars/all/all.yml file is
---
ami_image: xxxxx
subnet_id: xxxx
region: xxxxx
launched_tag: tag_Name_NginxDemo
Here is my Ansible playbook site.yml
- name: Launch instance
  hosts: localhost
  gather_facts: no
  tasks:
    - ec2:
        key_name: key-nginx
        group: web_sg
        instance_type: t2.micro
        image: "{{ ami_image }}"
        wait: true
        region: "{{ region }}"
        vpc_subnet_id: "{{ subnet_id }}"
        assign_public_ip: yes
        instance_tags:
          Name: NginxDemo
        exact_count: 1
        count_tag:
          Name: NginxDemo
      register: ec2
    - name: Add EC2 instance to inventory group
      add_host:
        hostname: "{{ item.public_ip }}"
        groupname: tag_Name_NginxDemo
        ansible_user: centos_user
        ansible_become: yes
      with_items: "{{ ec2.instances }}"

- name: Configure EC2 instance in launched group
  hosts: tag_Name_NginxDemo
  become: True
  gather_facts: no
  tasks:
    - ping:
I run this playbook with
$ cd /home/localuser/ansible_ec2
$ source .venv/bin/activate
$ ansible-playbook -i inventory/staging site.yml -vvv
and this creates the EC2 instance - the 1st play works correctly. However, the 2nd play gives the following error
TASK [.....] ******************************************************************
The authenticity of host 'xx.xxx.xxx.xx (xx.xxx.xxx.xx)' can't be established.
ECDSA key fingerprint is XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX.
Are you sure you want to continue connecting (yes/no)? yes
fatal: [xx.xxx.xxx.xx]: FAILED! => {"changed": false, "module_stderr":
"Shared connection to xx.xxx.xxx.xx closed.\r\n", "module_stdout": "/bin/sh:
1: /usr/bin/python: not found\r\n", "msg": "MODULE FAILURE", "rc": 127}
I followed the instructions from
this SO question to create the task with add_host, and from
here to set gather_facts: False, but this still does not allow the play to run correctly.
How can I target the EC2 host using the tag name?
EDIT:
Additional info
This is the only playbook I have run to this point. I see this message requires Python, but I cannot install Python on the instance because I cannot connect to it in my play Configure EC2 instance in launched group. If I could make that connection, then I could install Python (if this is the problem). Though, I'm not sure how to connect to the instance.
EDIT 2:
Here is my Python info on the localhost where I am running Ansible
I am running Ansible inside a Python venv.
Here is my python inside the venv
$ python --version
Python 2.7.15rc1
$ which python
~/ansible_ec2/.venv/bin/python
Here are my details about Ansible that I installed inside the Python venv
ansible 2.6.2
config file = /home/localuser/ansible_ec2/ansible.cfg
configured module search path = [u'/home/localuser/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/localuser/ansible_ec2/.venv/local/lib/python2.7/site-packages/ansible
executable location = /home/localuser/ansible_ec2/.venv/bin/ansible
python version = 2.7.15rc1 (default, xxxx, xxxx) [GCC 7.3.0]
Ok, so after a lot of searching, I found one possible workaround here. Basically, this workaround uses the lineinfile module and adds the new EC2 instance details to the hosts file permanently, not just for the in-memory plays following the add_host task. I followed this suggestion very closely and the approach worked for me. I did not need to use the add_host module.
EDIT:
The task I added with the lineinfile module was
- name: Add EC2 instance to inventory group
  lineinfile: line="{{ item.public_ip }} ansible_python_interpreter=/usr/bin/python3" insertafter=EOF dest=./inventory/staging/hosts
  with_items: "{{ ec2.instances }}"
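For what it's worth, the same interpreter setting can also be passed through add_host so the fix stays in memory, without writing to the hosts file (a sketch, assuming the instance actually has Python 3 at /usr/bin/python3):

- name: Add EC2 instance to inventory group
  add_host:
    hostname: "{{ item.public_ip }}"
    groupname: tag_Name_NginxDemo
    ansible_user: centos_user
    ansible_python_interpreter: /usr/bin/python3
  with_items: "{{ ec2.instances }}"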
I followed these 3 guides:
http://docs.ansible.com/ansible/guide_aws.html
http://docs.ansible.com/ansible/ec2_module.html
https://gist.github.com/tristanfisher/e5a306144a637dc739e7
and I wrote this Ansible play
---
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - include_vars: aws_credentials.yml
    - name: Creating EC2 Ubuntu instance
      ec2:
        instance_type: t1.micro
        image: ami-86e0ffe7
        region: us-west-2
        key_name: my-aws-key
        zone: us-west-2a
        vpc_subnet_id: subnet-04199d61
        group_id: sg-cf6736aa
        assign_public_ip: yes
        count: 1
        wait: true
        volumes:
          - device_name: /dev/sda1
            volume_type: gp2
            volume_size: 10
        instance_tags:
          Name: ansible-test
          Project: test
          Ansible: manageable
      register: ec2
Then I ran
ansible-playbook create-ec2.yml -v --private-key ~/.ssh/my-key --vault-password-file ~/.password/to_ansible_vault
and got this message:
PLAY [localhost] ***************************************************************
TASK [include_vars] ************************************************************
ok: [localhost] => {"ansible_facts": {"ec2_access_key": "decrypted_acces_key_XXXXX", "ec2_secret_key": "decrypted_secret_key_XXXXX"}, "changed": false}
TASK [Creating EC2 Ubuntu instance] ********************************************
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "No handler was ready to authenticate. 1 handlers were checked. ['HmacAuthV4Handler'] Check your credentials"}
NO MORE HOSTS LEFT *************************************************************
[WARNING]: Could not create retry file 'create-ec2.retry'. [Errno 2] No such file or directory: ''
PLAY RECAP *********************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1
When I ran ansible-vault view aws_credentials.yml --vault-password-file ~/.password/to_ansible_vault, I got the readable content of the encrypted aws_credentials.yml,
something like this:
---
ec2_access_key: "XXXXX"
ec2_secret_key: "XXXXX"
Also, when I used a plain (unencrypted) aws_credentials.yml, it didn't work either. Only when I export my credentials does it work without any failure.
Could somebody help me: how can I write a playbook that creates an EC2 instance with credentials stored in an encrypted file?
I think you should supply your keys directly to the ec2 module in this case.
Try this:
- name: Creating EC2 Ubuntu instance
  ec2:
    aws_access_key: "{{ ec2_access_key }}"
    aws_secret_key: "{{ ec2_secret_key }}"
    instance_type: t1.micro
    image: ami-86e0ffe7
    region: us-west-2
    ...
The module's code suggests that it only checks its arguments and environment variables, not host variables.
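Put together with the include_vars task from your question, the whole play might look like this (a sketch; the remaining module options stay as in your original play):

---
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    # Loads the vaulted ec2_access_key / ec2_secret_key variables
    - include_vars: aws_credentials.yml
    # Passes them explicitly as module arguments
    - name: Creating EC2 Ubuntu instance
      ec2:
        aws_access_key: "{{ ec2_access_key }}"
        aws_secret_key: "{{ ec2_secret_key }}"
        instance_type: t1.micro
        image: ami-86e0ffe7
        region: us-west-2
      register: ec2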
Also, you can export your AWS API keys as OS environment variables:
export AWS_ACCESS_KEY=XXXXXXX
In that case, in the Ansible scenario you need to set:
- name: Creating EC2 Ubuntu instance
  ec2:
    aws_access_key: "{{ lookup('env', 'AWS_ACCESS_KEY') }}"
    aws_secret_key: "{{ lookup('env', 'AWS_SECRET_KEY') }}"
    instance_type: t1.micro
    image: ami-86e0ffe7
    region: us-west-2
I am trying to learn Ansible for all my AWS work. The first task I want to do is create a basic EC2 instance with mounted volumes.
I wrote the playbook according to the Ansible docs, but it doesn't really work. My playbook:
# The play operates on the local (Ansible control) machine.
- name: Create a basic EC2 instance v.1.1.0 2015-10-14
  hosts: localhost
  connection: local
  gather_facts: false
  # Vars.
  vars:
    hostname: Test_By_Ansible
    keypair: MyKey
    instance_type: t2.micro
    security_group: my security group
    image: ami-d05e75b8          # Ubuntu Server 14.04 LTS (HVM)
    region: us-east-1            # US East (N. Virginia)
    vpc_subnet_id: subnet-b387e763
    sudo: True
    locale: ru_RU.UTF-8
  # Launch instance. Register the output.
  tasks:
    - name: Launch instance
      ec2:
        key_name: "{{ keypair }}"
        group: "{{ security_group }}"
        instance_type: "{{ instance_type }}"
        image: "{{ image }}"
        region: "{{ region }}"
        vpc_subnet_id: "{{ vpc_subnet_id }}"
        assign_public_ip: yes
        wait: true
        wait_timeout: 500
        count: 1                 # number of instances to launch
        instance_tags:
          Name: "{{ hostname }}"
          os: Ubuntu
          type: WebService
      register: ec2
    # Create and attach a volumes.
    - name: Create and attach a volumes
      ec2_vol:
        instance: "{{ item.id }}"
        name: my_existing_volume_Name_tag
        volume_size: 1           # in GB
        volume_type: gp2
        device_name: /dev/sdf
        with_items: ec2.instances
      register: ec2_vol
    # Configure mount points.
    - name: Configure mount points - mount device by name
      mount: name=/system src=/dev/sda1 fstype=ext4 opts='defaults nofail 0 2' state=present
      mount: name=/data src=/dev/xvdf fstype=ext4 opts='defaults nofail 0 2' state=present
But this playbook crashes on the volume task with this error:
fatal: [localhost] => One or more undefined variables: 'item' is undefined
How can I resolve this?
You seem to have copy/pasted a lot of stuff all at once, and rather than needing a specific bit of information that SO can help you with, you need to go off and learn the basics of Ansible so you can think through all the individual bits that don't match up in this playbook.
Let's look at the specific error that you're hitting - item is undefined. It's triggered here:
# Create and attach a volumes.
- name: Create and attach a volumes
  ec2_vol:
    instance: "{{ item.id }}"
    name: my_existing_volume_Name_tag
    volume_size: 1           # in GB
    volume_type: gp2
    device_name: /dev/sdf
    with_items: ec2.instances
  register: ec2_vol
This task is meant to be looping through every item in a list, and in this case the list is ec2.instances. It isn't, because with_items should be de-indented so it sits level with register.
If you had a list of instances (which you don't, as far as I can see), it'd use the id for each one in that {{ item.id }} line... but then probably throw an error, because I don't think they'd all be allowed to have the same name.
Go forth and study, and you can figure out this kind of detail.
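For reference, the corrected task would look something like this (a sketch: with_items moved out of the module arguments so it sits level with register; note that every volume would still get the same Name tag, which you'd also want to fix):

- name: Create and attach a volume per instance
  ec2_vol:
    instance: "{{ item.id }}"
    volume_size: 1           # in GB
    volume_type: gp2
    device_name: /dev/sdf
  with_items: ec2.instances
  register: ec2_vol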
I am using the ec2.py dynamic inventory for provisioning with Ansible.
I have placed ec2.py in /etc/ansible/hosts and marked it executable.
I also have the ec2.ini file in /etc/ansible/hosts.
[ec2]
regions = us-west-2
regions_exclude = us-gov-west-1,cn-north-1
destination_variable = public_dns_name
vpc_destination_variable = ip_address
route53 = False
all_instances = True
all_rds_instances = False
cache_path = ~/.ansible/tmp
cache_max_age = 0
nested_groups = False
group_by_instance_id = True
group_by_region = True
group_by_availability_zone = True
group_by_ami_id = True
group_by_instance_type = True
group_by_key_pair = True
group_by_vpc_id = True
group_by_security_group = True
group_by_tag_keys = True
group_by_tag_none = True
group_by_route53_names = True
group_by_rds_engine = True
group_by_rds_parameter_group = True
Above is my ec2.ini file
---
- hosts: localhost
  connection: local
  gather_facts: yes
  vars_files:
    - ../group_vars/dev_vpc
    - ../group_vars/dev_sg
    - ../hosts_vars/ec2_info
  vars:
    instance_type: t2.micro
  tasks:
    - name: Provisioning EC2 instance
      local_action:
        module: ec2
        region: "{{ region }}"
        key_name: "{{ key }}"
        instance_type: "{{ instance_type }}"
        image: "{{ ami_id }}"
        wait: yes
        group_id: ["{{ sg_npm }}", "{{ sg_ssh }}"]
        vpc_subnet_id: "{{ PublicSubnet }}"
        source_dest_check: false
        instance_tags: '{"Name": "EC2", "Environment": "Development"}'
      register: ec2
    - name: associate new EIP for the instance
      local_action:
        module: ec2_eip
        region: "{{ region }}"
        instance_id: "{{ item.id }}"
      with_items: ec2.instances
    - name: Waiting for NPM Server to come-up
      local_action:
        module: wait_for
        host: "{{ ec2 }}"
        state: started
        delay: 5
        timeout: 200
- include: ec2-configure.yml
The configuration script is as follows:
- name: Configure EC2 server
  hosts: tag_Name_EC2
  user: ec2-user
  sudo: True
  gather_facts: True
  tasks:
    - name: Install nodejs related packages
      yum: name={{ item }} enablerepo=epel state=present
      with_items:
        - nodejs
        - npm
However, when the configure script is called, the second play results in no hosts found.
If I execute ec2-configure.yml on its own while the EC2 server is up and running, it is able to find and configure it.
I added the wait_for to make sure that the instance is in the running state before ec2-configure.yml is called.
I would appreciate it if anyone could point out my error. Thanks.
After researching, I came to know that the dynamic inventory doesn't refresh between playbook calls; it only refreshes if you execute the playbook separately.
However, I was able to resolve the issue by using the add_host command.
- name: Add Server to inventory
  local_action: add_host hostname={{ item.public_ip }} groupname=webserver
  with_items: webserver.instances
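A second play in the same playbook can then target the in-memory webserver group (a sketch; the play name is illustrative and the remote user depends on your AMI):

- name: Configure the newly added servers
  hosts: webserver
  remote_user: ec2-user
  become: yes
  tasks:
    - ping: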
With Ansible 2.0+, you can refresh the dynamic inventory in the middle of the playbook with a task like this:
- meta: refresh_inventory
To extend this a bit: if you are having problems with the cache in your playbook, you can use it like this:
- name: Refresh the ec2.py cache
  shell: "./inventory/ec2.py --refresh-cache"
  changed_when: no
- name: Refresh inventory
  meta: refresh_inventory
where ./inventory is the path to your dynamic inventory; please adjust it accordingly.
Hope this helps.
The Configure EC2 server play can't find any hosts from the EC2 dynamic inventory because the new instance was added in the first play of the playbook, during the same execution. The group tag_Name_EC2 didn't exist when the inventory was read and thus can't be found.
When you run the same playbook again, Configure EC2 server should find the group.
We have used the following workaround to guide users in this kind of situation.
First, provision the instance:
tasks:
  - name: Provisioning EC2 instance
    local_action:
      module: ec2
      ...
    register: ec2
Then add a new play before ec2-configure.yml. The play uses the ec2 variable that was registered in Provisioning EC2 instance, and will fail and exit the playbook if any instances were launched:
- name: Stop and request a re-run if any instances were launched
  hosts: localhost
  gather_facts: no
  tasks:
    - name: Stop if instances were launched
      fail: msg="Re-run the playbook to load group variables from EC2 dynamic inventory for the just launched instances!"
      when: ec2.changed
- include: ec2-configure.yml
You can also refresh the cache:
ec2.py --refresh-cache
Or, if you're using it as the Ansible hosts file:
/etc/ansible/hosts --refresh-cache
This is probably obvious, but how do you execute an operation against a set of servers in Ansible (this is with the EC2 plugin)?
I can create my instances:
---
- hosts: 127.0.0.1
  connection: local
  tasks:
    - name: Launch instances
      local_action:
        module: ec2
        region: us-west-1
        group: cassandra
        keypair: cassandra
        instance_type: t2.micro
        image: ami-4b6f650e
        count: 1
        wait: yes
      register: cass_ec2
And I can put the instances into a tag:
- name: Add tag to instances
local_action: ec2_tag resource={{ item.id }} region=us-west-1 state=present
with_items: cass_ec2.instances
args:
tags:
Name: cassandra
Now, let's say I want to run an operation on each server:
# This does not work - it runs the command on localhost
- name: TEST - touch file
  file: path=/test.txt state=touch
  with_items: cass_ec2.instances
How to run the command against the remote instances just created?
For running against just the newly created servers, I use a temporary group name and do something like the following by using a second play in the same playbook:
- hosts: localhost
  tasks:
    - name: run your ec2 create a server code here
      ...
      register: cass_ec2
    - name: add host to inventory
      add_host: name={{ item.private_ip }} groups=newinstances
      with_items: cass_ec2.instances

- hosts: newinstances
  tasks:
    - name: do some fun stuff on the new instances here
Alternatively, if you have consistently tagged all your servers (with multiple tags if you also have to differentiate between production and development), you are using ec2.py as the dynamic inventory script, and you are running this against all the servers in a second playbook run, then you can easily do something like the following:
- hosts: tag_Name_cassandra
  tasks:
    - name: run your cassandra specific tasks here
Personally I use a mode tag (tag_mode_production vs tag_mode_development) as well in the above and force Ansible to only run on servers of a specific type (in your case Name=cassandra) in a specific mode (development). This looks like the following:
- hosts: tag_Name_cassandra:&tag_mode_development
Just make sure you specify the tag name and value correctly - it is case sensitive...
Please use the following playbook pattern to perform both operations (launching EC2 instance(s) and performing certain tasks on them) in a single playbook at the same time.
Here is a working playbook that performs these tasks; it assumes you have a hosts file in the same directory where you are running the playbook:
---
- name: Provision an EC2 Instance
  hosts: local
  connection: local
  gather_facts: False
  tags: provisioning
  # Necessary Variables for creating/provisioning the EC2 Instance
  vars:
    instance_type: t1.micro
    security_group: cassandra
    image: ami-4b6f650e
    region: us-west-1
    keypair: cassandra
    count: 1
  # Task that will be used to Launch/Create an EC2 Instance
  tasks:
    - name: Launch the new EC2 Instance
      local_action: ec2
                    group={{ security_group }}
                    instance_type={{ instance_type }}
                    image={{ image }}
                    wait=true
                    region={{ region }}
                    keypair={{ keypair }}
                    count={{ count }}
      register: ec2
    - name: Add the newly created EC2 instance(s) to the local host group (located inside the directory)
      local_action: lineinfile
                    dest="./hosts"
                    regexp={{ item.public_ip }}
                    insertafter="[cassandra]" line={{ item.public_ip }}
      with_items: ec2.instances
    - name: Wait for SSH to come up
      local_action: wait_for
                    host={{ item.public_ip }}
                    port=22
                    state=started
      with_items: ec2.instances
    - name: Add tag to Instance(s)
      local_action: ec2_tag resource={{ item.id }} region={{ region }} state=present
      with_items: ec2.instances
      args:
        tags:
          Name: cassandra
    - name: SSH to the EC2 Instance(s)
      add_host: hostname={{ item.public_ip }} groupname=cassandra
      with_items: ec2.instances

- name: Install these things on Newly created EC2 Instance(s)
  hosts: cassandra
  sudo: True
  remote_user: ubuntu   # Please change the username here (e.g. root or ec2-user), as I am supposing that you are launching an Ubuntu instance
  gather_facts: True
  # Run these tasks
  tasks:
    - name: TEST - touch file
      file: path=/test.txt state=touch
Your hosts file should look like this:
[local]
localhost
[cassandra]
Now you can run this playbook like this:
ansible-playbook -i hosts ec2_launch.yml