Error when taking snapshot of volume - amazon-web-services

I want to create a snapshot of a volume but I am getting this error:
ERROR: volume_id is not a legal parameter in an Ansible task or handler
Here is my YAML file:
- hosts: localhost
  connection: local
  gather_facts: False
  tasks:
    - local_action:
        module: ec2_snapshot
        volume_id: vol-3bca8f4d
        description: snapshot of volume
I am executing it with ansible-playbook.
My Ansible version is 1.6.5.
What is wrong with this YAML file?

When you use local_action:, you put the module name as the first parameter (without "module:") and continue with each parameter as if you were writing it the usual way.
- local_action: module param1=first param2=second
So in your case:
- local_action: ec2_snapshot volume_id=vol-3bca8f4d description="snapshot of volume"
Or, if you prefer multiline:
- local_action:
    ec2_snapshot
    volume_id=vol-3bca8f4d
    description="snapshot of volume"
If you don't want to remember the difference in syntax, you can just do:
- ec2_snapshot: volume_id=vol-3bca8f4d description="snapshot of volume"
  delegate_to: 127.0.0.1
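Putting it together, a minimal corrected playbook for the question above might look like this (a sketch only, reusing the question's volume ID and the key=value form):
- hosts: localhost
  connection: local
  gather_facts: False
  tasks:
    # The EC2 API call runs on the control machine, hence local_action
    - name: Snapshot the volume
      local_action: ec2_snapshot volume_id=vol-3bca8f4d description="snapshot of volume"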

Related

Ansible add EC2 with add_host but connection gives missing Python error

I have used Ansible to create 1 AWS EC2 instance using the examples in the Ansible ec2 documentation. I can successfully create the instance with a tag. Then I temporarily add it to my local inventory group using add_host.
After doing this, I am having trouble when I try to configure the newly created instance. In my Ansible play, I would like to target the instance by its tag name, e.g. hosts: <tag_name_here>, but I am getting an error.
Here is what I have done so far:
My directory layout is
inventory/
  staging/
    hosts
    group_vars/
      all/
        all.yml
site.yml
My inventory/staging/hosts file is
[local]
localhost ansible_connection=local ansible_python_interpreter=/home/localuser/ansible_ec2/.venv/bin/python
My inventory/staging/group_vars/all/all.yml file is
---
ami_image: xxxxx
subnet_id: xxxx
region: xxxxx
launched_tag: tag_Name_NginxDemo
Here is my Ansible playbook site.yml
- name: Launch instance
  hosts: localhost
  gather_facts: no
  tasks:
    - ec2:
        key_name: key-nginx
        group: web_sg
        instance_type: t2.micro
        image: "{{ ami_image }}"
        wait: true
        region: "{{ region }}"
        vpc_subnet_id: "{{ subnet_id }}"
        assign_public_ip: yes
        instance_tags:
          Name: NginxDemo
        exact_count: 1
        count_tag:
          Name: NginxDemo
      register: ec2

    - name: Add EC2 instance to inventory group
      add_host:
        hostname: "{{ item.public_ip }}"
        groupname: tag_Name_NginxDemo
        ansible_user: centos_user
        ansible_become: yes
      with_items: "{{ ec2.instances }}"

- name: Configure EC2 instance in launched group
  hosts: tag_Name_NginxDemo
  become: True
  gather_facts: no
  tasks:
    - ping:
I run this playbook with
$ cd /home/localuser/ansible_ec2
$ source .venv/bin/activate
$ ansible-playbook -i inventory/staging site.yml -vvv
and this creates the EC2 instance; the first play works correctly. However, the second play gives the following error:
TASK [.....] ******************************************************************
The authenticity of host 'xx.xxx.xxx.xx (xx.xxx.xxx.xx)' can't be established.
ECDSA key fingerprint is XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX.
Are you sure you want to continue connecting (yes/no)? yes
fatal: [xx.xxx.xxx.xx]: FAILED! => {"changed": false, "module_stderr":
"Shared connection to xx.xxx.xxx.xx closed.\r\n", "module_stdout": "/bin/sh:
1: /usr/bin/python: not found\r\n", "msg": "MODULE FAILURE", "rc": 127}
I followed the instructions from this SO question to create the task with add_host, and from here to set gather_facts: False, but this still does not allow the play to run correctly.
How can I target the EC2 host using the tag name?
EDIT:
Additional info
This is the only playbook I have run to this point. I see the message indicates Python is required, but I cannot install Python on the instance because I cannot connect to it in my play Configure EC2 instance in launched group. If I could make that connection, I could install Python (if that is the problem), but I'm not sure how to connect to the instance.
EDIT 2:
Here is my Python info on the localhost where I am running Ansible
I am running Ansible inside a Python venv.
Here is my python inside the venv
$ python --version
Python 2.7.15rc1
$ which python
~/ansible_ec2/.venv/bin/python
Here are my details about Ansible that I installed inside the Python venv
ansible 2.6.2
config file = /home/localuser/ansible_ec2/ansible.cfg
configured module search path = [u'/home/localuser/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/localuser/ansible_ec2/.venv/local/lib/python2.7/site-packages/ansible
executable location = /home/localuser/ansible_ec2/.venv/bin/ansible
python version = 2.7.15rc1 (default, xxxx, xxxx) [GCC 7.3.0]
OK, so after a lot of searching, I found one possible workaround here. Basically, this workaround uses the lineinfile module and adds the new EC2 instance details to the hosts file permanently, not just for the in-memory plays following the add_host task. I followed this suggestion very closely and the approach worked for me. I did not need to use the add_host module.
EDIT:
The task I added with the lineinfile module was:
- name: Add EC2 instance to inventory group
  lineinfile: line="{{ item.public_ip }} ansible_python_interpreter=/usr/bin/python3" insertafter=EOF dest=./inventory/staging/hosts
  with_items: "{{ ec2.instances }}"
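For reference, the same interpreter fix can also be applied to the original in-memory add_host approach instead of writing to the hosts file. A sketch (assuming, as in the lineinfile line above, that Python 3 lives at /usr/bin/python3 on the launched instance):
- name: Add EC2 instance to inventory group
  add_host:
    hostname: "{{ item.public_ip }}"
    groupname: tag_Name_NginxDemo
    ansible_user: centos_user
    ansible_become: yes
    # Extra keys passed to add_host become host variables for the new host
    ansible_python_interpreter: /usr/bin/python3
  with_items: "{{ ec2.instances }}"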

Getting error in the first line of YAML file

This is my YAML script, which I want to run to create an EC2 instance, but I'm getting an error on the first line no matter what I put there. I have checked the syntax and it is fine.
error:
"ERROR! A malformed block was encountered.
The error appears to have been in '/root/aws-create/ec2c.yml': line 2, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
-
name: Stage instance(s)
^ here"
-
  name: Stage instance(s)
  hosts: aws
  connection: ssh
  remote_user: ec2-user
  become: yes
  gather_facts: false
  vars:
    keypair: newrhel_2
    instance_type: t2.micro
    security_group: default
    image: ami-b55a51cc
    aws_region: us-west-2
    aws_access_key: xxxxxxxxxxxxxxxxxxxxxxxx
    aws_secret_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxx
  tasks:
    ec2:
      aws_access_key={{aws_access_key}}
      aws_secret_key={{aws_secret_key}}
      keypair={{keypair}}
      group={{security_group}}
      instance_type={{instance_type}}
      image={{image}}
      region={{aws_region}}
      wait=true
      count=1
tasks should contain a list, not a dictionary (and because you use the key=value Ansible notation, ec2 is not valid as a YAML dictionary here). You need to add a hyphen before ec2:
tasks:
  - ec2:
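Applied to the playbook above, the tasks section would then look like this (a sketch reusing the question's own variables and key=value style):
tasks:
  - ec2:
      aws_access_key={{ aws_access_key }}
      aws_secret_key={{ aws_secret_key }}
      keypair={{ keypair }}
      group={{ security_group }}
      instance_type={{ instance_type }}
      image={{ image }}
      region={{ aws_region }}
      wait=true
      count=1
The fully structured YAML form (one key: value per line, with quoted "{{ ... }}" expressions) works just as well and avoids mixing the two notations.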

Installing Apache through Ansible

I am attempting to install Apache on an EC2 instance through Ansible. My playbook looks like this:
# Configure and deploy Apache
- hosts: localhost
  connection: local
  remote_user: ec2-user
  gather_facts: false
  roles:
    - ec2_apache
    - apache
The 'ec2_apache' role provisions an EC2 instance and the first task within the apache/main.yml looks like this:
- name: confirm using the latest Apache server
  become: yes
  become_method: sudo
  yum:
    name: httpd
    state: latest
However, I am getting the following error:
"module_stderr": "sudo: a password is required\n"
I did take a look at: How to switch a user per task or set of tasks? but it did not seem to resolve my problem.
Because the configuration of the Ec2 instance is in one role and the installation of Apache is in another, did I hork up the security in some way?
The issue is that the playbook running both roles targets localhost, so your Apache role is trying to run sudo yum install httpd locally rather than on the target EC2 instance.
As the ec2 module docs example shows you need to use the add_host module to add your new EC2 instance(s) to a group that you can then target with a further play.
So your playbook might look something like this:
# Configure and deploy Apache
- name: provision instance for Apache
hosts: localhost
connection: local
remote_user: ec2-user
gather_facts: false
roles:
- ec2_apache
- name: install Apache
hosts: launched
remote_user: ec2-user
roles:
- apache
And then, as per the example in the ec2 module docs, just do something like this in your ec2_apache role:
- name: Launch instance
  ec2:
    key_name: "{{ keypair }}"
    group: "{{ security_group }}"
    instance_type: "{{ instance_type }}"
    image: "{{ image }}"
    wait: true
    region: "{{ region }}"
    vpc_subnet_id: subnet-29e63245
    assign_public_ip: yes
  register: ec2

- name: Add new instance to host group
  add_host: hostname={{ item.public_ip }} groupname=launched
  with_items: ec2.instances

- name: Wait for SSH to come up
  wait_for: host={{ item.public_dns_name }} port=22 delay=60 timeout=320 state=started
  with_items: ec2.instances
As an aside, you can quickly see that your ec2_apache role is actually pretty generic; you could turn it into a generic ec2_provision role that all sorts of other things could use, helping you reuse your code.
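As a rough illustration of that idea (the ec2_provision role name, its defaults file and the values below are hypothetical, mirroring the variables already used above), the role could expose its settings as defaults and each playbook could override only what it needs:
# roles/ec2_provision/defaults/main.yml (hypothetical defaults for the generic role)
keypair: my-keypair
security_group: default
instance_type: t2.micro
image: ami-xxxxxxxx
region: us-west-1

# A play that reuses the role, overriding only the security group
- name: provision instance for Apache
  hosts: localhost
  connection: local
  gather_facts: false
  roles:
    - role: ec2_provision
      security_group: web_sg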
This is what I did to install Apache.
Based on ydaetskcoR's suggestion, all I added was connection: local to fix the following problem:
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password,keyboard-interactive).", "unreachable": true}
See the code below:
---
- name: Install Apache and other packages
  hosts: localhost
  become: yes
  connection: local
  gather_facts: false
  tasks:
    - name: Install a list of packages with a list variable
      yum:
        name: "{{ packages }}"
        state: latest
      vars:
        packages:
          - httpd
          - httpd-tools
          - nginx
      register: result
You also have to run your playbook as follows (-K stands for --ask-become-pass):
ansible-playbook -i hosts.ini startapache.yml -K -vvv
Are you sure you are using Ansible correctly, and are you providing a password for sudo on the remote host?
Just use --ask-become-pass when you execute the playbook. You should be prompted for the password.
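If prompting is not practical (for example in unattended runs), the become password can also be supplied as an inventory variable instead of via -K. A minimal sketch, assuming the password lives in a vars file that you encrypt with ansible-vault (the group name and password value below are placeholders):
# group_vars/launched.yml - encrypt with: ansible-vault encrypt group_vars/launched.yml
ansible_become_pass: myS3cretSudoPassword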

Failed Create AMI With Ansible Playbook

I'm trying to create an AMI using ansible-playbook. I've already exported my AWS secret and access keys into the environment, and my Ansible version is 2.0.0.
- hosts: localhost
  tasks:
    - name: create ami
      ec2_ami:
        region: "ap-southeast-1"
        instance_id: "i-c2xxxx"
        name: "jmicro"
        wait: yes
      register: ami
But when I run the command ansible-playbook create_ami.yml, I get this error:
ERROR: Syntax Error while loading YAML script, create_ami.yml
Note: The error may actually appear before this position: line 5, column 1
ec2_ami:
region: "ap-southeast-1"
Is there something wrong with my YAML script? Yet when I run:
# ansible localhost -m ec2_ami -a "instance_id=i-c2xxxx region=ap-southeast-1 wait=yes name=jmicro"
it succeeds!
There are some strange characters in front of tasks. I don't know what they are, but when I copy your code block, paste it into my editor, and then try to remove the whitespace with Backspace, it actually deletes the characters on the right side of the cursor.
YAML only allows spaces for indenting lines; I think that is the issue here. Also, tasks needs to be at the same level as hosts. Other than that, your definition looks OK to me.
- hosts: localhost
  tasks:
    - name: create ami
      ec2_ami:
        region: "ap-southeast-1"
        instance_id: "i-c2xxxx"
        name: "jmicro"
        wait: yes
      register: ami
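One way to confirm the hidden-character theory (this is an added suggestion, not part of the original answer) is to dump the file with non-printing characters made visible, for example on a GNU system:
# Tabs show up as ^I, carriage returns as ^M, and other invisible bytes become visible
cat -A create_ami.yml
Anything other than plain spaces in the indentation is the culprit.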

Ansible EC2 - Perform operation on set of instances

This is probably obvious, but how do you execute an operation against a set of servers in Ansible (this is with the EC2 plugin)?
I can create my instances:
---
- hosts: 127.0.0.1
  connection: local
  tasks:
    - name: Launch instances
      local_action:
        module: ec2
        region: us-west-1
        group: cassandra
        keypair: cassandra
        instance_type: t2.micro
        image: ami-4b6f650e
        count: 1
        wait: yes
      register: cass_ec2
And I can put the instances into a tag:
- name: Add tag to instances
  local_action: ec2_tag resource={{ item.id }} region=us-west-1 state=present
  with_items: cass_ec2.instances
  args:
    tags:
      Name: cassandra
Now, let's say I want to run an operation on each server:
# This does not work - it runs the command on localhost
- name: TEST - touch file
  file: path=/test.txt state=touch
  with_items: cass_ec2.instances
How to run the command against the remote instances just created?
For running against just the newly created servers, I use a temporary group name and do something like the following by using a second play in the same playbook:
- hosts: localhost
  tasks:
    - name: run your ec2 create a server code here
      ...
      register: cass_ec2
    - name: add host to inventory
      add_host: name={{ item.private_ip }} groups=newinstances
      with_items: cass_ec2.instances

- hosts: newinstances
  tasks:
    - name: do some fun stuff on the new instances here
Alternatively, if you have consistently tagged all your servers (with multiple tags if you also need to differentiate between production and development), you are using ec2.py as the dynamic inventory script, and you are running this against all the servers in a second playbook run, then you can easily do something like the following:
- hosts: tag_Name_cassandra
  tasks:
    - name: run your cassandra specific tasks here
Personally I use a mode tag (tag_mode_production vs tag_mode_development) as well in the above and force Ansible to only run on servers of a specific type (in your case Name=cassandra) in a specific mode (development). This looks like the following:
- hosts: tag_Name_cassandra:&tag_mode_development
Just make sure you specify the tag name and value correctly; they are case sensitive.
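As a small illustration of that intersection pattern (the file names ec2.py and site.yml are just placeholders for your dynamic inventory script and playbook), the play header would be:
- hosts: tag_Name_cassandra:&tag_mode_development
  tasks:
    - name: run tasks only on development cassandra nodes here
and it would be run with ansible-playbook -i ec2.py site.yml so that the tag groups come from the dynamic inventory.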
Please use the following playbook pattern to perform both operations in a single playbook (that is, launch EC2 instance(s) and then perform certain tasks on them) at the same time.
Here is a working playbook that performs those tasks; it assumes you have a hosts file in the same directory from which you run the playbook:
---
- name: Provision an EC2 Instance
  hosts: local
  connection: local
  gather_facts: False
  tags: provisioning
  # Necessary variables for creating/provisioning the EC2 instance
  vars:
    instance_type: t1.micro
    security_group: cassandra
    image: ami-4b6f650e
    region: us-west-1
    keypair: cassandra
    count: 1
  # Tasks that will be used to launch/create an EC2 instance
  tasks:
    - name: Launch the new EC2 Instance
      local_action: ec2
                    group={{ security_group }}
                    instance_type={{ instance_type }}
                    image={{ image }}
                    wait=true
                    region={{ region }}
                    keypair={{ keypair }}
                    count={{ count }}
      register: ec2

    - name: Add the newly created EC2 instance(s) to the local host group (located inside the directory)
      local_action: lineinfile
                    dest="./hosts"
                    regexp={{ item.public_ip }}
                    insertafter="[cassandra]" line={{ item.public_ip }}
      with_items: ec2.instances

    - name: Wait for SSH to come up
      local_action: wait_for
                    host={{ item.public_ip }}
                    port=22
                    state=started
      with_items: ec2.instances

    - name: Add tag to Instance(s)
      local_action: ec2_tag resource={{ item.id }} region={{ region }} state=present
      with_items: ec2.instances
      args:
        tags:
          Name: cassandra

    - name: SSH to the EC2 Instance(s)
      add_host: hostname={{ item.public_ip }} groupname=cassandra
      with_items: ec2.instances

- name: Install these things on newly created EC2 instance(s)
  hosts: cassandra
  sudo: True
  remote_user: ubuntu  # Change the username here (e.g. root or ec2-user); this assumes you are launching an Ubuntu instance
  gather_facts: True
  # Run these tasks
  tasks:
    - name: TEST - touch file
      file: path=/test.txt state=touch
Your hosts file should look like this:
[local]
localhost
[cassandra]
Now you can run this playbook like this:
ansible-playbook -i hosts ec2_launch.yml