Launch multiple volumes with ec2 instance using ansible - amazon-web-services

I am provisioning an EC2 instance with a number of volumes attached to it. The following is my playbook:
---
- hosts: localhost
  connection: local
  gather_facts: false
  vars:
    instance_type: 't2.micro'
    region: 'my-region'
    aws_zone: 'myzone'
    security_group: my-sg
    image: ami-sample
    keypair: my-keypair
    vpc_subnet_id: my-subnet
  tasks:
    - name: Launch instance
      ec2:
        image: "{{ image }}"
        instance_type: "{{ instance_type }}"
        keypair: "{{ keypair }}"
        instance_tags: '{"Environment":"test","Name":"test-provisioning"}'
        region: "{{ region }}"
        aws_zone: "{{ region }}{{ aws_zone }}"
        group: "{{ security_group }}"
        vpc_subnet_id: "{{ vpc_subnet_id }}"
        wait: true
        volumes:
          - device_name: "{{ item }}"
            with_items:
              - /dev/sdb
              - /dev/sdc
            volume_type: gp2
            volume_size: 100
            delete_on_termination: true
            encrypted: true
      register: ec2_info
But I get the following error:
fatal: [localhost]: FAILED! => {"failed": true, "msg": "the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'item' is undefined
If I replace {{ item }} with /dev/sdb, the instance launches with that volume just fine. But I want to create more than one volume from the specified list of devices (/dev/sdb, /dev/sdc, etc.).
Is there any way to achieve this?

You can't use with_items inside vars or module parameters; loops apply only at the task level.
You need to construct your volumes list in advance:
- name: Populate volumes list
  set_fact:
    vol:
      device_name: "{{ item }}"
      volume_type: gp2
      volume_size: 100
      delete_on_termination: true
      encrypted: true
  with_items:
    - /dev/sdb
    - /dev/sdc
  register: volumes
Then call the ec2 module with:
volumes: "{{ volumes.results | map(attribute='ansible_facts.vol') | list }}"
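For reference, with the two devices above, that expression should render a list of this shape (a sketch of the expected structure, with the values taken from the set_fact task):

```yaml
volumes:
  - device_name: /dev/sdb
    volume_type: gp2
    volume_size: 100
    delete_on_termination: true
    encrypted: true
  - device_name: /dev/sdc
    volume_type: gp2
    volume_size: 100
    delete_on_termination: true
    encrypted: true
```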
Update: another approach, without set_fact.
Define a variable that acts as a template dictionary for a volume (without device_name):
vol_default:
  volume_type: gp2
  volume_size: 100
  delete_on_termination: true
  encrypted: true
Then in your ec2 module you can use:
volumes: "{{ [{'device_name': '/dev/sdb'},{'device_name': '/dev/sdc'}] | map('combine',vol_default) | list }}"

Ansible AWS: Unable to connect to EC2 instance

What I want to achieve
I want to create an EC2 instance with LAMP stack installed using one Ansible playbook.
Problem
The instance creation works fine, and I can modify the instance in the EC2 Console, but the problem appears when I try to access the instance, for example to install Apache or create keys.
This is the error:
fatal: [35.154.26.86]: UNREACHABLE! => {
"changed": false,
"msg": "[Errno None] Unable to connect to port 22 on or 35.154.26.86",
"unreachable": true
}
Code
This is my playbook:
---
- name: Power up an ec2 with LAMP stack installed
  hosts: localhost
  become: true
  become_user: root
  gather_facts: False
  vars:
    keypair: myKeyPair
    security_group: launch-wizard-1
    instance_type: t2.micro
    image: ami-47205e28
    region: x-x-x
  tasks:
    - name: Adding Python-pip
      apt: name=python-pip state=latest
    - name: Install Boto Library
      pip: name=boto
    - name: Launch instance (Amazon Linux)
      ec2:
        key_name: "{{ keypair }}"
        group: "{{ security_group }}"
        instance_type: "{{ instance_type }}"
        image: "{{ image }}"
        wait: true
        region: "{{ region }}"
        aws_access_key: "xxxxxxxxxxxxxxxxxxx"
        aws_secret_key: "Xxxxxxxxxxxxxxxxxxx"
      register: ec2
    - name: Print all ec2 variables
      debug: var=ec2
    - name: Add all instance public IPs to host group
      add_host: hostname={{ item.public_ip }} groups=ec2hosts
      with_items: "{{ ec2.instances }}"

- hosts: ec2hosts
  remote_user: ec2-user
  become: true
  gather_facts: false
  tasks:
    # I need help here, don't know what to do.
    - name: Create an EC2 key
      ec2_key:
        name: "privateKey"
        region: "x-x-x"
      register: ec2_key
    - name: Save private key
      copy: content="{{ ec2_key.private_key }}" dest="./privateKey.pem" mode=0600
      when: ec2_key.changed
    # The rest is installing LAMP
Information:
1. My hosts file is the default one.
2. I used this command to run the playbook: sudo ansible-playbook lamp.yml -vvv -c paramiko
3. The security group launch-wizard-1 allows SSH.
4. myKeyPair is a public key imported from my device into the console (I don't know if this is OK).
5. I am a big newbie.
Ansible requires Python to be installed on the VM to work.
Here is the code you need:
- name: upload an ssh keypair to ec2
  hosts: localhost
  connection: local
  gather_facts: False
  vars:
    keypair_name: Key_name
    key_material: "{{ lookup('file', 'keyfile') }}"
    region: "{{ region }}"
  tasks:
    - name: ssh keypair for ec2
      ec2_key:
        aws_access_key: "xxxxxxxxxxxxxxxxxxx"
        aws_secret_key: "Xxxxxxxxxxxxxxxxxxx"
        region: "{{ region }}"
        name: "{{ keypair_name }}"
        key_material: "{{ key_material }}"
        state: present

- name: Power up an ec2 with LAMP stack installed
  hosts: localhost
  become: true
  become_user: root
  gather_facts: False
  vars:
    keypair: myKeyPair
    security_group: launch-wizard-1
    instance_type: t2.micro
    image: ami-47205e28
    region: x-x-x
    # Install Python: Ansible needs Python pre-installed on the instance to work!
    my_user_data: |
      #!/bin/bash
      sudo apt-get install python -y
  tasks:
    - name: Adding Python-pip
      apt: name=python-pip state=latest
    - name: Install Boto Library
      pip: name=boto
    - name: Launch instance (Amazon Linux)
      ec2:
        key_name: "{{ keypair }}"
        group: "{{ security_group }}"
        instance_type: "{{ instance_type }}"
        image: "{{ image }}"
        wait: true
        wait_timeout: 300
        user_data: "{{ my_user_data }}"
        region: "{{ region }}"
        aws_access_key: "xxxxxxxxxxxxxxxxxxx"
        aws_secret_key: "Xxxxxxxxxxxxxxxxxxx"
      register: ec2
    - name: Add all instance public IPs to host group
      add_host: hostname={{ item.public_ip }} groups=ec2hosts
      with_items: "{{ ec2.instances }}"
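One more note: a play targeting ec2hosts will try to SSH in as soon as the hosts are added, and a freshly launched instance can take a while before sshd is reachable. It may help to wait for port 22 at the end of the first play (a sketch, assuming the ec2 register variable from above):

```yaml
- name: Wait for SSH to come up on the new instances
  wait_for:
    host: "{{ item.public_ip }}"
    port: 22
    timeout: 320
  with_items: "{{ ec2.instances }}"
```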

Ansible: provision newly allocated ec2 instance

This playbook appears to be SSHing onto my local machine rather than the remote one. I'm guessing this based on the output I've included at the bottom.
I've adapted the example from here: http://docs.ansible.com/ansible/guide_aws.html#provisioning
The playbook is split into two plays:
creation of the EC2 instance and
configuration of the EC2 instance
Note: To run this you'll need to create a key-pair with the same name as the project (you can get more information here: https://us-west-2.console.aws.amazon.com/ec2/v2/home?region=us-west-2#KeyPairs:sort=keyName)
The playbook is listed below:
# Create instance
- hosts: 127.0.0.1
  connection: local
  gather_facts: false
  vars:
    project_name: my-test
  tasks:
    - name: Get the current username
      local_action: command whoami
      register: username_on_the_host
    - name: Capture current instances
      ec2_remote_facts:
        region: "us-west-2"
      register: ec2_instances
    - name: Create instance
      ec2:
        region: "us-west-2"
        zone: "us-west-2c"
        keypair: "{{ project_name }}"
        group:
          - "SSH only"
        instance_type: "t2.nano"
        image: "ami-59799439"  # debian:jessie amd64 hvm on us-west-2
        count_tag: "{{ project_name }}-{{ username_on_the_host.stdout }}-test"
        exact_count: 1
        wait: yes
        instance_tags:
          Name: "{{ project_name }}-{{ username_on_the_host.stdout }}-test"
          "{{ project_name }}-{{ username_on_the_host.stdout }}-test": simple_ec2
          Creator: "{{ username_on_the_host.stdout }}"
      register: ec2_info
    - name: Wait for instances to listen on port 22
      wait_for:
        state: started
        host: "{{ item.public_dns_name }}"
        port: 22
      with_items: "{{ ec2_info.instances }}"
      when: ec2_info|changed
    - name: Add new instance to launched group
      add_host:
        hostname: "{{ item.public_dns_name }}"
        groupname: launched
      with_items: "{{ ec2_info.instances }}"
      when: ec2_info|changed
    - name: Get ec2_info information
      debug:
        msg: "{{ ec2_info }}"

# Configure and install all we need
- hosts: launched
  remote_user: admin
  gather_facts: true
  tasks:
    - name: Display all variables/facts known for a host
      debug:
        var: hostvars[inventory_hostname]
    - name: List hosts
      debug: msg="groups={{ groups }}"
    - name: Get current user
      command: whoami
    - name: Prepare system
      become: yes
      become_method: sudo
      apt: "name={{ item }} state=latest"
      with_items:
        - software-properties-common
        - python-software-properties
        - devscripts
        - build-essential
        - libffi-dev
        - libssl-dev
        - vim
The output I have is:
TASK [Get current user] ********************************************************
changed: [ec2-35-167-142-43.us-west-2.compute.amazonaws.com] => {"changed": true, "cmd": ["whoami"], "delta": "0:00:00.006532", "end": "2017-01-09 14:53:55.806000", "rc": 0, "start": "2017-01-09 14:53:55.799468", "stderr": "", "stdout": "brianbruggeman", "stdout_lines": ["brianbruggeman"], "warnings": []}
TASK [Prepare system] **********************************************************
failed: [ec2-35-167-142-43.us-west-2.compute.amazonaws.com] (item=['software-properties-common', 'python-software-properties', 'devscripts', 'build-essential', 'libffi-dev', 'libssl-dev', 'vim']) => {"failed": true, "item": ["software-properties-common", "python-software-properties", "devscripts", "build-essential", "libffi-dev", "libssl-dev", "vim"], "module_stderr": "sudo: a password is required\n", "module_stdout": "", "msg": "MODULE FAILURE"}
This should work.
- name: Create Ec2 Instances
  hosts: localhost
  connection: local
  gather_facts: False
  vars:
    project_name: device-graph
    ami_id: ami-59799439  # debian jessie 64-bit hvm
    region: us-west-2
    zone: "us-west-2c"
    instance_size: "t2.nano"
  tasks:
    - name: Provision a set of instances
      ec2:
        key_name: my_key
        group: ["SSH only"]
        instance_type: "{{ instance_size }}"
        image: "{{ ami_id }}"
        wait: true
        exact_count: 1
        count_tag:
          Name: "{{ project_name }}-{{ username.stdout }}-test"
          Creator: "{{ username.stdout }}"
          Project: "{{ project_name }}"
        instance_tags:
          Name: "{{ project_name }}-{{ username.stdout }}-test"
          Creator: "{{ username.stdout }}"
          Project: "{{ project_name }}"
      register: ec2
    - name: Add all instance public IPs to host group
      add_host:
        hostname: "{{ item.public_ip }}"
        groups: launched_ec2_hosts
      with_items: "{{ ec2.tagged_instances }}"

- name: configuration play
  hosts: launched_ec2_hosts
  user: admin
  gather_facts: true
  vars:
    ansible_ssh_private_key_file: "~/.ssh/project-name.pem"
  tasks:
    - name: get the username running the deploy
      shell: whoami
      register: username

Ansible ec2 module ignores "volumes" parameter

I'm trying to get Ansible to bring up new ec2 boxes for me with a volume size larger than the default ~8g. I've added the volumes option with volume_size specified, but when I run with that, the volumes option seems to be ignored and I still get a new box with ~8g. The relevant part of my playbook is as follows:
- name: provision new boxes
  hosts: localhost
  gather_facts: False
  tasks:
    - name: Provision a set of instances
      ec2:
        group: "{{ aws_security_group }}"
        instance_type: "{{ aws_instance_type }}"
        image: "{{ aws_ami_id }}"
        region: "{{ aws_region }}"
        vpc_subnet_id: "{{ aws_vpc_subnet_id }}"
        key_name: "{{ aws_key_name }}"
        wait: true
        count: "{{ num_machines }}"
        instance_tags: "{{ tags }}"
        volumes:
          - device_name: /dev/sda1
            volume_size: 15
      register: ec2
What am I doing wrong?
So, I just updated ansible to 1.9.4 and now all of a sudden it works. So there's my answer. The code above is fine. Ansible was just broken.

Create many AWS instances with Ansible, 'count' does not work

I have this playbook:
---
# Run it like this:
# ansible-playbook --extra-vars '{"VAR":"var-value", "VAR":"var-value"}' playbook-name.yml
- hosts: localhost
  vars:
    instance_tag: "{{ TAG }}"
    instances_num: 2
  tasks:
    - name: Create new AWS instances
      local_action:
        module: ec2
        region: us-east-1
        key_name: integration
        instance_type: m3.medium
        image: ami-61dcvfa
        group: mysecgroup
        instance_tags:
          Name: "{{ instance_tag }}"
      with_sequence: count = {{ instances_num | int }}
When I run it it throws this:
TASK: [Create new AWS instances] **********************************************
fatal: [localhost] => unknown error parsing with_sequence arguments: u'count = 1'
FATAL: all hosts have already failed -- aborting
What am I doing wrong?
I have also tried a literal 2, but it throws the same error.
I have also tried "{{ instances_num }}", with no luck.
The ec2 module has a count parameter that you can use directly rather than trying to loop the task over a sequence.
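As an aside, the with_sequence error itself comes from the spaces around the equals sign: the lookup parses key=value terms, so it must be written without spaces. If you did want the loop form, it would look like this (a sketch):

```yaml
# with_sequence arguments must be written as key=value, with no spaces
with_sequence: count={{ instances_num | int }}
```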
You can use it like this:
---
# Run it like this:
# ansible-playbook --extra-vars '{"VAR":"var-value", "VAR":"var-value"}' playbook-name.yml
- hosts: localhost
  vars:
    instance_tag: "{{ TAG }}"
    instances_num: 2
  tasks:
    - name: Create new AWS instances
      local_action:
        module: ec2
        region: us-east-1
        key_name: integration
        instance_type: m3.medium
        image: ami-61dcvfa
        group: mysecgroup
        instance_tags:
          Name: "{{ instance_tag }}"
        count: "{{ instances_num }}"

Ansible: allocating an elastic ip to newly created instance

I am creating a new instance with Ansible and want to associate an elastic IP with it. What value should I put in instance_id? instance_id: "{{ newinstance.instances[0].id }}"? This value seems to be wrong, because I get the following output when checking:
TASK: [Allocating elastic IP to instance] *************************************
fatal: [localhost] => One or more undefined variables: 'dict object' has no attribute 'instances'
---
- name: Setup an EC2 instance
  hosts: localhost
  connection: local
  tasks:
    - name: Create an EC2 machine
      ec2:
        aws_access_key: my_access_key
        aws_secret_key: my_secret_key
        key_name: my_key
        instance_type: t1.micro
        region: us-east-1
        image: some_ami
        wait: yes
        vpc_subnet_id: my_subnet
        assign_public_ip: yes
      register: newinstance
    - name: Allocating elastic IP to instance
      ec2_eip:
        aws_access_key: my_access_key
        aws_secret_key: my_secret_key
        in_vpc: yes
        reuse_existing_ip_allowed: yes
        state: present
        region: us-east-1
        instance_id: "{{ newinstance.instances[0].id }}"
      register: instance_eip
    - debug: var=instance_eip.public_ip
    - name: Wait for SSH to start
      wait_for:
        host: "{{ newinstance.instances[0].private_ip }}"
        port: 22
        timeout: 300
      sudo: false
      delegate_to: "127.0.0.1"
    - name: Add the machine to the inventory
      add_host:
        hostname: "{{ newinstance.instances[0].private_ip }}"
        groupname: new
What should I put instead of "{{ newinstance.instances[0].id }}"? The same question applies to "{{ newinstance.instances[0].private_ip }}".
You are basically trying to parse data out of the JSON output that the task registered into your variable. instance_ids is an array that is a direct child of newinstance; similarly, private_ip is a direct child of newinstance.
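When in doubt about which keys a registered variable actually contains, it can help to dump the whole structure and read the paths off the output before referencing them (a quick debugging sketch):

```yaml
- name: Inspect everything the ec2 module registered
  debug:
    var: newinstance
```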
---
- name: Setup an EC2 instance
  hosts: localhost
  connection: local
  tasks:
    - name: Create an EC2 machine
      ec2:
        aws_access_key: my_access_key
        aws_secret_key: my_secret_key
        key_name: my_key
        instance_type: t1.micro
        region: us-east-1
        image: some_ami
        wait: yes
        vpc_subnet_id: my_subnet
        assign_public_ip: yes
      register: newinstance
    - name: Allocating elastic IP to instance
      ec2_eip:
        aws_access_key: my_access_key
        aws_secret_key: my_secret_key
        in_vpc: yes
        reuse_existing_ip_allowed: yes
        state: present
        region: us-east-1
        instance_id: "{{ newinstance.instance_ids[0] }}"
      register: instance_eip
    - debug: var=instance_eip.public_ip
    - name: Wait for SSH to start
      wait_for:
        host: "{{ newinstance.private_ip }}"
        port: 22
        timeout: 300
      sudo: false
      delegate_to: "127.0.0.1"
    - name: Add the machine to the inventory
      add_host:
        hostname: "{{ newinstance.private_ip }}"
        groupname: new