Ansible: allocating an elastic ip to newly created instance - amazon-web-services

I am creating a new instance with Ansible and want to associate an elastic IP with it. What value should I write in instance_id? instance_id: "{{ newinstance.instances[0].id }}"? But this value seems to be wrong, because I get this output after checking:
TASK: [Allocating elastic IP to instance] *************************************
fatal: [localhost] => One or more undefined variables: 'dict object' has no attribute 'instances'
---
- name: Setup an EC2 instance
  hosts: localhost
  connection: local
  tasks:
    - name: Create an EC2 machine
      ec2:
        aws_access_key: my_access_key
        aws_secret_key: my_secret_key
        key_name: my_key
        instance_type: t1.micro
        region: us-east-1
        image: some_ami
        wait: yes
        vpc_subnet_id: my_subnet
        assign_public_ip: yes
      register: newinstance
    - name: Allocating elastic IP to instance
      ec2_eip:
        aws_access_key: my_access_key
        aws_secret_key: my_secret_key
        in_vpc: yes
        reuse_existing_ip_allowed: yes
        state: present
        region: us-east-1
        instance_id: "{{ newinstance.instances[0].id }}"
      register: instance_eip
    - debug: var=instance_eip.public_ip
    - name: Wait for SSH to start
      wait_for:
        host: "{{ newinstance.instances[0].private_ip }}"
        port: 22
        timeout: 300
      sudo: false
      delegate_to: "127.0.0.1"
    - name: Add the machine to the inventory
      add_host:
        hostname: "{{ newinstance.instances[0].private_ip }}"
        groupname: new
What should I put instead of "{{ newinstance.instances[0].id }}"? The same question applies to "{{ newinstance.instances[0].private_ip }}".

You are basically trying to parse data from the JSON output of an Ansible task, which is registered to your variable. instance_ids is an array and a direct child of newinstance; similarly, private_ip is a direct child of newinstance:
---
- name: Setup an EC2 instance
  hosts: localhost
  connection: local
  tasks:
    - name: Create an EC2 machine
      ec2:
        aws_access_key: my_access_key
        aws_secret_key: my_secret_key
        key_name: my_key
        instance_type: t1.micro
        region: us-east-1
        image: some_ami
        wait: yes
        vpc_subnet_id: my_subnet
        assign_public_ip: yes
      register: newinstance
    - name: Allocating elastic IP to instance
      ec2_eip:
        aws_access_key: my_access_key
        aws_secret_key: my_secret_key
        in_vpc: yes
        reuse_existing_ip_allowed: yes
        state: present
        region: us-east-1
        instance_id: "{{ newinstance.instance_ids[0] }}"
      register: instance_eip
    - debug: var=instance_eip.public_ip
    - name: Wait for SSH to start
      wait_for:
        host: "{{ newinstance.private_ip }}"
        port: 22
        timeout: 300
      sudo: false
      delegate_to: "127.0.0.1"
    - name: Add the machine to the inventory
      add_host:
        hostname: "{{ newinstance.private_ip }}"
        groupname: new
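To see why the corrected paths work, here is a plain-Python sketch of the registered newinstance result. The dict below is a simplified, hypothetical shape that mirrors the answer's description (real ec2 module output has many more keys); the Jinja expressions map directly onto dict/list lookups:

```python
# Simplified, hypothetical shape of the result registered by the ec2 module.
newinstance = {
    "instance_ids": ["i-0123456789abcdef0"],
    "private_ip": "10.0.1.25",
    "instances": [{"state": "running"}],  # no "id" key here, hence the original error
}

# "{{ newinstance.instance_ids[0] }}"
instance_id = newinstance["instance_ids"][0]

# "{{ newinstance.private_ip }}"
private_ip = newinstance["private_ip"]

print(instance_id, private_ip)
```

Running the playbook with a `- debug: var=newinstance` task shows the exact structure for your Ansible version, which is the reliable way to pick the right path.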

Related

ansible undefined variable for a play while creating an ec2 instance

I am getting an undefined variable error while creating an EC2 instance in AWS with an Ansible playbook.
My Ansible version is 2.9.
My Playbook:
---
- name: Running AWS EC2 Play ...
  hosts: localhost
  connection: local
  gather_facts: yes
  tasks:
    - include_vars: aws_vars.yml
    - name: Setting up the Security Group for new instance
      ec2_group:
        name: "{{ aws_hostname }}-nsg"
        description: Allowing Traffic on port 22 and 80
        region: "{{ aws_region }}"
        aws_secret_key: "{{ SecretKey }}"
        aws_access_key: "{{ AccessKey }}"
        rules:
          - proto: tcp
            from_port: 22
            to_port: 22
            cidr_ip: 0.0.0.0/0
          - proto: tcp
            from_port: 80
            to_port: 80
            cidr_ip: 0.0.0.0/0
        rules_egress:
          - proto: all
            cidr_ip: 0.0.0.0/0
        vpc_id: "{{ aws_vpc_id }}"
    - name: Spinning up new aws ec2 instance
      ec2:
        aws_access_key: "{{ AccessKey }}"
        aws_secret_key: "{{ SecretKey }}"
        image: "{{ aws_image_id }}"
        instance_type: "{{ aws_instance_type }}"
        instance_tags: { "Name": "{{ aws_hostname }}", "Vendor": "Amazon" }
        key_name: "{{ aws_instance_key }}"
        assign_public_ip: no
        count: 1
        wait: yes
        wait_timeout: 500
        region: "{{ aws_region }}"
        volumes:
          - device_name: "{{ aws_device_name }}"
            device_type: "{{ aws_device_type }}"
            volume_size: "{{ aws_volume_size }}"
            delete_on_termination: true
        vpc_subnet_id: "{{ aws_subnet_id }}"
        group: "{{ aws_hostname }}-nsg"
        user_data: hostnamectl set-hostname "{{ aws_hostname }}"
      register: ec2_result
    - debug:
        var: ec2_result
    - name: Wait for SSH to come up
      wait_for:
        host: '{{ private_ip }}'
        port: 22
        delay: 60
        timeout: 320
        state: started
      loop: '{{ ec2_result.instances }}'
    - name: Add all instance public IPs to host group
      add_host:
        hostname: '{{ private_ip }}'
        groups: ec2_hosts
      loop: '{{ ec2_result.instances }}'
The error
TASK [Wait for SSH to come up] ******************************************************************************************************************************
task path: /home/user1/create_awsVM.yml:61
fatal: [localhost]: FAILED! => {
"msg": "The task includes an option with an undefined variable. The error was: 'private_ip' is undefined\n\nThe error appears to be in '/home/user1/create_awsVM.yml': line 61, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: Wait for SSH to come up\n ^ here\n"
}
PLAY RECAP **************************************************************************************************************************************************
localhost : ok=5 changed=1 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
What I also tried:
- name: Wait for SSH to come up
  wait_for:
    host: '{{ item.private_ip }}'
    port: 22
    delay: 60
    timeout: 320
    state: started
  loop: '{{ ec2_result.instances }}'
I used item.private_ip because I thought it might work with loop, but no luck.
Any help will be much appreciated.
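For reference, loop exposes each element of ec2_result.instances as item, so a bare '{{ private_ip }}' is always undefined inside the loop (it is not a fact, only a key of item), and '{{ item.private_ip }}' resolves only if each instance dict actually carries a private_ip key. In plain Python terms, with a hypothetical registered result:

```python
# Hypothetical registered result: one dict per created instance.
ec2_result = {
    "instances": [
        {"id": "i-0aaa", "private_ip": "172.31.5.10"},
        {"id": "i-0bbb", "private_ip": "172.31.5.11"},
    ]
}

# loop: '{{ ec2_result.instances }}' with host: '{{ item.private_ip }}'
# behaves like iterating the list and reading each item's key:
hosts = [item["private_ip"] for item in ec2_result["instances"]]
print(hosts)
```

Inspecting the `debug: var=ec2_result` output for this Ansible version shows whether private_ip is present on each instance, or whether another key (or a null value, e.g. before the instance is fully up) is the actual problem.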

create an aws ec2 instance on centos 7 through ansible playbook

I created a playbook aws.yml and want to run it against localhost.
---
- hosts: webserver
- vars:
    region: ap-south-1
    instance_type: t2.micro
    ami: ami-005956c5f0f757d37 # Amazon linux LTS
    keypair: ansible # pem file name
- tasks:
    - ec2:
        key_name: "{{ ansible }}"
        group: ansible # security group name
        instance_type: "{{ t2.micro }}"
        image: "{{ ami-005956c5f0f757d37 }}"
        wait: true
        wait_timeout: 500
        region: "{{ ap-south-1 }}"
        count: 1 # default
        count_tag:
          Name: Prod-Instance
        instance_tags:
          Name: Prod-Instance
        vpc_subnet_id: subnet-00efd068
        assign_public_ip: yes
Content of the host file /etc/ansible/hosts
[web]
localhost
When I try to run aws.yml, it gives the error below:
[root@localhost ~]# ansible-playbook aws.yml
PLAY [web] ********************************************************************************************************************************************************************************************************
ERROR! the field 'hosts' is required but was not set
[root@localhost ~]#
Your playbook and your hosts group name do not match.
In aws.yml
- hosts: webserver
And in /etc/ansible/hosts
[web]
You should either change your aws.yml playbook to read
- hosts: web
Or your hosts file /etc/ansible/hosts to read
[webserver]
Too many -s. Remove the ones in front of tasks and vars.
Also, use delegate_to: localhost on the ec2 task:
---
- hosts: webserver
  vars:
    region: ap-south-1
    instance_type: t2.micro
    ami: ami-005956c5f0f757d37 # Amazon linux LTS
    keypair: ansible # pem file name
  tasks:
    - ec2:
        key_name: "{{ ansible }}"
        group: ansible # security group name
        instance_type: "{{ t2.micro }}"
        image: "{{ ami-005956c5f0f757d37 }}"
        wait: true
        wait_timeout: 500
        region: "{{ ap-south-1 }}"
        count: 1 # default
        count_tag:
          Name: Prod-Instance
        instance_tags:
          Name: Prod-Instance
        vpc_subnet_id: subnet-00efd068
        assign_public_ip: yes
      delegate_to: localhost
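The reason the extra dashes break the play: YAML parses each leading "-" as a new list item, so hosts, vars, and tasks end up as three separate (and incomplete) plays instead of keys of one play. A plain-Python sketch of the two parsed structures (values abbreviated):

```python
# What "- hosts: ... / - vars: ... / - tasks: ..." parses to:
# three list items, only the first of which has "hosts".
broken = [
    {"hosts": "webserver"},
    {"vars": {"region": "ap-south-1"}},
    {"tasks": [{"ec2": {"count": 1}}]},
]

# What the corrected playbook parses to: a single play mapping
# that carries hosts, vars, and tasks together.
correct = [
    {
        "hosts": "webserver",
        "vars": {"region": "ap-south-1"},
        "tasks": [{"ec2": {"count": 1}}],
    }
]
```

Ansible validates each list item as a play, which is why the second and third items trigger "the field 'hosts' is required but was not set".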

Not able to Delete AWS VPC using ansible playbook

---
- hosts: localhost
  gather_facts: true
  vars_files:
    - group_vars/delete-vpc.yml
  vars:
    region: ap-south-1
    aws_access_key: "{{ aws_access_key }}"
    aws_secret_key: "{{ aws_secret_key }}"
  tasks:
    - name: delete the vpc
      ec2_vpc_net:
        name: test-vpc
        cidr_block: 10.22.0.0/16
        region: ap-south-1
        aws_access_key: "{{ aws_access_key }}"
        aws_secret_key: "{{ aws_secret_key }}"
        profile: "{{ build_env.profile }}"
        state: absent
        tenancy: dedicated
        purge_cidrs: yes
      register: vpc_delete
The deletion works after dropping the profile option (boto3-based modules reject a profile combined with explicit access keys) and tenancy: dedicated:
---
- hosts: localhost
  gather_facts: true
  vars_files:
    - group_vars/delete-vpc.yml
  vars:
    region: ap-south-1
    aws_access_key: "{{ aws_access_key }}"
    aws_secret_key: "{{ aws_secret_key }}"
  tasks:
    - name: delete the vpc
      ec2_vpc_net:
        name: test-vpc
        cidr_block: 10.22.0.0/16
        region: ap-south-1
        aws_access_key: "{{ aws_access_key }}"
        aws_secret_key: "{{ aws_secret_key }}"
        state: absent
        purge_cidrs: yes
      register: vpc_delete

Ansible AWS: Unable to connect to EC2 instance

What I want to achieve
I want to create an EC2 instance with LAMP stack installed using one Ansible playbook.
Problem
The instance creation works fine, and I can modify the instance in the EC2 Console, but the problem appears when I try to access it, for example to install Apache or create keys.
This is the error:
fatal: [35.154.26.86]: UNREACHABLE! => {
"changed": false,
"msg": "[Errno None] Unable to connect to port 22 on or 35.154.26.86",
"unreachable": true
}
Code
This is my playbook:
---
- name: Power up an ec2 with LAMP stack installed
  hosts: localhost
  become: true
  become_user: root
  gather_facts: False
  vars:
    keypair: myKeyPair
    security_group: launch-wizard-1
    instance_type: t2.micro
    image: ami-47205e28
    region: x-x-x
  tasks:
    - name: Adding Python-pip
      apt: name=python-pip state=latest
    - name: Install Boto Library
      pip: name=boto
    - name: Launch instance (Amazon Linux)
      ec2:
        key_name: "{{ keypair }}"
        group: "{{ security_group }}"
        instance_type: "{{ instance_type }}"
        image: "{{ image }}"
        wait: true
        region: "{{ region }}"
        aws_access_key: "xxxxxxxxxxxxxxxxxxx"
        aws_secret_key: "Xxxxxxxxxxxxxxxxxxx"
      register: ec2
    - name: Print all ec2 variables
      debug: var=ec2
    - name: Add all instance public IPs to host group
      add_host: hostname={{ item.public_ip }} groups=ec2hosts
      with_items: "{{ ec2.instances }}"

- hosts: ec2hosts
  remote_user: ec2-user
  become: true
  gather_facts: false
  tasks:
    # I need help here, don't know what to do.
    - name: Create an EC2 key
      ec2_key:
        name: "privateKey"
        region: "x-x-x"
      register: ec2_key
    - name: Save private key
      copy: content="{{ ec2_key.private_key }}" dest="./privateKey.pem" mode=0600
      when: ec2_key.changed
    # The Rest is installing LAMP
Information:
1- My hosts file is default.
2- I used this command to run the playbook:
sudo ansible-playbook lamp.yml -vvv -c paramiko
3- launch-wizard-1 has SSH.
4- myKeyPair is a public key imported from my device to the console(don't know if this is ok)
5- I am a big newbie
Ansible requires Python to be installed on the VM in order to work.
Here is the required code:
- name: upload an ssh keypair to ec2
  hosts: localhost
  connection: local
  gather_facts: False
  vars:
    keypair_name: Key_name
    key_material: "{{ lookup('file', 'keyfile') }}"
    region: "{{ region }}"
  tasks:
    - name: ssh keypair for ec2
      ec2_key:
        aws_access_key: "xxxxxxxxxxxxxxxxxxx"
        aws_secret_key: "Xxxxxxxxxxxxxxxxxxx"
        region: "{{ region }}"
        name: "{{ keypair_name }}"
        key_material: "{{ key_material }}"
        state: present

- name: Power up an ec2 with LAMP stack installed
  hosts: localhost
  become: true
  become_user: root
  gather_facts: False
  vars:
    keypair: myKeyPair
    security_group: launch-wizard-1
    instance_type: t2.micro
    image: ami-47205e28
    region: x-x-x
    my_user_data: | # install Python: Ansible needs Python pre-installed on the instance to work!
      #!/bin/bash
      sudo apt-get install python -y
  tasks:
    - name: Adding Python-pip
      apt: name=python-pip state=latest
    - name: Install Boto Library
      pip: name=boto
    - name: Launch instance (Amazon Linux)
      ec2:
        key_name: "{{ keypair }}"
        group: "{{ security_group }}"
        instance_type: "{{ instance_type }}"
        image: "{{ image }}"
        wait: true
        wait_timeout: 300
        user_data: "{{ my_user_data }}"
        region: "{{ region }}"
        aws_access_key: "xxxxxxxxxxxxxxxxxxx"
        aws_secret_key: "Xxxxxxxxxxxxxxxxxxx"
      register: ec2
    - name: Add all instance public IPs to host group
      add_host: hostname={{ item.public_ip }} groups=ec2hosts
      with_items: "{{ ec2.instances }}"

Ansible: provision newly allocated ec2 instance

This playbook appears to be SSHing into my local machine rather than the remote one; I'm guessing this based on the output included at the bottom.
I've adapted the example from here: http://docs.ansible.com/ansible/guide_aws.html#provisioning
The playbook is split into two plays:
creation of the EC2 instance and
configuration of the EC2 instance
Note: To run this you'll need to create a key-pair with the same name as the project (you can get more information here: https://us-west-2.console.aws.amazon.com/ec2/v2/home?region=us-west-2#KeyPairs:sort=keyName)
The playbook is listed below:
# Create instance
- hosts: 127.0.0.1
  connection: local
  gather_facts: false
  vars:
    project_name: my-test
  tasks:
    - name: Get the current username
      local_action: command whoami
      register: username_on_the_host
    - name: Capture current instances
      ec2_remote_facts:
        region: "us-west-2"
      register: ec2_instances
    - name: Create instance
      ec2:
        region: "us-west-2"
        zone: "us-west-2c"
        keypair: "{{ project_name }}"
        group:
          - "SSH only"
        instance_type: "t2.nano"
        image: "ami-59799439" # debian:jessie amd64 hvm on us-west 2
        count_tag: "{{ project_name }}-{{ username_on_the_host.stdout }}-test"
        exact_count: 1
        wait: yes
        instance_tags:
          Name: "{{ project_name }}-{{ username_on_the_host.stdout }}-test"
          "{{ project_name }}-{{ username_on_the_host.stdout }}-test": simple_ec2
          Creator: "{{ username_on_the_host.stdout }}"
      register: ec2_info
    - name: Wait for instances to listen on port 22
      wait_for:
        state: started
        host: "{{ item.public_dns_name }}"
        port: 22
      with_items: "{{ ec2_info.instances }}"
      when: ec2_info|changed
    - name: Add new instance to launched group
      add_host:
        hostname: "{{ item.public_dns_name }}"
        groupname: launched
      with_items: "{{ ec2_info.instances }}"
      when: ec2_info|changed
    - name: Get ec2_info information
      debug:
        msg: "{{ ec2_info }}"

# Configure and install all we need
- hosts: launched
  remote_user: admin
  gather_facts: true
  tasks:
    - name: Display all variables/facts known for a host
      debug:
        var: hostvars[inventory_hostname]
    - name: List hosts
      debug: msg="groups={{ groups }}"
    - name: Get current user
      command: whoami
    - name: Prepare system
      become: yes
      become_method: sudo
      apt: "name={{ item }} state=latest"
      with_items:
        - software-properties-common
        - python-software-properties
        - devscripts
        - build-essential
        - libffi-dev
        - libssl-dev
        - vim
The output I have is:
TASK [Get current user] ********************************************************
changed: [ec2-35-167-142-43.us-west-2.compute.amazonaws.com] => {"changed": true, "cmd": ["whoami"], "delta": "0:00:00.006532", "end": "2017-01-09 14:53:55.806000", "rc": 0, "start": "2017-01-09 14:53:55.799468", "stderr": "", "stdout": "brianbruggeman", "stdout_lines": ["brianbruggeman"], "warnings": []}
TASK [Prepare system] **********************************************************
failed: [ec2-35-167-142-43.us-west-2.compute.amazonaws.com] (item=['software-properties-common', 'python-software-properties', 'devscripts', 'build-essential', 'libffi-dev', 'libssl-dev', 'vim']) => {"failed": true, "item": ["software-properties-common", "python-software-properties", "devscripts", "build-essential", "libffi-dev", "libssl-dev", "vim"], "module_stderr": "sudo: a password is required\n", "module_stdout": "", "msg": "MODULE FAILURE"}
This should work.
- name: Create Ec2 Instances
  hosts: localhost
  connection: local
  gather_facts: False
  vars:
    project_name: device-graph
    ami_id: ami-59799439 # debian jessie 64-bit hvm
    region: us-west-2
    zone: "us-west-2c"
    instance_size: "t2.nano"
  tasks:
    # username must be registered before the ec2 task references username.stdout
    - name: get the username running the deploy
      local_action: command whoami
      register: username
    - name: Provision a set of instances
      ec2:
        key_name: my_key
        group: ["SSH only"]
        instance_type: "{{ instance_size }}"
        image: "{{ ami_id }}"
        wait: true
        exact_count: 1
        count_tag:
          Name: "{{ project_name }}-{{ username.stdout }}-test"
          Creator: "{{ username.stdout }}"
          Project: "{{ project_name }}"
        instance_tags:
          Name: "{{ project_name }}-{{ username.stdout }}-test"
          Creator: "{{ username.stdout }}"
          Project: "{{ project_name }}"
      register: ec2
    - name: Add all instance public IPs to host group
      add_host:
        hostname: "{{ item.public_ip }}"
        groups: launched_ec2_hosts
      with_items: "{{ ec2.tagged_instances }}"

- name: configuration play
  hosts: launched_ec2_hosts
  user: admin
  gather_facts: true
  vars:
    ansible_ssh_private_key_file: "~/.ssh/project-name.pem"
  tasks:
    - name: get the username running the deploy
      shell: whoami
      register: username
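As a sanity check on the templating, the count_tag and instance_tags expressions above expand to a plain dict once whoami has been registered. A minimal sketch, with a hypothetical username standing in for the registered stdout:

```python
project_name = "device-graph"
username_stdout = "brianbruggeman"  # hypothetical result of `whoami`

# What "{{ project_name }}-{{ username.stdout }}-test" etc. render to:
tags = {
    "Name": f"{project_name}-{username_stdout}-test",
    "Creator": username_stdout,
    "Project": project_name,
}
print(tags["Name"])
```

Because exact_count matches on count_tag, every value in this dict must render identically on each run; an unregistered or changed username would make Ansible see a different tag set and spin up duplicate instances.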