I am getting an undefined-variable error while creating an EC2 instance in AWS with an Ansible playbook.
My Ansible version is 2.9.
My Playbook:
---
- name: Running AWS EC2 Play ...
  hosts: localhost
  connection: local
  gather_facts: yes
  tasks:
    - include_vars: aws_vars.yml

    - name: Setting up the Security Group for new instance
      ec2_group:
        name: "{{ aws_hostname }}-nsg"
        description: Allowing Traffic on port 22 and 80
        region: "{{ aws_region }}"
        aws_secret_key: "{{ SecretKey }}"
        aws_access_key: "{{ AccessKey }}"
        rules:
          - proto: tcp
            from_port: 22
            to_port: 22
            cidr_ip: 0.0.0.0/0
          - proto: tcp
            from_port: 80
            to_port: 80
            cidr_ip: 0.0.0.0/0
        rules_egress:
          - proto: all
            cidr_ip: 0.0.0.0/0
        vpc_id: "{{ aws_vpc_id }}"

    - name: Spinning up new aws ec2 instance
      ec2:
        aws_access_key: "{{ AccessKey }}"
        aws_secret_key: "{{ SecretKey }}"
        image: "{{ aws_image_id }}"
        instance_type: "{{ aws_instance_type }}"
        instance_tags: { "Name": "{{ aws_hostname }}", "Vendor": "Amazon" }
        key_name: "{{ aws_instance_key }}"
        assign_public_ip: no
        count: 1
        wait: yes
        wait_timeout: 500
        region: "{{ aws_region }}"
        volumes:
          - device_name: "{{ aws_device_name }}"
            device_type: "{{ aws_device_type }}"
            volume_size: "{{ aws_volume_size }}"
            delete_on_termination: true
        vpc_subnet_id: "{{ aws_subnet_id }}"
        group: "{{ aws_hostname }}-nsg"
        user_data: hostnamectl set-hostname "{{ aws_hostname }}"
      register: ec2_result

    - debug:
        var: ec2_result

    - name: Wait for SSH to come up
      wait_for:
        host: '{{ private_ip }}'
        port: 22
        delay: 60
        timeout: 320
        state: started
      loop: '{{ ec2_result.instances }}'

    - name: Add all instance public IPs to host group
      add_host:
        hostname: '{{ private_ip }}'
        groups: ec2_hosts
      loop: '{{ ec2_result.instances }}'
The error:
TASK [Wait for SSH to come up] ******************************************************************************************************************************
task path: /home/user1/create_awsVM.yml:61
fatal: [localhost]: FAILED! => {
"msg": "The task includes an option with an undefined variable. The error was: 'private_ip' is undefined\n\nThe error appears to be in '/home/user1/create_awsVM.yml': line 61, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: Wait for SSH to come up\n ^ here\n"
}
PLAY RECAP **************************************************************************************************************************************************
localhost : ok=5 changed=1 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
What I also tried:
- name: Wait for SSH to come up
  wait_for:
    host: '{{ item.private_ip }}'
    port: 22
    delay: 60
    timeout: 320
    state: started
  loop: '{{ ec2_result.instances }}'
I changed the reference to item.private_ip because I thought it might work with the loop, but no luck.
Any help will be much appreciated.
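One way to narrow this kind of error down (not part of the original playbook, just a debugging sketch) is to dump the registered result and each loop item first, so you can read the actual per-instance keys off the output before wiring them into wait_for:

```yaml
# Hypothetical diagnostic tasks: print the registered structure so the
# correct per-instance key (private_ip, public_ip, ...) can be confirmed.
- debug:
    var: ec2_result.instances

- name: Show the candidate field for each instance
  debug:
    msg: "{{ item.private_ip | default('no private_ip key on this item') }}"
  loop: "{{ ec2_result.instances }}"
```

The default() filter keeps the debug task from failing when the key is absent, which tells you immediately whether the attribute name or the data shape is the problem.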
Related
I am trying to provision AWS infrastructure using Ansible. My simplified playbook vpc.yml, for illustration, is as follows:
- hosts: localhost
  connection: local
  gather_facts: false
  vars:
    vpc_name: "Test VPC"
    vpc_cidr_block: "10.0.0.0/16"
    aws_region: "ap-east-1"
    subnets:
      test_net_1a:
        az: "ap-east-1a"
        cidr: "10.0.1.0/24"
      test_net_1b:
        az: "ap-east-1b"
        cidr: "10.0.2.0/24"
  tasks:
    - name: Create VPC
      ec2_vpc_net:
        name: "{{ vpc_name }}"
        cidr_block: "{{ vpc_cidr_block }}"
        region: "{{ aws_region }}"
        state: "present"
      register: my_vpc

    # Save VPC id in a new variable.
    - name: Set VPC ID in variable
      set_fact:
        vpc_id: "{{ my_vpc.vpc.id }}"

    - name: Create Subnets
      ec2_vpc_subnet:
        state: "present"
        vpc_id: "{{ vpc_id }}"
        cidr: "{{ item.value.cidr }}"
        az: "{{ item.value.az }}"
        region: "{{ aws_region }}"
        resource_tags:
          Name: "{{ item.key }}"
      loop: "{{ subnets | dict2items }}"
Now I try to test my playbook with ansible-playbook vpc.yml --check. However, the playbook fails because under --check my_vpc returns only:
"changed": true,
"failed": false
Apparently --check cannot be used to preview AWS provisioning changes with Ansible, so how do I test my playbook during development without making any actual infrastructure changes?
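Modules that don't support check mode (which includes most cloud modules of that vintage) are skipped under --check, so registered results like my_vpc carry no vpc.id and every downstream reference blows up. One common workaround, sketched here as an assumption rather than a drop-in fix (the placeholder id vpc-000000 is hypothetical), is to guard the dependent tasks with the ansible_check_mode magic variable or a fallback value:

```yaml
# Sketch: make the dependent tasks tolerate --check.
- name: Set VPC ID in variable
  set_fact:
    vpc_id: "{{ my_vpc.vpc.id if my_vpc.vpc is defined else 'vpc-000000' }}"

- name: Create Subnets
  ec2_vpc_subnet:
    state: present
    vpc_id: "{{ vpc_id }}"
    cidr: "{{ item.value.cidr }}"
    az: "{{ item.value.az }}"
    region: "{{ aws_region }}"
  loop: "{{ subnets | dict2items }}"
  when: not ansible_check_mode   # skip the real API call during --check
```

This still exercises the playbook's syntax and templating in check mode; it just cannot preview what AWS itself would change.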
The following Ansible playbook runs fine, no errors at all, but the URL just doesn't resolve/load afterwards. If I use the public IP created for the instance, the page loads.
---
- name: Provision an EC2 Instance
  hosts: local
  remote_user: ubuntu
  become: yes
  connection: local
  gather_facts: false
  vars:
    instance_type: t2.micro
    security_group: "Web Subnet Security Group"
    image: ami-0c5199d385b432989
    region: us-east-1
    keypair: demo-key
    count: 1
  vars_files:
    - keys.yml
  tasks:
    - name: Create key pair using our own pubkey
      ec2_key:
        ec2_access_key: "{{ ec2_access_key }}"
        ec2_secret_key: "{{ ec2_secret_key }}"
        name: demo-key
        key_material: "{{ lookup('file', '/root/.ssh/id_rsa.pub') }}"
        region: us-east-1
        state: present

    - name: Launch the new EC2 Instance
      ec2:
        ec2_access_key: "{{ ec2_access_key }}"
        ec2_secret_key: "{{ ec2_secret_key }}"
        assign_public_ip: yes
        vpc_subnet_id: subnet-0c799bda2a466f8d4
        group: "{{ security_group }}"
        instance_type: "{{ instance_type }}"
        image: "{{ image }}"
        wait: true
        region: "{{ region }}"
        keypair: "{{ keypair }}"
        count: "{{ count }}"
        state: present
      register: ec2

    - name: Add tag to Instance(s)
      ec2_tag:
        ec2_access_key: "{{ ec2_access_key }}"
        ec2_secret_key: "{{ ec2_secret_key }}"
        resource: "{{ item.id }}"
        region: "{{ region }}"
        state: present
        tags:
          Name: demo-webserver
      with_items: "{{ ec2.instances }}"

    - name: Add the newly created EC2 instance(s) to the local host group (located inside the directory)
      lineinfile:
        path: "./hosts"
        line: "{{ item.public_ip }}"
        insertafter: '\[demo-webserver\]'
        state: present
      with_items: "{{ ec2.instances }}"

    - name: Pause for 2 minutes
      pause:
        minutes: 2

    - name: Write the new ec2 instance host key to known hosts
      connection: local
      shell: "ssh-keyscan -H {{ item.public_ip }} >> ~/.ssh/known_hosts"
      with_items: "{{ ec2.instances }}"

    - name: Waiting for the instance to come up
      local_action:
        module: wait_for
        host: "{{ item.public_ip }}"
        delay: 10
        connect_timeout: 300
        state: started
        port: 22
      with_items: "{{ ec2.instances }}"

    - name: Install packages
      delegate_to: "{{ item.public_ip }}"
      raw: bash -c "test -e /usr/bin/python || (apt -qqy update && apt install -qqy python-minimal && apt install -qqy apache2 && systemctl start apache2 && systemctl enable apache2)"
      with_items: "{{ ec2.instances }}"

    - name: Register new domain
      route53_zone:
        ec2_access_key: "{{ ec2_access_key }}"
        ec2_secret_key: "{{ ec2_secret_key }}"
        zone: ansible-demo-domain.com

    - name: Create new DNS record
      route53:
        ec2_access_key: "{{ ec2_access_key }}"
        ec2_secret_key: "{{ ec2_secret_key }}"
        zone: ansible-demo-domain.com
        record: ansible-demo-domain.com
        type: A
        ttl: 300
        value: "{{ item.public_ip }}"
        state: present
        overwrite: yes
        private_zone: no
        wait: yes
      with_items: "{{ ec2.instances }}"

    - name: Create new DNS record
      route53:
        ec2_access_key: "{{ ec2_access_key }}"
        ec2_secret_key: "{{ ec2_secret_key }}"
        zone: ansible-demo-domain.com
        record: www.ansible-demo-domain.com
        type: CNAME
        ttl: 300
        value: ansible-demo-domain.com
        state: present
        overwrite: yes
        private_zone: no
        wait: yes
Appreciate your help pointing out what/where I'm missing. I usually wait at least 5 minutes before testing the URL, but it really doesn't resolve/load.
Thank you!
20190301_Update: Here's how the hosted zone looks after provisioning:
(screenshots: the hosted zone after provisioning, and its associated TTLs)
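A Route 53 hosted zone only answers public queries if the domain's registrar delegates to that zone's name servers; route53_zone creates the zone but does not register the domain or update delegation. As a diagnostic sketch (an assumption about the likely cause, not part of the original playbook), you can compare the public NS delegation against the zone's NS record set:

```yaml
# Hypothetical diagnostic tasks: check what NS records the public DNS
# actually returns for the domain, then compare them by eye with the
# NS set shown in the Route 53 console for the hosted zone.
- name: Look up the publicly visible NS records for the domain
  command: dig +short NS ansible-demo-domain.com
  register: public_ns
  changed_when: false

- debug:
    var: public_ns.stdout_lines
```

If the two sets differ (or the lookup returns nothing), the zone is fine but the delegation is missing, which would match the symptom of the IP working while the name does not.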
This playbook appears to be SSHing onto my local machine rather than the remote one. I'm guessing this from the output I've included at the bottom.
I've adapted the example from here: http://docs.ansible.com/ansible/guide_aws.html#provisioning
The playbook is split into two plays:
creation of the EC2 instance and
configuration of the EC2 instance
Note: To run this you'll need to create a key-pair with the same name as the project (you can get more information here: https://us-west-2.console.aws.amazon.com/ec2/v2/home?region=us-west-2#KeyPairs:sort=keyName)
The playbook is listed below:
# Create instance
- hosts: 127.0.0.1
  connection: local
  gather_facts: false
  vars:
    project_name: my-test
  tasks:
    - name: Get the current username
      local_action: command whoami
      register: username_on_the_host

    - name: Capture current instances
      ec2_remote_facts:
        region: "us-west-2"
      register: ec2_instances

    - name: Create instance
      ec2:
        region: "us-west-2"
        zone: "us-west-2c"
        keypair: "{{ project_name }}"
        group:
          - "SSH only"
        instance_type: "t2.nano"
        image: "ami-59799439" # debian:jessie amd64 hvm on us-west-2
        count_tag: "{{ project_name }}-{{ username_on_the_host.stdout }}-test"
        exact_count: 1
        wait: yes
        instance_tags:
          Name: "{{ project_name }}-{{ username_on_the_host.stdout }}-test"
          "{{ project_name }}-{{ username_on_the_host.stdout }}-test": simple_ec2
          Creator: "{{ username_on_the_host.stdout }}"
      register: ec2_info

    - name: Wait for instances to listen on port 22
      wait_for:
        state: started
        host: "{{ item.public_dns_name }}"
        port: 22
      with_items: "{{ ec2_info.instances }}"
      when: ec2_info|changed

    - name: Add new instance to launched group
      add_host:
        hostname: "{{ item.public_dns_name }}"
        groupname: launched
      with_items: "{{ ec2_info.instances }}"
      when: ec2_info|changed

    - name: Get ec2_info information
      debug:
        msg: "{{ ec2_info }}"

# Configure and install all we need
- hosts: launched
  remote_user: admin
  gather_facts: true
  tasks:
    - name: Display all variables/facts known for a host
      debug:
        var: hostvars[inventory_hostname]

    - name: List hosts
      debug: msg="groups={{ groups }}"

    - name: Get current user
      command: whoami

    - name: Prepare system
      become: yes
      become_method: sudo
      apt: "name={{ item }} state=latest"
      with_items:
        - software-properties-common
        - python-software-properties
        - devscripts
        - build-essential
        - libffi-dev
        - libssl-dev
        - vim
The output I have is:
TASK [Get current user] ********************************************************
changed: [ec2-35-167-142-43.us-west-2.compute.amazonaws.com] => {"changed": true, "cmd": ["whoami"], "delta": "0:00:00.006532", "end": "2017-01-09 14:53:55.806000", "rc": 0, "start": "2017-01-09 14:53:55.799468", "stderr": "", "stdout": "brianbruggeman", "stdout_lines": ["brianbruggeman"], "warnings": []}
TASK [Prepare system] **********************************************************
failed: [ec2-35-167-142-43.us-west-2.compute.amazonaws.com] (item=['software-properties-common', 'python-software-properties', 'devscripts', 'build-essential', 'libffi-dev', 'libssl-dev', 'vim']) => {"failed": true, "item": ["software-properties-common", "python-software-properties", "devscripts", "build-essential", "libffi-dev", "libssl-dev", "vim"], "module_stderr": "sudo: a password is required\n", "module_stdout": "", "msg": "MODULE FAILURE"}
This should work.
- name: Create Ec2 Instances
  hosts: localhost
  connection: local
  gather_facts: False
  vars:
    project_name: device-graph
    ami_id: ami-59799439 # debian jessie 64-bit hvm
    region: us-west-2
    zone: "us-west-2c"
    instance_size: "t2.nano"
  tasks:
    # username must be registered in this play, before the tags below use it
    - name: Get the username running the deploy
      local_action: command whoami
      register: username

    - name: Provision a set of instances
      ec2:
        key_name: my_key
        group: ["SSH only"]
        instance_type: "{{ instance_size }}"
        image: "{{ ami_id }}"
        wait: true
        exact_count: 1
        count_tag:
          Name: "{{ project_name }}-{{ username.stdout }}-test"
          Creator: "{{ username.stdout }}"
          Project: "{{ project_name }}"
        instance_tags:
          Name: "{{ project_name }}-{{ username.stdout }}-test"
          Creator: "{{ username.stdout }}"
          Project: "{{ project_name }}"
      register: ec2

    - name: Add all instance public IPs to host group
      add_host:
        hostname: "{{ item.public_ip }}"
        groups: launched_ec2_hosts
      with_items: "{{ ec2.tagged_instances }}"

- name: configuration play
  hosts: launched_ec2_hosts
  user: admin
  gather_facts: true
  vars:
    ansible_ssh_private_key_file: "~/.ssh/project-name.pem"
  tasks:
    - name: get the username running the deploy
      shell: whoami
      register: username
I have managed to successfully create an ELB using this playbook:
- name: Create VPC network
  ec2_elb_lb:
    aws_access_key: "{{ aws_access_key }}"
    aws_secret_key: "{{ aws_secret_key }}"
    name: "ElasticLoadBalancer"
    region: us-east-1
    state: present
    subnets: "{{ Subnet.SubnetId }}"
    listeners:
      - protocol: http
        load_balancer_port: 80
        instance_port: 80
  register: elb

- debug: msg="{{ elb }}"
- debug: msg="{{ elb }}"
But I also need to add HTTPS inbound and HTTP outbound, so I added an extra listener as per the ec2_elb_lb module example:
- name: Create VPC network
  ec2_elb_lb:
    aws_access_key: "{{ aws_access_key }}"
    aws_secret_key: "{{ aws_secret_key }}"
    name: "ElasticLoadBalancer"
    region: us-east-1
    state: present
    subnets: "{{ Subnet.SubnetId }}"
    listeners:
      - protocol: http
        load_balancer_port: 80
        instance_port: 80
      - protocol: https
        load_balancer_port: 443
        instance_protocol: http
        instance_port: 80
  register: elb

- debug: msg="{{ elb }}"
After running the above playbook I get the following message:
failed: [localhost] => {"failed": true, "parsed": false}
Traceback (most recent call last):
File "/root/.ansible/tmp/ansible-tmp-1448959476.82-159664399600608/ec2_elb_lb", line 2519, in <module>
main()
File "/root/.ansible/tmp/ansible-tmp-1448959476.82-159664399600608/ec2_elb_lb", line 693, in main
elb_man.ensure_ok()
File "/root/.ansible/tmp/ansible-tmp-1448959476.82-159664399600608/ec2_elb_lb", line 292, in ensure_ok
self._create_elb()
File "/root/.ansible/tmp/ansible-tmp-1448959476.82-159664399600608/ec2_elb_lb", line 397, in _create_elb
scheme=self.scheme)
File "/usr/lib/python2.7/site-packages/boto/ec2/elb/__init__.py", line 230, in create_load_balancer
params['Listeners.member.%d.SSLCertificateId' % i] = listener[4]
IndexError: tuple index out of range
FATAL: all hosts have already failed -- aborting
ansible --version
ansible 1.9.4
If you want to provide HTTPS on the ELB then you need to provide an SSL certificate as well.
So your ec2_elb_lb task should instead look like:
- name: Create VPC network
  ec2_elb_lb:
    aws_access_key: "{{ aws_access_key }}"
    aws_secret_key: "{{ aws_secret_key }}"
    name: "ElasticLoadBalancer"
    region: us-east-1
    state: present
    subnets: "{{ Subnet.SubnetId }}"
    listeners:
      - protocol: http
        load_balancer_port: 80
        instance_port: 80
      - protocol: https
        load_balancer_port: 443
        instance_protocol: http
        instance_port: 80
        ssl_certificate_id: "arn:aws:iam::123456789012:server-certificate/company/servercerts/ProdServerCert"
  register: elb

- debug: msg="{{ elb }}"
I am creating a new instance with Ansible and want to associate an Elastic IP with it. What value should I write in instance_id? instance_id: "{{ newinstance.instances[0].id }}"? But this value seems to be wrong, because I get this output after checking:
TASK: [Allocating elastic IP to instance] *************************************
fatal: [localhost] => One or more undefined variables: 'dict object' has no attribute 'instances'
---
- name: Setup an EC2 instance
  hosts: localhost
  connection: local
  tasks:
    - name: Create an EC2 machine
      ec2:
        aws_access_key: my_access_key
        aws_secret_key: my_secret_key
        key_name: my_key
        instance_type: t1.micro
        region: us-east-1
        image: some_ami
        wait: yes
        vpc_subnet_id: my_subnet
        assign_public_ip: yes
      register: newinstance

    - name: Allocating elastic IP to instance
      ec2_eip:
        aws_access_key: my_access_key
        aws_secret_key: my_secret_key
        in_vpc: yes
        reuse_existing_ip_allowed: yes
        state: present
        region: us-east-1
        instance_id: "{{ newinstance.instances[0].id }}"
      register: instance_eip

    - debug: var=instance_eip.public_ip

    - name: Wait for SSH to start
      wait_for:
        host: "{{ newinstance.instances[0].private_ip }}"
        port: 22
        timeout: 300
      sudo: false
      delegate_to: "127.0.0.1"

    - name: Add the machine to the inventory
      add_host:
        hostname: "{{ newinstance.instances[0].private_ip }}"
        groupname: new
What should I put instead of "{{ newinstance.instances[0].id }}"? The same question applies to "{{ newinstance.instances[0].private_ip }}".
You are basically trying to parse data out of the JSON result that the task registered into your variable. instance_ids is an array directly under newinstance, and private_ip is likewise a direct child of newinstance:
---
- name: Setup an EC2 instance
  hosts: localhost
  connection: local
  tasks:
    - name: Create an EC2 machine
      ec2:
        aws_access_key: my_access_key
        aws_secret_key: my_secret_key
        key_name: my_key
        instance_type: t1.micro
        region: us-east-1
        image: some_ami
        wait: yes
        vpc_subnet_id: my_subnet
        assign_public_ip: yes
      register: newinstance

    - name: Allocating elastic IP to instance
      ec2_eip:
        aws_access_key: my_access_key
        aws_secret_key: my_secret_key
        in_vpc: yes
        reuse_existing_ip_allowed: yes
        state: present
        region: us-east-1
        instance_id: "{{ newinstance.instance_ids[0] }}"
      register: instance_eip

    - debug: var=instance_eip.public_ip

    - name: Wait for SSH to start
      wait_for:
        host: "{{ newinstance.private_ip }}"
        port: 22
        timeout: 300
      sudo: false
      delegate_to: "127.0.0.1"

    - name: Add the machine to the inventory
      add_host:
        hostname: "{{ newinstance.private_ip }}"
        groupname: new
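When in doubt about which attribute lives where, it is simplest to dump the registered variable once and read the layout off the output; module return shapes have shifted between Ansible releases, so hard-coded paths like the ones above are version-specific. A one-off sketch:

```yaml
# Hypothetical one-off task: print the whole registered result so the
# exact layout (e.g. instance_ids at the top level, instances as a list
# of dicts) can be confirmed for your module version before templating.
- debug:
    var: newinstance
```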