Deploy Subnets into different availability zones using Ansible in AWS

I am working on putting subnets into different availability zones in AWS with the help of Ansible. I want to put subnet1 into us-east-1a, subnet2 into us-east-1b, and so on. Currently I am only able to put servers into us-east-1a. Here is the Ansible script:
---
- name: Create AWS VPC and Subnets
  hosts: localhost
  connection: local
  gather_facts: false
  vars:
    region: us-east-1
    prefix: mahela_ansible
    az1: us-east-1a
    az2: us-east-1b
    az3: us-east-1c
  tasks:
    - name: Create VPC
      local_action:
        module: ec2_vpc
        region: "{{ region }}"
        cidr_block: 10.123.0.0/16
        resource_tags: '{"Name":"{{ prefix }}"}'
        subnets:
          - name: Cassandra Subnet
            cidr: 10.123.0.0/24
            az: "{{ az1 }}"
            resource_tags: '{"Name":"{{ prefix }}_cassandra"}'
          - name: MongoDB Subnet
            cidr: 10.123.1.0/24
            az: "{{ az2 }}"
            resource_tags: '{"Name":"{{ prefix }}_Mongodb"}'
          - name: Elastic Search
            cidr: 10.123.2.0/24
            az: "{{ az3 }}"
            resource_tags: '{"Name":"{{ prefix }}_elasticsearch"}'

This example might help you.
The roles/vpc/defaults/main.yml file looks like this:
---
# Variables that can be provided as extra vars
VPC_NAME: test
VPC_REGION: us-east-1 # N.Virginia
VPC_CIDR: "172.25.0.0/16"
VPC_CLASS_DEFAULT: "172.25"

# Variables for VPC
vpc_name: "{{ VPC_NAME }}"
vpc_region: "{{ VPC_REGION }}"
vpc_cidr_block: "{{ VPC_CIDR }}"
public_cidr_1: "{{ VPC_CLASS_DEFAULT }}.10.0/24"
public_az_1: "{{ vpc_region }}a"
public_cidr_2: "{{ VPC_CLASS_DEFAULT }}.20.0/24"
public_az_2: "{{ vpc_region }}b"
private_cidr_1: "{{ VPC_CLASS_DEFAULT }}.30.0/24"
private_az_1: "{{ vpc_region }}a"
private_cidr_2: "{{ VPC_CLASS_DEFAULT }}.40.0/24"
private_az_2: "{{ vpc_region }}b"

# Please don't change the variables below unless you know what you are doing
#
# Subnet definitions for the VPC
vpc_subnets:
  - cidr: "{{ public_cidr_1 }}"   # Public Subnet-1
    az: "{{ public_az_1 }}"
    resource_tags: { "Name":"{{ vpc_name }}-{{ public_az_1 }}-public-subnet-1", "Type":"Public", "Alias":"Public_Subnet_1" }
  - cidr: "{{ public_cidr_2 }}"   # Public Subnet-2
    az: "{{ public_az_2 }}"
    resource_tags: { "Name":"{{ vpc_name }}-{{ public_az_2 }}-public-subnet-2", "Type":"Public", "Alias":"Public_Subnet_2" }
  - cidr: "{{ private_cidr_1 }}"  # Private Subnet-1
    az: "{{ private_az_1 }}"
    resource_tags: { "Name":"{{ vpc_name }}-{{ private_az_1 }}-private-subnet-1", "Type":"Private", "Alias":"Private_Subnet_1" }
  - cidr: "{{ private_cidr_2 }}"  # Private Subnet-2
    az: "{{ private_az_2 }}"
    resource_tags: { "Name":"{{ vpc_name }}-{{ private_az_2 }}-private-subnet-2", "Type":"Private", "Alias":"Private_Subnet_2" }
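As the first comment says, the upper-case variables are meant to be overridden as extra vars, so the same role can lay out a different VPC without editing this file. For a playbook that applies the role, say site.yml (name is illustrative), that would be: ansible-playbook site.yml -e "VPC_NAME=staging VPC_REGION=us-west-2 VPC_CIDR=172.26.0.0/16 VPC_CLASS_DEFAULT=172.26". The derived subnet CIDRs and the a/b availability-zone suffixes then follow automatically.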
Then the roles/vpc/tasks/main.yml file will look like this:
---
- name: Creating an AWS VPC inside the mentioned region
  ec2_vpc:
    region: "{{ vpc_region }}"
    state: present
    cidr_block: "{{ vpc_cidr_block }}"
    resource_tags: { "Name":"{{ vpc_name }}-vpc", "Environment":"{{ ENVIRONMENT }}" }
    subnets: "{{ vpc_subnets }}"
    internet_gateway: yes
  register: vpc

- name: Tag the Internet Gateway
  ec2_tag:
    resource: "{{ vpc.igw_id }}"
    region: "{{ vpc_region }}"
    state: present
    tags:
      Name: "{{ vpc_name }}-igw"
  register: igw

- name: Set up Public Subnets Route Table
  ec2_vpc_route_table:
    vpc_id: "{{ vpc.vpc_id }}"
    region: "{{ vpc_region }}"
    state: present
    tags:
      Name: "Public-RT-for-{{ vpc_name }}-vpc"
    subnets: "{{ vpc.subnets | get_public_subnets_ids('Type','Public') }}"
    routes:
      - dest: 0.0.0.0/0
        gateway_id: "{{ vpc.igw_id }}"
  register: public_rt
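One caveat: get_public_subnets_ids is not an Ansible built-in; it ships as a filter plugin in that repo. A minimal sketch of how such a plugin could look (hypothetical file name and logic, assuming ec2_vpc returns each subnet with its id and resource_tags):

# filter_plugins/subnet_filters.py -- hypothetical sketch
def get_public_subnets_ids(subnets, tag_key, tag_value):
    """Return the ids of subnets whose resource_tags contain tag_key=tag_value."""
    return [s['id'] for s in subnets
            if s.get('resource_tags', {}).get(tag_key) == tag_value]


class FilterModule(object):
    """Expose the filter to Ansible."""
    def filters(self):
        return {'get_public_subnets_ids': get_public_subnets_ids}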
For a complete reference, take a look at this GitHub repo.
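Note for newer setups: the ec2_vpc module used above was removed from Ansible some time ago. A rough equivalent of the multi-AZ subnet layout on current releases, reusing the vpc_subnets variable from the defaults file above (a sketch using the amazon.aws collection modules), would be:

- name: Create VPC
  amazon.aws.ec2_vpc_net:
    name: "{{ vpc_name }}"
    cidr_block: "{{ vpc_cidr_block }}"
    region: "{{ vpc_region }}"
    state: present
  register: vpc

- name: Create one subnet per availability zone
  amazon.aws.ec2_vpc_subnet:
    vpc_id: "{{ vpc.vpc.id }}"
    cidr: "{{ item.cidr }}"
    az: "{{ item.az }}"
    region: "{{ vpc_region }}"
    tags: "{{ item.resource_tags }}"
    state: present
  loop: "{{ vpc_subnets }}"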
Hope it helps you or others.

Related

Using Ansible playbook to create instances in google cloud (gcp)

I'm using the following code
- name: create a instance
  gcp_compute_instance:
    name: test_object
    machine_type: n1-standard-1
    disks:
      - auto_delete: 'false'
        boot: 'true'
        source: "{{ disk }}"
    metadata:
      startup-script-url:
      cost-center:
      labels:
        environment: production
    network_interfaces:
      - network: "{{ network }}"
        access_configs:
          - name: External NAT
            nat_ip: "{{ address }}"
            type: ONE_TO_ONE_NAT
    zone: us-central1-a
    project: test-12y38912634812648
    auth_kind: serviceaccount
    service_account_file: "~/programming/gcloud/test-1283891264812-8h3981f3.json"
    state: present
and I saved the file as create2.yml.
Then I run ansible-playbook create2.yml and get the following error:
ERROR! 'gcp_compute_instance' is not a valid attribute for a Play
The error appears to be in '/Users/xxx/programming/gcloud-test/create2.yml': line 1, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: create a instance
^ here
I followed the documentation. What am I doing wrong and how do I fix it?
You haven't created a playbook; you've just created a file with a task, which won't run on its own, as you've discovered.
A playbook is a collection of plays, each of which maps tasks onto hosts. You should start with the playbook documentation:
Playbook Documentation
For GCP, here's a working example that creates a network, external IP, disk, and VM.
- name: 'Deploy gcp vm'
  hosts: localhost
  connection: local
  become: false
  gather_facts: no
  vars:
    gcp_project: "671245944514"
    gcp_cred_kind: "serviceaccount"
    gcp_cred_file: "/tmp/test-project.json"
    gcp_region: "us-central1"
    gcp_zone: "us-central1-a"
  # Roles & Tasks
  tasks:
    - name: create a disk
      gcp_compute_disk:
        name: disk-instance
        size_gb: 50
        source_image: projects/ubuntu-os-cloud/global/images/family/ubuntu-2004-lts
        zone: "{{ gcp_zone }}"
        project: "{{ gcp_project }}"
        auth_kind: "{{ gcp_cred_kind }}"
        service_account_file: "{{ gcp_cred_file }}"
        state: present
      register: disk

    - name: create a network
      gcp_compute_network:
        name: network-instance
        project: "{{ gcp_project }}"
        auth_kind: "{{ gcp_cred_kind }}"
        service_account_file: "{{ gcp_cred_file }}"
        state: present
      register: network

    - name: create an address
      gcp_compute_address:
        name: address-instance
        region: "{{ gcp_region }}"
        project: "{{ gcp_project }}"
        auth_kind: "{{ gcp_cred_kind }}"
        service_account_file: "{{ gcp_cred_file }}"
        state: present
      register: address

    - name: create an instance
      gcp_compute_instance:
        name: vm-instance
        project: "{{ gcp_project }}"
        zone: "{{ gcp_zone }}"
        machine_type: n1-standard-1
        disks:
          - auto_delete: 'true'
            boot: 'true'
            source: "{{ disk }}"
        labels:
          environment: testing
        network_interfaces:
          - network: "{{ network }}"
            access_configs:
              - name: External NAT
                nat_ip: "{{ address }}"
                type: ONE_TO_ONE_NAT
        auth_kind: serviceaccount
        service_account_file: "{{ gcp_cred_file }}"
        state: present
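Two details worth noting in that example: the module calls live under tasks: inside a play header (hosts, connection, and so on), which is exactly what the failing create2.yml was missing; and each prerequisite resource is registered (disk, network, address) so its result can be passed straight to gcp_compute_instance as source, network, and nat_ip, a pattern the google.cloud modules support directly. Saved as create2.yml, it runs with the same ansible-playbook create2.yml command as before.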

Not able to Delete AWS VPC using ansible playbook

---
- hosts: localhost
  gather_facts: true
  vars_files:
    - group_vars/delete-vpc.yml
  vars:
    region: ap-south-1
    aws_access_key: "{{ aws_access_key }}"
    aws_secret_key: "{{ aws_secret_key }}"
  tasks:
    - name: delete the vpc
      ec2_vpc_net:
        name: test-vpc
        cidr_block: 10.22.0.0/16
        region: ap-south-1
        aws_access_key: "{{ aws_access_key }}"
        aws_secret_key: "{{ aws_secret_key }}"
        profile: "{{ build_env.profile }}"
        state: absent
        tenancy: dedicated
        purge_cidrs: yes
      register: vpc_delete
The same playbook without the profile and tenancy parameters:
---
- hosts: localhost
  gather_facts: true
  vars_files:
    - group_vars/delete-vpc.yml
  vars:
    region: ap-south-1
    aws_access_key: "{{ aws_access_key }}"
    aws_secret_key: "{{ aws_secret_key }}"
  tasks:
    - name: delete the vpc
      ec2_vpc_net:
        name: test-vpc
        cidr_block: 10.22.0.0/16
        region: ap-south-1
        aws_access_key: "{{ aws_access_key }}"
        aws_secret_key: "{{ aws_secret_key }}"
        state: absent
        purge_cidrs: yes
      register: vpc_delete
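Two things are worth checking with this playbook: ec2_vpc_net identifies the VPC to delete by the exact name and cidr_block pair, so state: absent silently matches nothing if either value differs from the existing VPC; and supplying profile together with explicit aws_access_key/aws_secret_key, as the first variant does, is rejected outright by recent amazon.aws releases.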

Developing AWS infrastructure provisioning ansible playbook using check / dry-run

I am trying to provision AWS infrastructure using Ansible. My simplified playbook vpc.yml, for illustration, is as follows:
- hosts: localhost
  connection: local
  gather_facts: false
  vars:
    vpc_name: "Test VPC"
    vpc_cidr_block: "10.0.0.0/16"
    aws_region: "ap-east-1"
    subnets:
      test_net_1a:
        az: "ap-east-1a"
        cidr: "10.0.1.0/24"
      test_net_1b:
        az: "ap-east-1b"
        cidr: "10.0.2.0/24"
  tasks:
    - name: Create VPC
      ec2_vpc_net:
        name: "{{ vpc_name }}"
        cidr_block: "{{ vpc_cidr_block }}"
        region: "{{ aws_region }}"
        state: "present"
      register: my_vpc

    # Save VPC id in a new variable.
    - name: Set VPC ID in variable
      set_fact:
        vpc_id: "{{ my_vpc.vpc.id }}"

    - name: Create Subnets
      ec2_vpc_subnet:
        state: "present"
        vpc_id: "{{ vpc_id }}"
        cidr: "{{ item.value.cidr }}"
        az: "{{ item.value.az }}"
        region: "{{ aws_region }}"
        resource_tags:
          Name: "{{ item.key }}"
      loop: "{{ subnets | dict2items }}"
Now I try to test my playbook with ansible-playbook vpc.yml --check. However, the playbook fails, because with --check the registered my_vpc only returns:
    "changed": true,
    "failed": false
so the my_vpc.vpc.id lookup in the set_fact task has nothing to read. Apparently --check cannot be used to preview AWS provisioning changes with Ansible, so how do I test my playbook during development without making any actual infrastructure changes?
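One common workaround (a sketch, not taken from this thread): the AWS modules return no resource details in check mode, so any task that consumes a registered result can be skipped under --check via the ansible_check_mode magic variable:

    # Skipped in check mode, where my_vpc carries no vpc key.
    - name: Set VPC ID in variable
      set_fact:
        vpc_id: "{{ my_vpc.vpc.id }}"
      when: not ansible_check_mode

    - name: Create Subnets
      ec2_vpc_subnet:
        state: "present"
        vpc_id: "{{ vpc_id }}"
        cidr: "{{ item.value.cidr }}"
        az: "{{ item.value.az }}"
        region: "{{ aws_region }}"
        resource_tags:
          Name: "{{ item.key }}"
      loop: "{{ subnets | dict2items }}"
      when: not ansible_check_mode

The rest of the playbook still gets syntax-checked and the VPC task still reports what would change; only the tasks that depend on real AWS output are skipped.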

Ansible AWS Route53 URL doesn't resolve but page loads OK via IP

The following Ansible playbook runs fine, with no errors at all, but the URL just doesn't resolve/load afterwards. If I use the public IP created for the instance, the page loads.
---
- name: Provision an EC2 Instance
  hosts: local
  remote_user: ubuntu
  become: yes
  connection: local
  gather_facts: false
  vars:
    instance_type: t2.micro
    security_group: "Web Subnet Security Group"
    image: ami-0c5199d385b432989
    region: us-east-1
    keypair: demo-key
    count: 1
  vars_files:
    - keys.yml
  tasks:
    - name: Create key pair using our own pubkey
      ec2_key:
        ec2_access_key: "{{ ec2_access_key }}"
        ec2_secret_key: "{{ ec2_secret_key }}"
        name: demo-key
        key_material: "{{ lookup('file', '/root/.ssh/id_rsa.pub') }}"
        region: us-east-1
        state: present

    - name: Launch the new EC2 Instance
      ec2:
        ec2_access_key: "{{ ec2_access_key }}"
        ec2_secret_key: "{{ ec2_secret_key }}"
        assign_public_ip: yes
        vpc_subnet_id: subnet-0c799bda2a466f8d4
        group: "{{ security_group }}"
        instance_type: "{{ instance_type }}"
        image: "{{ image }}"
        wait: true
        region: "{{ region }}"
        keypair: "{{ keypair }}"
        count: "{{ count }}"
        state: present
      register: ec2

    - name: Add tag to Instance(s)
      ec2_tag:
        ec2_access_key: "{{ ec2_access_key }}"
        ec2_secret_key: "{{ ec2_secret_key }}"
        resource: "{{ item.id }}"
        region: "{{ region }}"
        state: present
        tags:
          Name: demo-webserver
      with_items: "{{ ec2.instances }}"

    - name: Add the newly created EC2 instance(s) to the local host group (located inside the directory)
      lineinfile:
        path: "./hosts"
        line: "{{ item.public_ip }}"
        insertafter: '\[demo-webserver\]'
        state: present
      with_items: "{{ ec2.instances }}"

    - name: Pause for 2 minutes
      pause:
        minutes: 2

    - name: Write the new ec2 instance host key to known hosts
      connection: local
      shell: "ssh-keyscan -H {{ item.public_ip }} >> ~/.ssh/known_hosts"
      with_items: "{{ ec2.instances }}"

    - name: Waiting for the instance to come up
      local_action:
        module: wait_for
        host: "{{ item.public_ip }}"
        delay: 10
        connect_timeout: 300
        state: started
        port: 22
      with_items: "{{ ec2.instances }}"

    - name: Install packages
      delegate_to: "{{ item.public_ip }}"
      raw: bash -c "test -e /usr/bin/python || (apt -qqy update && apt install -qqy python-minimal && apt install -qqy apache2 && systemctl start apache2 && systemctl enable apache2)"
      with_items: "{{ ec2.instances }}"

    - name: Register new domain
      route53_zone:
        ec2_access_key: "{{ ec2_access_key }}"
        ec2_secret_key: "{{ ec2_secret_key }}"
        zone: ansible-demo-domain.com

    - name: Create new DNS record
      route53:
        ec2_access_key: "{{ ec2_access_key }}"
        ec2_secret_key: "{{ ec2_secret_key }}"
        zone: ansible-demo-domain.com
        record: ansible-demo-domain.com
        type: A
        ttl: 300
        value: "{{ item.public_ip }}"
        state: present
        overwrite: yes
        private_zone: no
        wait: yes
      with_items: "{{ ec2.instances }}"

    - name: Create new DNS record
      route53:
        ec2_access_key: "{{ ec2_access_key }}"
        ec2_secret_key: "{{ ec2_secret_key }}"
        zone: ansible-demo-domain.com
        record: www.ansible-demo-domain.com
        type: CNAME
        ttl: 300
        value: ansible-demo-domain.com
        state: present
        overwrite: yes
        private_zone: no
        wait: yes
I'd appreciate your help pointing out what I'm missing. I usually wait at least 5 minutes before testing the URL, but it really doesn't resolve/load.
Thank you!
2019-03-01 update: here's how the hosted zone looks after provisioning:
[screenshots: hosted zone records after provisioning, and their associated TTLs]
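One cause consistent with these symptoms (an assumption, not a confirmed resolution): route53_zone allocates a fresh set of NS servers when it creates the hosted zone, and records in that zone only resolve publicly once the domain's registrar delegates ansible-demo-domain.com to exactly those name servers. If the playbook created or recreated the zone, the registrar may still point at different name servers, which would explain why the page loads by IP while the URL never resolves.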

Can't figure out why subnet is being updated

I am creating a VPC in AWS using Ansible. The following play is run:
- name: create vpc with multi-az subnets
  ec2_vpc:
    region: "{{ region }}"
    cidr_block: "{{ vpc_cidr_block }}"
    resource_tags: '{"Name":"{{ prefix }}_vpc"}'
    subnets:
      - cidr: "{{ vpc_cidr_subnet_public_0 }}"
        az: "{{ region }}{{ availability_zone_0 }}"
        resource_tags: '{"Name":"{{ prefix }}_subnet_public_0", "Class":"web", "Partner":"{{ prefix }}"}'
      - cidr: "{{ vpc_cidr_subnet_private_0 }}"
        az: "{{ region }}{{ availability_zone_0 }}"
        resource_tags: '{"Name":"{{ prefix }}_subnet_private_0", "Class":"db", "Partner":"{{ prefix }}"}'
      - cidr: "{{ vpc_cidr_subnet_private_1 }}"
        az: "{{ region }}{{ availability_zone_1 }}"
        resource_tags: '{"Name":"{{ prefix }}_subnet_private_1", "Class":"db", "Partner":"{{ prefix }}"}'
    internet_gateway: yes
    route_tables:
      - subnets:
          - "{{ vpc_cidr_subnet_public_0 }}"
        routes:
          - dest: 0.0.0.0/0
            gw: igw
    wait: yes
  register: vpc
The first time around, this creates everything perfectly. The second time around, I expect it to do nothing, as everything has already been created; however, the public subnet is updated to a private one.
Why? What am I doing wrong?
[UPDATE]
Here are the variables:
---
region: eu-west-1
prefix: staging
vpc_environment: staging
vpc_cidr_block: 20.0.0.0/16
vpc_cidr_subnet_public_0: 20.0.0.0/24
vpc_cidr_subnet_private_0: 20.0.1.0/24
vpc_cidr_subnet_private_1: 20.0.2.0/24
availability_zone_0: b
availability_zone_1: c
Also, just to clarify what change is happening: all the resource tags of one subnet (public) are being overwritten with the tags of another subnet (private).
This was caused by a bug in ansible-modules-core in master (ec2_vpc). I have logged a bug and created a PR to resolve the issue. See the PR for details and the actual breakage. Hopefully it gets merged soon!
[UPDATE] Merged