I want to create two hosted zones: one private and one public. I have already created a VPC. Somehow, when I run my Ansible playbook, it only creates one hosted zone. If the public zone task comes first, it creates only the public zone; if the private zone task comes first, it creates only the private zone. I don't know whether there is a bug in the module or I am doing something wrong.
My code for the hosted zones is here:
---
- name: Create private hosted Zone
  route53_zone:
    zone: "{{ private_hosted_zone_name }}"
    state: present
    vpc_id: "{{ vpc_id }}"
    vpc_region: "{{ vpc_region }}"
  register: private_hosted_zone

- name: Print private zone id
  debug:
    msg: "{{ private_hosted_zone.set.zone_id }}"

- name: Set private zone ID in a variable
  set_fact:
    private_zone_id: "{{ private_hosted_zone.set.zone_id }}"

- name: Create public hosted Zone
  route53_zone:
    zone: "{{ public_hosted_zone_name }}"
    state: present
  register: public_hosted_zone

- name: Print public zone id
  debug:
    msg: "{{ public_hosted_zone.set.zone_id }}"

- name: Set public zone ID in a variable
  set_fact:
    public_zone_id: "{{ public_hosted_zone.set.zone_id }}"
Any help will be highly appreciated.
I managed to resolve this issue by upgrading Ansible from version 2.3 to version 2.4. Hope this will help someone else too.
I think there's an issue with your YAML. I'd run it through yamllint to make sure it's valid and then try again. Notably, your indentation looks inconsistent.
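For reference, here is a minimal .yamllint configuration that will flag inconsistent indentation when you run yamllint against the playbook. These are standard yamllint options, not something taken from the playbook above:

# save as .yamllint next to the playbook, then run: yamllint playbook.yml
extends: default
rules:
  indentation:
    spaces: 2
    indent-sequences: consistent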
For cost reasons, our ASGs in the QA environment run with desired/min/max capacity set to "1". That's not the case for Production, but since we use the same code for QA and Prod deployments (minus a few variables, of course), this is causing problems with the QA automation jobs.
- name: create autoscale groups original_lc
  ec2_asg:
    name: "{{ app_name }}"
    target_group_arns: "{{ alb_target_group_facts.target_groups[0].target_group_arn }}"
    launch_config_name: "{{ launch_config.name }}"
    min_size: 1
    max_size: 1
    desired_capacity: 1
    region: "{{ region }}"
    vpc_zone_identifier: "{{ subnets | join(',') }}"
    health_check_type: "{{ health_check }}"
    replace_all_instances: yes
    wait_for_instances: false
    replace_batch_size: '{{ rollover_size }}'
    lc_check: false
    default_cooldown: "{{ default_cooldown }}"
    health_check_period: "{{ health_check_period }}"
    notification_topic: "{{ redeem_notification_group }}"
    tags:
      - Name: "{{ app_name }}"
      - Application: "{{ tag_Application }}"
      - Product_Owner: "{{ tag_Product_Owner }}"
      - Resource_Owner: "{{ tag_Resource_Owner }}"
      - Role: "{{ tag_Role }}"
      - Service_Category: "{{ tag_Service_Category }}"
  register: asg_original_lc
On the first run, the "ec2_asg" module creates the group properly, with the correct desired/min/max settings.
But when we run the job a second time to update the same ASG, it changes desired/min/max to "2" in AWS. We don't want that. We just want it to rotate out that one instance in the group. Is there a way to achieve that?
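On the "same code for QA and Prod (minus a few variables)" point, a minimal sketch is to template the three capacity settings and override them per environment instead of hard-coding them in the task. The variable names below (asg_min_size and friends) are hypothetical, not taken from the role above:

# group_vars/qa.yml (hypothetical file; Production would define its own, larger values)
asg_min_size: 1
asg_max_size: 1
asg_desired_capacity: 1

# in the ec2_asg task, the literals become variable references
# (all other parameters unchanged from the task above)
- name: create autoscale groups original_lc
  ec2_asg:
    name: "{{ app_name }}"
    min_size: "{{ asg_min_size }}"
    max_size: "{{ asg_max_size }}"
    desired_capacity: "{{ asg_desired_capacity }}"
    region: "{{ region }}"
  register: asg_original_lc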
Today I am trying to assign multiple Elastic IPs to multiple different private IP addresses.
NOTE: the IP addresses are fake; I've written them like this to make it clear what I'd like to do.
This is what I have achieved so far:
Server 1
  Elastic Network Interface
    Private IP 172.x.x.1 (public IP: 55.x.x.1)
    Secondary Private IPs:
      172.x.x.2 (public IP: NONE SET)
      172.x.x.3 (public IP: NONE SET)
Server 2
  Elastic Network Interface
    Private IP 174.x.x.1 (public IP: 57.x.x.1)
    Secondary Private IPs:
      174.x.x.2 (public IP: NONE SET)
      174.x.x.3 (public IP: NONE SET)
This is what I am trying to achieve:
Server 1
  Elastic Network Interface
    Private IP 172.x.x.1 (public IP: 55.x.x.1)
    Secondary Private IPs:
      172.x.x.2 (public IP: 55.x.x.2)
      172.x.x.3 (public IP: 55.x.x.3)
Server 2
  Elastic Network Interface
    Private IP 174.x.x.1 (public IP: 57.x.x.1)
    Secondary Private IPs:
      174.x.x.2 (public IP: 57.x.x.2)
      174.x.x.3 (public IP: 57.x.x.3)
Here's the Ansible playbook I wrote so far:
{{ platform }} is an extra var passed in via the CLI.
- name: Provision a set of Edge instances
  ec2:
    key_name: ec2user
    group: "launch-wizard-1"
    instance_type: "m4.2xlarge"
    image: "ami-xxxxxxx"
    region: "eu-west-1"
    wait: true
    exact_count: 2
    count_tag:
      PlatformName: "{{ platform }}"
      Role: Edge
    instance_tags:
      Name: "{{ platform }}::Edge"
      PlatformName: "{{ platform }}"
      Role: Edge
      LongName: "Edge server for {{ platform }}'s platform"
      Groups: common,server,rabbitmq,memcache,stats,cache-manager,media,content,icecast
  register: edge_ec2

- name: Find ENIs created for Edge instances
  ec2_eni_facts:
    region: "eu-west-1"
    filters:
      attachment.instance-id: "{{ item }}"
  with_items: "{{ edge_ec2.instance_ids }}"
  register: edge_enis

- name: Adds an additional private IP to the Edge ENIs
  ec2_eni:
    region: "eu-west-1"
    eni_id: "{{ item.interfaces[0].id }}"
    subnet_id: "{{ item.interfaces[0].subnet_id }}"
    state: present
    secondary_private_ip_address_count: 2
  register: created_ips
  with_items: "{{ edge_enis.results }}"

- name: Adds additional elastic IPs to the Edge ENIs
  ec2_eip:
    device_id: "{{ item.0.interface.attachment.instance_id }}"
    region: "eu-west-1"
    private_ip_address: "{{ item.1.private_ip_address }}"
    in_vpc: true
  register: eips
  with_subelements:
    - "{{ created_ips.results }}"
    - interface.private_ip_addresses
Why does Ansible assign the newly allocated Elastic IPs only to the primary private IP and not to the secondary private IPs, even though I specifically tell it to use the secondary private IPs?
I don't know why the Ansible ec2_eip module fails to create a new Elastic IP and assign it to the specified secondary private IP address, but I did discover a workaround.
I experienced the same problem: it seemed to me that the private_ip_address option was being ignored. Here's a snippet of my code that didn't work:
- name: try to create an EIP and associate it in one pass
  ec2_eip:
    device_id: "{{ item.network_interfaces[0].network_interface_id }}"
    private_ip_address: "{{ item.network_interfaces[0].private_ip_addresses[1].private_ip_address }}"
    region: ap-southeast-2
    in_vpc: true
  with_items: "{{ servers.results[0].instances }}"
My workaround was to pre-create the elastic IPs and then associate them with the private IPs in a separate task.
- name: "allocate a new elastic IP for each server without associating it with anything"
  ec2_eip:
    state: 'present'
    region: ap-southeast-2
  register: eip2
  with_items: "{{ servers.results[0].instances }}"

- name: Associate elastic IPs with the first interface attached to each instance
  ec2_eip:
    public_ip: "{{ eip2.results[item.0].public_ip }}"
    device_id: "{{ item.1.network_interfaces[0].network_interface_id }}"
    private_ip_address: "{{ item.1.network_interfaces[0].private_ip_addresses[1].private_ip_address }}"
    region: ap-southeast-2
    in_vpc: true
  with_indexed_items: "{{ servers.results[0].instances }}"
My key finding was that if you supply public_ip then the private_ip_address option starts being recognised.
As the title states, I am trying to prevent an instance from selecting the same AZ twice in a row. My current role is set up to rotate based on available IPs. It works fine, but when I run multiple servers it keeps going to the same AZ. I need to find a way to prevent it from selecting the same AZ twice in a row.
This is a role that is called during my overall server build
# gather vpc and subnet facts to determine where to build the server
- name: gather subnet facts
  ec2_vpc_subnet_facts:
    aws_access_key: "{{ aws_access_key }}"
    aws_secret_key: "{{ aws_secret_key }}"
    aws_region: "{{ aws_region }}"
  register: ec2_subnets

- name: initialize build subnets
  set_fact:
    build_subnet: ""
    free_ips: 0
    previous_subnet_az: ""

- name: pick usable ec2_subnets
  set_fact:
    # item.id sets the current build's subnet ID
    build_subnet: "{{ item.id }}"
    free_ips: "{{ item.available_ip_address_count | int }}"
    # just for debugging; this does not work as a "previous AZ" across builds
    previous_subnet_az: "{{ item.availability_zone }}"
  when:
    - item.available_ip_address_count | int > free_ips | int
    - "'ansible_subnet' in item.tags"
    - previous_subnet_az | string != item.availability_zone | string
  # each subnet in the list
  with_items: "{{ ec2_subnets.subnets }}"
  register: build_subnets

- debug: var=build_subnets
- debug: var=build_subnet
- debug: var=previous_subnet_az
- debug: var=selected_subnet
I created a play that sets the previous subnet's AZ when it is null, then added a basic conditional that sets the previous-subnet fact once the first iteration has finished. It's now solved, thanks everyone.
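A rough sketch of that approach, reconstructed from the description above (the task layout and conditions are assumptions, not the poster's actual code):

- name: seed the previous AZ the first time through
  set_fact:
    previous_subnet_az: ""
  when: previous_subnet_az is not defined

- name: pick a usable subnet that is not in the previously used AZ
  set_fact:
    build_subnet: "{{ item.id }}"
    previous_subnet_az: "{{ item.availability_zone }}"
  when:
    - "'ansible_subnet' in item.tags"
    - item.availability_zone != previous_subnet_az
  with_items: "{{ ec2_subnets.subnets }}"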
The promote command does not seem to work on the version of Ansible that I am using.
So I am trying to create a new database as a replica of an existing one and, after making it the master, delete the source database.
I was trying to do it like this:
1. Make the replica
2. Promote the replica
3. Delete the source database
But now I am thinking of this instead:
1. Create a new database from the source database's last snapshot (as master from the beginning)
2. Delete the source database
How would that playbook go?
My playbook:
- hosts: localhost
  vars:
    source_db_name: "{{ SOURCE_DB }}"  # stagingdb
    new_db_name: "{{ NEW_DB }}"        # stagingdb2
  tasks:
    - name: Make RDS replica
      local_action:
        module: rds
        region: us-east-1
        command: replicate
        instance_name: "{{ new_db_name }}"
        source_instance: "{{ source_db_name }}"
        wait: yes
        wait_timeout: 900  # wait 15 minutes

    # Notice - not working [Ansible bug]
    - name: Promote RDS replica
      local_action:
        module: rds
        region: us-east-1
        command: promote
        instance_name: "{{ new_db_name }}"  # stagingdb2
        backup_retention: 0
        wait: yes
        wait_timeout: 300

    - name: Delete source db
      local_action:
        module: rds
        command: delete
        instance_name: "{{ source_db_name }}"
        region: us-east-1
You just need to use the restore command in the RDS module.
Your playbook would then look something like:
- hosts: localhost
  connection: local
  gather_facts: yes
  vars:
    date: "{{ ansible_date_time.year }}-{{ ansible_date_time.month }}-{{ ansible_date_time.day }}-{{ ansible_date_time.hour }}-{{ ansible_date_time.minute }}"
    source_db_name: "{{ SOURCE_DB }}"  # stagingdb
    new_db_name: "{{ NEW_DB }}"        # stagingdb2
    snapshot_name: "snapshot-{{ source_db_name }}--{{ date }}"
  tasks:
    - name: Take RDS snapshot
      rds:
        command: snapshot
        instance_name: "{{ source_db_name }}"
        snapshot: "{{ snapshot_name }}"
        wait: yes
      register: snapshot_out

    - name: Get facts of the source db
      rds:
        command: facts
        instance_name: "{{ source_db_name }}"
      register: db_facts

    - name: Restore RDS from snapshot
      rds:
        command: restore
        instance_name: "{{ new_db_name }}"
        snapshot: "{{ snapshot_name }}"
        instance_type: "{{ db_facts.instance.instance_type }}"
        subnet: primary  # unfortunately this isn't returned by db_facts
        wait: yes
        wait_timeout: 1200

    - name: Delete source db
      rds:
        command: delete
        instance_name: "{{ source_db_name }}"
There are a couple of extra tricks in there:
I set connection: local at the start of the play so that, combined with hosts: localhost, all of the tasks run locally.
I build a date-time stamp of the form YYYY-mm-dd-hh-mm from the Ansible host's own facts (gather_facts is on and the play only targets localhost). This is used in the snapshot name to make sure a fresh snapshot is actually created: if one already existed with the same name, Ansible would not create another, which could be bad here because an older snapshot would then be restored before your source database is deleted.
I fetch the facts about the RDS instance in a task and use them to set the instance type to be the same as the source database. If you don't want that, you can define instance_type directly and remove the whole get-facts task.
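For example, if the instance class is known up front, the get-facts task can be dropped and the restore task written with a fixed value (db.t2.medium below is just a placeholder):

- name: Restore RDS from snapshot with a fixed instance class
  rds:
    command: restore
    instance_name: "{{ new_db_name }}"
    snapshot: "{{ snapshot_name }}"
    instance_type: db.t2.medium  # placeholder; use whatever class you need
    subnet: primary
    wait: yes
    wait_timeout: 1200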
I am trying to use Ansible to create an EC2 instance, configure a web server, and then register it with a load balancer. I have no problem creating the EC2 instance or configuring the web server, but all attempts to register it against an existing load balancer fail with varying errors depending on the code I use.
Has anyone had success in doing this?
Here are the links to the Ansible documentation for the ec2 and ec2_elb modules:
http://docs.ansible.com/ec2_module.html
http://docs.ansible.com/ec2_elb_module.html
Alternatively, if it is not possible to register the EC2 instance against the ELB post creation, I would settle for another 'play' that collects all EC2 instances with a certain name and loops through them, adding them to the ELB.
Here's what I do that works:
- name: Add machine to elb
  local_action:
    module: ec2_elb
    aws_access_key: "{{ lookup('env', 'AWS_ACCESS_KEY') }}"
    aws_secret_key: "{{ lookup('env', 'AWS_SECRET_KEY') }}"
    region: "{{ ansible_ec2_placement_region }}"
    instance_id: "{{ ansible_ec2_instance_id }}"
    ec2_elbs: "{{ elb_name }}"
    state: present
The biggest issue was the access and secret keys. The ec2_elb module doesn't seem to use the environment variables or read ~/.boto, so I had to pass them manually.
The ansible_ec2_* variables are available if you use the ec2_facts module. You can of course fill in these parameters yourself instead.
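Concretely, something like the task below needs to run against the instance before the ec2_elb task so that those facts exist (a minimal sketch):

- name: Gather EC2 metadata facts (populates the ansible_ec2_* variables used above)
  ec2_facts: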
The playbook below should work for creating the EC2 server and registering it with the ELB. Make sure the variables are set properly, or hard-code the values in the playbook.
- name: Creating webserver
  local_action:
    module: ec2
    region: "{{ region }}"
    key_name: "{{ key }}"
    instance_type: t1.micro
    image: "{{ ami_id }}"
    wait: yes
    assign_public_ip: yes
    group_id: ["{{ sg_webserver }}"]
    vpc_subnet_id: "{{ PublicSubnet }}"
    instance_tags: '{"Name": "webserver", "Environment": "Dev"}'
  register: webserver

- name: Adding Webserver to ELB
  local_action:
    module: ec2_elb
    ec2_elbs: "{{ elb_name }}"
    instance_id: "{{ item.id }}"
    state: 'present'
    region: "{{ region }}"
  with_items: "{{ webserver.instances }}"