The promote command does not seem to work on the version of Ansible that I am using.
So I am trying to create a new database as a replica of an existing one and, after making it the master, delete the source database.
I was trying to do it like this:
Make replica
Promote replica
Delete source database
But now I am thinking of this:
Create a new database from the source database's last snapshot [as master from the beginning]
Delete the source database
How would that playbook go?
My playbook:
- hosts: localhost
  vars:
    source_db_name: "{{ SOURCE_DB }}" # stagingdb
    new_db_name: "{{ NEW_DB }}"       # stagingdb2
  tasks:
    - name: Make RDS replica
      local_action:
        module: rds
        region: us-east-1
        command: replicate
        instance_name: "{{ new_db_name }}"
        source_instance: "{{ source_db_name }}"
        wait: yes
        wait_timeout: 900 # wait 15 minutes

    # Notice - not working [Ansible bug]
    - name: Promote RDS replica
      local_action:
        module: rds
        region: us-east-1
        command: promote
        instance_name: "{{ new_db_name }}" # stagingdb2
        backup_retention: 0
        wait: yes
        wait_timeout: 300

    - name: Delete source db
      local_action:
        module: rds
        command: delete
        instance_name: "{{ source_db_name }}"
        region: us-east-1
You just need to use the restore command in the RDS module.
Your playbook would then look something like:
- hosts: localhost
  connection: local
  gather_facts: yes
  vars:
    date: "{{ ansible_date_time.year }}-{{ ansible_date_time.month }}-{{ ansible_date_time.day }}-{{ ansible_date_time.hour }}-{{ ansible_date_time.minute }}"
    source_db_name: "{{ SOURCE_DB }}" # stagingdb
    new_db_name: "{{ NEW_DB }}"       # stagingdb2
    snapshot_name: "snapshot-{{ source_db_name }}--{{ date }}"
  tasks:
    - name: Take RDS snapshot
      rds:
        command: snapshot
        instance_name: "{{ source_db_name }}"
        snapshot: "{{ snapshot_name }}"
        wait: yes
      register: snapshot_out

    - name: Get facts
      rds:
        command: facts
        instance_name: "{{ source_db_name }}"
      register: db_facts

    - name: Restore RDS from snapshot
      rds:
        command: restore
        instance_name: "{{ new_db_name }}"
        snapshot: "{{ snapshot_name }}"
        instance_type: "{{ db_facts.instance.instance_type }}"
        subnet: primary # Unfortunately this isn't returned by db_facts
        wait: yes
        wait_timeout: 1200

    - name: Delete source db
      rds:
        command: delete
        instance_name: "{{ source_db_name }}"
There are a couple of extra tricks in there:
I set connection to local at the start of the play so that, combined with hosts: localhost, all of the tasks run locally.
I build a date-time stamp that looks like YYYY-mm-dd-hh-mm from the Ansible control host's own facts (hence gather_facts, with the play only targeting localhost). This is used in the snapshot name to make sure we actually create a fresh snapshot: if one already exists with the same name, Ansible won't create another, which could be bad in this case because an older snapshot would be restored before your source database is deleted. With the vars above, the name comes out like snapshot-stagingdb--2015-10-14-09-30.
I fetch the facts about the RDS instance in a task and use them to set the new instance's type to match the source database. If you don't want that, you can define instance_type directly and remove the whole get facts task, as sketched below.
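In that case the restore task might look something like this; db.t2.micro is just a placeholder for whatever instance class you actually want:

- name: Restore RDS from snapshot
  rds:
    command: restore
    instance_name: "{{ new_db_name }}"
    snapshot: "{{ snapshot_name }}"
    instance_type: db.t2.micro # placeholder class, not from the original answer
    subnet: primary
    wait: yes
    wait_timeout: 1200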
I am trying to create an AMI from an EC2 instance. However, before doing so I would like to check whether an AMI with the same name exists. If it does, I would like to deregister it before attempting to create the AMI with the given name.
Issue 1: How do I run the AMI deregistration ONLY if the AMI already exists?
Issue 2: Once the deregister call has been made, how do I wait before creating the AMI with the same name?
Here is what I have so far:
- name: Check if AMI with the same name exists
  ec2_ami_find:
    name: "{{ ami_name }}"
  register: ami_find

- name: Deregister AMI if it exists
  ec2_ami:
    image_id: "{{ ami_find.results[0].ami_id }}"
    state: absent
  when: ami_find.results[0].state == 'available'

- pause:
    minutes: 5

- name: Create the AMI from the instance
  ec2_ami:
    instance_id: "{{ item.id }}"
    wait: yes
    name: "{{ ami_name }}"
  delegate_to: 127.0.0.1
  with_items: "{{ ec2.instances }}"
  register: image
EDIT:
I am able to deregister the AMI when the state is 'available' and wait a few minutes before attempting to create the new AMI (which has the same name). However, sometimes I get the following response, in which case I would like to continue with creating the AMI:
TASK [createAMI : Check if AMI with the same name exists] **********************
ok: [local] => {"changed": false, "results": []}
First check that the result is not empty, and then check the state:
when: ami_find.results | length and ami_find.results[0].state == 'available'
Thanks to the comment above, I added the following to the deregister task and managed to deal with the empty response:
- name: Check if AMI with the same name exists
  ec2_ami_find:
    name: "{{ ami_name }}"
  register: ami_find

- name: Deregister AMI if it exists
  ec2_ami:
    image_id: "{{ ami_find.results[0].ami_id }}"
    state: absent
  when: ami_find.results | length and ami_find.results[0].state == 'available'
As the title states, I am trying to prevent an instance from selecting the same AZ twice in a row. My current role is set up to rotate based on available IPs. It works fine, but when I build multiple servers it keeps going to the same AZ. I need to find a way to prevent it from selecting the same AZ twice in a row.
This is a role that is called during my overall server build:
# Gather VPC and subnet facts to determine where to build the server
- name: gather subnet facts
  ec2_vpc_subnet_facts:
    aws_access_key: "{{ aws_access_key }}"
    aws_secret_key: "{{ aws_secret_key }}"
    aws_region: "{{ aws_region }}"
  register: ec2_subnets

- name: initialize build subnets
  set_fact:
    build_subnet: ""
    free_ips: 0
    previous_subnet_az: ""

- name: pick usable ec2_subnets
  set_fact:
    # item.id sets the current build's subnet id
    build_subnet: "{{ item.id }}"
    free_ips: "{{ item.available_ip_address_count|int }}"
    # Just for debugging and does not work
    previous_subnet_az: "{{ item.availability_zone }}"
  when: (item.available_ip_address_count|int > free_ips|int) and ("ansible_subnet" in item.tags) and (previous_subnet_az|string != item.availability_zone|string)
  # Each subnet in the list
  with_items: "{{ ec2_subnets.subnets }}"
  register: build_subnets

- debug: var=build_subnets var=build_subnet var=previous_subnet_az var=selected_subnet
I created a play to set the previous subnet fact when it was null, then added a basic conditional that sets that fact once the first iteration finishes. It's now solved, thanks everyone.
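For reference, a minimal sketch of that approach might look like the following; the exact facts and conditions are assumptions on my part, since the final role wasn't posted:

# Hypothetical sketch: seed the fact once, then skip any subnet in the AZ used last time
- name: initialize previous subnet az when not set
  set_fact:
    previous_subnet_az: ""
  when: previous_subnet_az is not defined

- name: pick a subnet in a different az than the previous build
  set_fact:
    build_subnet: "{{ item.id }}"
    previous_subnet_az: "{{ item.availability_zone }}"
  when: item.availability_zone != previous_subnet_az
  with_items: "{{ ec2_subnets.subnets }}"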
I am trying to learn Ansible for all my AWS work. The first thing I want to do is create a basic EC2 instance with mounted volumes.
I wrote the playbook according to the Ansible docs, but it doesn't really work. My playbook:
# The play operates on the local (Ansible control) machine.
- name: Create a basic EC2 instance v.1.1.0 2015-10-14
  hosts: localhost
  connection: local
  gather_facts: false

  # Vars.
  vars:
    hostname: Test_By_Ansible
    keypair: MyKey
    instance_type: t2.micro
    security_group: my security group
    image: ami-d05e75b8 # Ubuntu Server 14.04 LTS (HVM)
    region: us-east-1   # US East (N. Virginia)
    vpc_subnet_id: subnet-b387e763
    sudo: True
    locale: ru_RU.UTF-8

  # Launch instance. Register the output.
  tasks:
    - name: Launch instance
      ec2:
        key_name: "{{ keypair }}"
        group: "{{ security_group }}"
        instance_type: "{{ instance_type }}"
        image: "{{ image }}"
        region: "{{ region }}"
        vpc_subnet_id: "{{ vpc_subnet_id }}"
        assign_public_ip: yes
        wait: true
        wait_timeout: 500
        count: 1 # number of instances to launch
        instance_tags:
          Name: "{{ hostname }}"
          os: Ubuntu
          type: WebService
      register: ec2

    # Create and attach a volumes.
    - name: Create and attach a volumes
      ec2_vol:
        instance: "{{ item.id }}"
        name: my_existing_volume_Name_tag
        volume_size: 1 # in GB
        volume_type: gp2
        device_name: /dev/sdf
        with_items: ec2.instances
      register: ec2_vol

    # Configure mount points.
    - name: Configure mount points - mount device by name
      mount: name=/system src=/dev/sda1 fstype=ext4 opts='defaults nofail 0 2' state=present
      mount: name=/data src=/dev/xvdf fstype=ext4 opts='defaults nofail 0 2' state=present
But this playbook crashes on the volume mount with this error:
fatal: [localhost] => One or more undefined variables: 'item' is undefined
How can I resolve this?
You seem to have copy/pasted a lot of stuff all at once, and rather than needing a specific bit of information that SO can help you with, you need to go off and learn the basics of Ansible so you can think through all the individual bits that don't match up in this playbook.
Let's look at the specific error that you're hitting - item is undefined. It's triggered here:
# Create and attach a volumes.
- name: Create and attach a volumes
  ec2_vol:
    instance: "{{ item.id }}"
    name: my_existing_volume_Name_tag
    volume_size: 1 # in GB
    volume_type: gp2
    device_name: /dev/sdf
    with_items: ec2.instances
  register: ec2_vol
This task is meant to loop through every item in a list, and in this case the list is ec2.instances. It doesn't, because with_items should be de-indented so it sits level with register.
If you had a list of instances (which you don't, as far as I can see), it'd use the id of each one in that {{ item.id }} line... but then probably throw an error, because I don't think they'd all be allowed to have the same name.
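With the loop de-indented to the task level, the task would look like this (the name parameter is dropped here, since as noted every volume would otherwise get the same Name tag):

- name: Create and attach volumes
  ec2_vol:
    instance: "{{ item.id }}"
    volume_size: 1 # in GB
    volume_type: gp2
    device_name: /dev/sdf
  with_items: ec2.instances
  register: ec2_vol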
Go forth and study, and you can figure out this kind of detail.
I am using the ec2.py dynamic inventory for provisioning with Ansible.
I have placed ec2.py in /etc/ansible/hosts and marked it executable.
I also have the ec2.ini file in /etc/ansible/hosts.
[ec2]
regions = us-west-2
regions_exclude = us-gov-west-1,cn-north-1
destination_variable = public_dns_name
vpc_destination_variable = ip_address
route53 = False
all_instances = True
all_rds_instances = False
cache_path = ~/.ansible/tmp
cache_max_age = 0
nested_groups = False
group_by_instance_id = True
group_by_region = True
group_by_availability_zone = True
group_by_ami_id = True
group_by_instance_type = True
group_by_key_pair = True
group_by_vpc_id = True
group_by_security_group = True
group_by_tag_keys = True
group_by_tag_none = True
group_by_route53_names = True
group_by_rds_engine = True
group_by_rds_parameter_group = True
Above is my ec2.ini file, and below is my playbook:
---
- hosts: localhost
  connection: local
  gather_facts: yes
  vars_files:
    - ../group_vars/dev_vpc
    - ../group_vars/dev_sg
    - ../hosts_vars/ec2_info
  vars:
    instance_type: t2.micro
  tasks:
    - name: Provisioning EC2 instance
      local_action:
        module: ec2
        region: "{{ region }}"
        key_name: "{{ key }}"
        instance_type: "{{ instance_type }}"
        image: "{{ ami_id }}"
        wait: yes
        group_id: ["{{ sg_npm }}", "{{ sg_ssh }}"]
        vpc_subnet_id: "{{ PublicSubnet }}"
        source_dest_check: false
        instance_tags: '{"Name": "EC2", "Environment": "Development"}'
      register: ec2

    - name: associate new EIP for the instance
      local_action:
        module: ec2_eip
        region: "{{ region }}"
        instance_id: "{{ item.id }}"
      with_items: ec2.instances

    - name: Waiting for NPM Server to come-up
      local_action:
        module: wait_for
        host: "{{ ec2 }}"
        state: started
        delay: 5
        timeout: 200

- include: ec2-configure.yml
Now the configure script is as follows:
- name: Configure EC2 server
  hosts: tag_Name_EC2
  user: ec2-user
  sudo: True
  gather_facts: True
  tasks:
    - name: Install nodejs related packages
      yum: name={{ item }} enablerepo=epel state=present
      with_items:
        - nodejs
        - npm
However, when the configure script is called, the second play results in no hosts found.
If I execute ec2-configure.yml on its own while the EC2 server is up and running, it is able to find and configure the server.
I added the wait_for to make sure that the instance is in the running state before ec2-configure.yml is called.
Would appreciate it if anyone can point out my error. Thanks.
After researching, I came to know that the dynamic inventory doesn't refresh between playbook calls; it will only refresh if you execute the playbook separately.
However, I was able to resolve the issue by using the add_host module:
- name: Add Server to inventory
  local_action: add_host hostname={{ item.public_ip }} groupname=webserver
  with_items: webserver.instances
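A later play in the same run can then target that in-memory group directly; a sketch, with the user and tasks adjusted to your setup:

- name: Configure newly added servers
  hosts: webserver
  user: ec2-user
  tasks:
    - name: Verify connectivity
      ping: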
With Ansible 2.0+, you can refresh the dynamic inventory in the middle of the playbook with a task like this:
- meta: refresh_inventory
To extend this a bit: if you are having problems with the cache in your playbook, you can use it like this:
- name: Refresh the ec2.py cache
  shell: "./inventory/ec2.py --refresh-cache"
  changed_when: no

- name: Refresh inventory
  meta: refresh_inventory
where ./inventory is the path to your dynamic inventory; please adjust it accordingly.
Hope this will help you.
The Configure EC2 server play can't find any hosts from the EC2 dynamic inventory because the new instance was added in the first play of the playbook, during the same execution. The group tag_Name_EC2 didn't exist in the inventory when the inventory was read, and thus can't be found.
When you run the same playbook again, Configure EC2 server should find the group.
We have used the following workaround to guide users in this kind of situation.
First, provision the instance:
tasks:
  - name: Provisioning EC2 instance
    local_action:
      module: ec2
      ...
    register: ec2
Then add a new play before ec2-configure.yml. The play uses the ec2 variable that was registered in Provisioning EC2 instance, and it will fail and exit the playbook if any instances were launched:
- name: Stop and request a re-run if any instances were launched
  hosts: localhost
  gather_facts: no
  tasks:
    - name: Stop if instances were launched
      fail: msg="Re-run the playbook to load group variables from EC2 dynamic inventory for the just launched instances!"
      when: ec2.changed

- include: ec2-configure.yml
You can also refresh the cache:
ec2.py --refresh-cache
Or, if you're using it as the Ansible hosts file:
/etc/ansible/hosts --refresh-cache
I am trying to use Ansible to create an EC2 instance, configure a web server, and then register it with a load balancer. I have no problem creating the EC2 instance or configuring the web server, but all attempts to register it with an existing load balancer fail with varying errors depending on the code I use.
Has anyone had success in doing this?
Here are the links to the Ansible documentation for the ec2 and ec2_elb modules:
http://docs.ansible.com/ec2_module.html
http://docs.ansible.com/ec2_elb_module.html
Alternatively, if it is not possible to register the EC2 instance with the ELB after creation, I would settle for another play that collects all EC2 instances with a certain name, loops through them, and adds them to the ELB.
Here's what I do that works:
- name: Add machine to elb
  local_action:
    module: ec2_elb
    aws_access_key: "{{ lookup('env', 'AWS_ACCESS_KEY') }}"
    aws_secret_key: "{{ lookup('env', 'AWS_SECRET_KEY') }}"
    region: "{{ ansible_ec2_placement_region }}"
    instance_id: "{{ ansible_ec2_instance_id }}"
    ec2_elbs: "{{ elb_name }}"
    state: present
The biggest issue was the access and secret keys. The ec2_elb module doesn't seem to use the environment variables or read ~/.boto, so I had to pass them in manually.
The ansible_ec2_* variables are available if you use the ec2_facts module. You can, of course, fill in these parameters yourself instead.
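If you go the ec2_facts route, a task like this, run against the target instance before the ELB task, is what populates those variables:

- name: Gather EC2 instance metadata facts
  ec2_facts: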
The playbook below should work for creating the EC2 server and registering it with the ELB. Make sure you have the variables set properly, or hard-code the variable values in the playbook.
- name: Creating webserver
  local_action:
    module: ec2
    region: "{{ region }}"
    key_name: "{{ key }}"
    instance_type: t1.micro
    image: "{{ ami_id }}"
    wait: yes
    assign_public_ip: yes
    group_id: ["{{ sg_webserver }}"]
    vpc_subnet_id: "{{ PublicSubnet }}"
    instance_tags: '{"Name": "webserver", "Environment": "Dev"}'
  register: webserver

- name: Adding Webserver to ELB
  local_action:
    module: ec2_elb
    ec2_elbs: "{{ elb_name }}"
    instance_id: "{{ item.id }}"
    state: 'present'
    region: "{{ region }}"
  with_items: webserver.instances