I am working on an autoscaling Ansible project. Can somebody tell me how I can delete old launch configurations using Ansible playbooks?
Thanks
Some time ago I created a pull request for a new Ansible module that you can use to find old launch configurations and delete them.
For example, if you'd like to keep the 10 most recent launch configurations and remove older ones, you could have tasks like:
---
- name: "Find old Launch Configs"
  ec2_lc_find:
    profile: "{{ boto_profile }}"
    region: "{{ aws_region }}"
    name_regex: ".*nameToFind.*"
    sort: true
    sort_end: -10
  register: old_launch_config

- name: "Remove old Launch Configs"
  ec2_lc:
    profile: "{{ boto_profile }}"
    region: "{{ aws_region }}"
    name: "{{ item.name }}"
    state: absent
  with_items: "{{ old_launch_config.results }}"
  ignore_errors: yes
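For reference, the version of this module that was eventually merged into Ansible (ec2_lc_find, available since 2.2) exposes a slightly different interface (name_regex, sort_order, limit). A minimal sketch of the same idea with that interface, slicing off the 10 newest configs in Jinja (parameter values are illustrative):

- name: "Find all matching Launch Configs, oldest first"
  ec2_lc_find:
    region: "{{ aws_region }}"
    name_regex: ".*nameToFind.*"
    sort_order: ascending
  register: all_lcs

- name: "Remove all but the 10 most recent"
  ec2_lc:
    region: "{{ aws_region }}"
    name: "{{ item.name }}"   # each result carries the config name
    state: absent
  with_items: "{{ all_lcs.results[:-10] }}"
  ignore_errors: yes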
For cost reasons, our ASGs in the QA environment run with desired/min/max capacity set to "1". That's not the case for Production, but since we use the same code for QA and Prod deployments (minus a few variables, of course), this is causing problems with the QA automation jobs.
- name: create autoscale groups original_lc
  ec2_asg:
    name: "{{ app_name }}"
    target_group_arns: "{{ alb_target_group_facts.target_groups[0].target_group_arn }}"
    launch_config_name: "{{ launch_config.name }}"
    min_size: 1
    max_size: 1
    desired_capacity: 1
    region: "{{ region }}"
    vpc_zone_identifier: "{{ subnets | join(',') }}"
    health_check_type: "{{ health_check }}"
    replace_all_instances: yes
    wait_for_instances: false
    replace_batch_size: '{{ rollover_size }}'
    lc_check: false
    default_cooldown: "{{ default_cooldown }}"
    health_check_period: "{{ health_check_period }}"
    notification_topic: "{{ redeem_notification_group }}"
    tags:
      - Name: "{{ app_name }}"
      - Application: "{{ tag_Application }}"
      - Product_Owner: "{{ tag_Product_Owner }}"
      - Resource_Owner: "{{ tag_Resource_Owner }}"
      - Role: "{{ tag_Role }}"
      - Service_Category: "{{ tag_Service_Category }}"
  register: asg_original_lc
On the first run, the "ec2_asg" module creates the group properly, with the correct desired/min/max settings.
But when we run the job a second time to update the same ASG, it changes desired/min/max to "2" in AWS. We don't want that. We just want it to rotate out that one instance in the group. Is there a way to achieve that?
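A hedged observation about why this may happen: with replace_all_instances, ec2_asg temporarily raises the group's capacity to rotate new instances in, then scales back down once the replacements are healthy. With wait_for_instances: false the module returns before that scale-back, so the inflated values can stick. A sketch of the settings that let the module finish the rotation and restore 1/1/1 (the timeout value is illustrative):

- name: create autoscale groups original_lc
  ec2_asg:
    name: "{{ app_name }}"
    launch_config_name: "{{ launch_config.name }}"
    min_size: 1
    max_size: 1
    desired_capacity: 1
    region: "{{ region }}"
    replace_all_instances: yes
    replace_batch_size: 1
    wait_for_instances: true   # wait so the module can scale back down
    wait_timeout: 900          # give the rotation enough time to complete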
I want to run ec2_instance_facts to find an instance by name. However, I must be doing something wrong because I cannot get the filter to actually work. The following returns everything in my set AWS_REGION:
- ec2_instance_facts:
    filters:
      "tag:Name": "{{ myname }}"
  register: ec2_metadata

- debug: msg="{{ ec2_metadata.instances }}"
The answer is to use the ec2_remote_facts module, not the ec2_instance_facts module.
- ec2_remote_facts:
    filters:
      "tag:Name": "{{ myname }}"
  register: ec2_metadata

- debug: msg="{{ ec2_metadata.instances }}"
According to the documentation, ec2_remote_facts is marked as DEPRECATED as of Ansible 2.8, in favor of ec2_instance_facts.
This is working well for me:
- name: Get instances list
  ec2_instance_facts:
    region: "{{ region }}"
    filters:
      "tag:Name": "{{ myname }}"
  register: ec2_list

- debug: msg="{{ ec2_list.instances }}"
Maybe the filter is not being applied? Can you go through the results in the registered object?
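A quick way to inspect what actually came back (a sketch; assumes the register name from the snippets above):

- debug:
    var: ec2_metadata
- debug:
    msg: "{{ ec2_metadata.instances | length }} instances matched the filter"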
I am trying to create an AMI from an EC2 instance. However, before doing so I would like to check whether an AMI with the same name exists. If it does, I would like to deregister it before attempting to create the AMI with the given name.
Issue 1: How do I run the AMI deregistration ONLY if the AMI already exists?
Issue 2: Once the deregister call has been made, how do I wait before creating the AMI with the same name?
Here is what I have so far:
- name: Check if AMI with the same name exists
  ec2_ami_find:
    name: "{{ ami_name }}"
  register: ami_find

- name: Deregister AMI if it exists
  ec2_ami:
    image_id: "{{ ami_find.results[0].ami_id }}"
    state: absent
  when: ami_find.results[0].state == 'available'

- pause:
    minutes: 5

- name: Create the AMI from the instance
  ec2_ami:
    instance_id: "{{ item.id }}"
    wait: yes
    name: "{{ ami_name }}"
  delegate_to: 127.0.0.1
  with_items: "{{ ec2.instances }}"
  register: image
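One detail worth noting: if the name pattern can match more than one AMI, relying on results[0] depends on ordering. ec2_ami_find supports sorting, so a sketch that pins results[0] to the newest matching image (using the module's stock owner/sort options):

- name: Check if AMI with the same name exists
  ec2_ami_find:
    name: "{{ ami_name }}"
    owner: self
    sort: creationDate
    sort_order: descending
  register: ami_find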
EDIT:
I am able to deregister the AMI when its state is 'available' and wait a few minutes before attempting to create the new AMI (which has the same name). However, sometimes I get the following response, in which case I would like to continue with creating the AMI.
TASK [createAMI : Check if AMI with the same name exists] **********************
ok: [local] => {"changed": false, "results": []}
First check if the result is not empty and then check the state.
when: ami_find.results | length and ami_find.results[0].state == 'available'
Thanks to the comment above, I added the following to the Deregister task, which deals with the empty response:
- name: Check if AMI with the same name exists
  ec2_ami_find:
    name: "{{ ami_name }}"
  register: ami_find

- name: Deregister AMI if it exists
  ec2_ami:
    image_id: "{{ ami_find.results[0].ami_id }}"
    state: absent
  when: ami_find.results | length and ami_find.results[0].state == 'available'
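As an alternative to a fixed pause, a sketch that polls until the old AMI is actually gone (this assumes re-running ec2_ami_find under Ansible's until/retries/delay mechanics; the retry counts are illustrative):

- name: Wait for the old AMI to be deregistered
  ec2_ami_find:
    name: "{{ ami_name }}"
    owner: self
  register: ami_gone
  until: ami_gone.results | selectattr('state', 'equalto', 'available') | list | length == 0
  retries: 30
  delay: 10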
I am using the latest Ansible version, 2.0.1.0.
I am trying to create a new AWS autoscaling configuration like this:
- name: Update ASG with new LC and replace all instances
  ec2_asg:
    name: "{{ asg_name }}"
    launch_config_name: "{{ lc_name }}-{{ timestamp }}"
    health_check_period: 300
    health_check_type: ELB
    min_size: 1
    max_size: 5
    desired_capacity: 1
    region: "{{ aws_region }}"
    load_balancers: "{{ lb_name }}"
    vpc_zone_identifier: "{{ vpc_zones }}"
    tags:
      - Name: "{{ asg_name }}"
      - Environment: "{{ stack_env }}"
    replace_all_instances: yes
It creates the autoscaling group, but when I rerun the playbook, the min/max/desired instance counts in the AWS autoscaling group are summed up to 2/5/6. Basically, it adds the configuration already in AWS to the settings in the playbook.
The ec2_asg documentation says it will replace/update the numbers in the config. Did I miss something here?
Thank you!
Update
Because I am using replace_all_instances, Ansible performs a rolling replacement of all running instances in the ASG. But sometimes it times out while waiting for the new instances to come up, and the playbook exits, so the min/max/desired instance counts in the ASG are never set back to the correct numbers.
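Given that root cause, one mitigation (a sketch; the timeout value is illustrative) is to give the rolling replacement more time than the default and rotate one instance at a time, so the module finishes and restores the configured counts:

- name: Update ASG with new LC and replace all instances
  ec2_asg:
    name: "{{ asg_name }}"
    launch_config_name: "{{ lc_name }}-{{ timestamp }}"
    min_size: 1
    max_size: 5
    desired_capacity: 1
    region: "{{ aws_region }}"
    replace_all_instances: yes
    replace_batch_size: 1   # replace one instance at a time
    wait_timeout: 1800      # allow up to 30 minutes for new instances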
The promote command does not seem to work on the version of Ansible that I am using.
So I am trying to create a new database as a replica of an existing one, and after making it the master, delete the source database.
I was trying to do it like this:
1. Make replica
2. Promote replica
3. Delete source database
But now I am thinking of this:
1. Create new database from the source database's last snapshot [as master from the beginning]
2. Delete the source database
How would that playbook go?
My playbook:
- hosts: localhost
  vars:
    source_db_name: "{{ SOURCE_DB }}" # stagingdb
    new_db_name: "{{ NEW_DB }}"       # stagingdb2
  tasks:
    - name: Make RDS replica
      local_action:
        module: rds
        region: us-east-1
        command: replicate
        instance_name: "{{ new_db_name }}"
        source_instance: "{{ source_db_name }}"
        wait: yes
        wait_timeout: 900 # wait 15 minutes

    # Notice - not working [Ansible bug]
    - name: Promote RDS replica
      local_action:
        module: rds
        region: us-east-1
        command: promote
        instance_name: "{{ new_db_name }}" # stagingdb2
        backup_retention: 0
        wait: yes
        wait_timeout: 300

    - name: Delete source db
      local_action:
        module: rds
        command: delete
        instance_name: "{{ source_db_name }}"
        region: us-east-1
You just need to use the restore command in the RDS module.
Your playbook would then look something like:
- hosts: localhost
  connection: local
  gather_facts: yes
  vars:
    date: "{{ ansible_date_time.year }}-{{ ansible_date_time.month }}-{{ ansible_date_time.day }}-{{ ansible_date_time.hour }}-{{ ansible_date_time.minute }}"
    source_db_name: "{{ SOURCE_DB }}" # stagingdb
    new_db_name: "{{ NEW_DB }}"       # stagingdb2
    snapshot_name: "snapshot-{{ source_db_name }}--{{ date }}"
  tasks:
    - name: Take RDS snapshot
      rds:
        command: snapshot
        instance_name: "{{ source_db_name }}"
        snapshot: "{{ snapshot_name }}"
        wait: yes
      register: snapshot_out

    - name: Get facts
      rds:
        command: facts
        instance_name: "{{ source_db_name }}"
      register: db_facts

    - name: Restore RDS from snapshot
      rds:
        command: restore
        instance_name: "{{ new_db_name }}"
        snapshot: "{{ snapshot_name }}"
        instance_type: "{{ db_facts.instance.instance_type }}"
        subnet: primary # Unfortunately this isn't returned by db_facts
        wait: yes
        wait_timeout: 1200

    - name: Delete source db
      rds:
        command: delete
        instance_name: "{{ source_db_name }}"
There are a couple of extra tricks in there:
I set connection: local at the start of the play so that, combined with hosts: localhost, all of the tasks run locally.
I build a date-time stamp that looks like YYYY-mm-dd-hh-mm from the Ansible host's own facts (gather_facts is on and the play only targets localhost). This is used in the snapshot name to make sure a fresh snapshot is actually created: if one already existed with the same name, Ansible would not create another, which could be bad here because an older snapshot would be restored before your source database is deleted.
I fetch the facts about the RDS instance in a task and use them to set the new instance's type to match the source database. If you don't want that, you can define instance_type directly and remove the whole get facts task.
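If you go that direct route, the restore task shrinks to something like this sketch (the instance class shown is just an illustrative value):

- name: Restore RDS from snapshot
  rds:
    command: restore
    instance_name: "{{ new_db_name }}"
    snapshot: "{{ snapshot_name }}"
    instance_type: db.m3.medium # fixed type instead of the db_facts lookup
    subnet: primary
    wait: yes
    wait_timeout: 1200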