Ansible "ec2_asg" module keeps changing the desired/min/max capacity settings - amazon-web-services

For cost reasons, our ASGs in the QA environment run with desired/min/max capacity set to "1". That's not the case for Production, but since we use the same code for QA and Prod deployment (minus a few variables, of course), this is causing problems with the QA automation jobs.
- name: create autoscale groups original_lc
  ec2_asg:
    name: "{{ app_name }}"
    target_group_arns: "{{ alb_target_group_facts.target_groups[0].target_group_arn }}"
    launch_config_name: "{{ launch_config.name }}"
    min_size: 1
    max_size: 1
    desired_capacity: 1
    region: "{{ region }}"
    vpc_zone_identifier: "{{ subnets | join(',') }}"
    health_check_type: "{{ health_check }}"
    replace_all_instances: yes
    wait_for_instances: false
    replace_batch_size: '{{ rollover_size }}'
    lc_check: false
    default_cooldown: "{{ default_cooldown }}"
    health_check_period: "{{ health_check_period }}"
    notification_topic: "{{ redeem_notification_group }}"
    tags:
      - Name: "{{ app_name }}"
      - Application: "{{ tag_Application }}"
      - Product_Owner: "{{ tag_Product_Owner }}"
      - Resource_Owner: "{{ tag_Resource_Owner }}"
      - Role: "{{ tag_Role }}"
      - Service_Category: "{{ tag_Service_Category }}"
  register: asg_original_lc
On the first run, the "ec2_asg" module creates the group properly, with the correct desired/min/max settings.
But when we run the job a second time to update the same ASG, it changes desired/min/max to "2" in AWS. We don't want that. We just want it to rotate out that one instance in the group. Is there a way to achieve that?
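One thing worth checking (a guess, not a confirmed fix): with replace_all_instances: yes, the module performs the rollover by temporarily bumping the group's capacity by replace_batch_size, and with wait_for_instances: false it exits before it can scale the group back down. Letting the module wait may keep the settings at 1. A minimal sketch, assuming the rollover bump is indeed the cause:

- name: create autoscale groups original_lc
  ec2_asg:
    name: "{{ app_name }}"
    min_size: 1
    max_size: 1
    desired_capacity: 1
    replace_all_instances: yes
    replace_batch_size: '{{ rollover_size }}'
    # Hedged change: wait for the rollover to finish so the module can
    # restore the original desired/min/max instead of leaving the bump behind.
    wait_for_instances: true
    # ... remaining parameters as in the original task ...
  register: asg_original_lc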

Related

Check if an EC2 instance exists and is running, then skip deployment

I am trying to run a playbook that calls a role to deploy some EC2 instances. Everything works fine, except that I want to add a condition: when the EC2 instance exists and is in the running state, skip the deployment. I used the following to retrieve the EC2 info:
## Check if an instance with the same name exists on AWS
- name: Get {{ ec2_name }} infos
  ec2_instance_info:
    region: "us-east-1"
    filters:
      "tag:Name": "{{ ec2_name }}"
      instance-state-name: ["running"]
  register: ec2_infos

- name: DEBUG
  debug: msg="{{ aws_ec2_infos }}"
and in the deployment stage my condition is as follows:
- name: "{{ ec2_description }} - {{ ec2_name }}"
cloudformation:
stack_name: "some name "
state: "present"
region: "{{ aws_region }}"
template: "PATH/ec2.json"
template_parameters:
Name: "{{ ec2_name }}"
Description: "{{ ec2_description }}"
KeyName: "{{key_name }}"
KmsKeyId: "{{ key_id }}"
GroupSet: "{{ id }}"
IamInstanceProfile: "{{ name }}"
Type: "OS"
**when: ec2_infos.state[0].name != 'running'**
but I get an error that says :
"msg": "The conditional check 'aws_ec2_infos.state[0].name != 'running'' failed. The error was: error while evaluating conditional (aws_ec2_infos.state[0].name != 'running'): 'dict object' has no attribute
I think I am missing something in my condition, but I can't find what exactly. Any tip or advice is more than welcome.
As @benoit and @mdaniel said, the error was in my understanding; the condition should be:
aws_ec2_infos.instances[0].state.name != 'running'
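Putting it together, a sketch based on the question's variables (note that the register name has to match what the condition references, and checking the list length first avoids an index error when no running instance exists):

- name: Get {{ ec2_name }} infos
  ec2_instance_info:
    region: "us-east-1"
    filters:
      "tag:Name": "{{ ec2_name }}"
      instance-state-name: ["running"]
  register: aws_ec2_infos

- name: "{{ ec2_description }} - {{ ec2_name }}"
  cloudformation:
    stack_name: "some name"
    state: "present"
    region: "{{ aws_region }}"
    template: "PATH/ec2.json"
  # Deploy only when no instance with that Name tag is running;
  # instances[0] would fail on an empty list, so test the length instead.
  when: aws_ec2_infos.instances | length == 0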

Is it possible to loop over two different lists in the same playbook (Ansible)?

I'm writing an Ansible playbook and I want to loop over two different lists.
I know that I can use with_items to loop over a list, but can I use with_items twice in the same playbook?
Here is what I want to do:
- name: Deploy the network in fabric 1 and fabric 2
  tags: [merged]
  role_network:
    config:
      - net_name: "{{ networkName }}"
        vrf_name: "{{ vrf }}"
        net_id: 30010
        net_template: "{{ networkTemplate }}"
        net_extension_template: "{{ networkExtensionTemplate }}"
        vlan_id: "{{ vlan }}"
        gw_ip_subnet: "{{ gw }}"
        attach: "{{ item }}"
        deploy: false
    fabric: "{{ item }}"
    state: merged
  with_items:
    - "{{ attachs }}"
    - "{{ fabric }}"
  register: networks
So for the first call, I want to use the playbook with fabric[0] and attachs[0].
For the second call, I want to use the playbook with fabric[1] and attachs[1].
And so on...
Can someone help me please?
What you are looking to achieve is what with_together used to do, and it is now recommended to do it with the zip filter.
So: loop: "{{ attachs | zip(fabric) | list }}",
where the element of the first list (attachs) is item.0 and the element of the second list (fabric) is item.1:
- name: Deploy the network in fabric 1 and fabric 2
  tags: [merged]
  role_network:
    config:
      - net_name: "{{ networkName }}"
        vrf_name: "{{ vrf }}"
        net_id: 30010
        net_template: "{{ networkTemplate }}"
        net_extension_template: "{{ networkExtensionTemplate }}"
        vlan_id: "{{ vlan }}"
        gw_ip_subnet: "{{ gw }}"
        attach: "{{ item.0 }}"
        deploy: false
    fabric: "{{ item.1 }}"
    state: merged
  loop: "{{ attachs | zip(fabric) | list }}"
  register: networks
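To illustrate with hypothetical data, the zip filter pairs the two lists element by element:

# Hypothetical example data
attachs:
  - attach-a
  - attach-b
fabric:
  - fabric-1
  - fabric-2

# "{{ attachs | zip(fabric) | list }}" yields
#   [['attach-a', 'fabric-1'], ['attach-b', 'fabric-2']]
# so the first iteration runs with attachs[0]/fabric[0],
# the second with attachs[1]/fabric[1], and so on.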

Ansible role to rotate subnet Availability Zones across multiple instance builds on AWS

As the title states, I am trying to prevent an instance from selecting the same AZ twice in a row. My current role is set up to rotate based on available IPs. It works fine, but when I build multiple servers it keeps going to the same AZ. I need to find a way to prevent it from selecting the same AZ twice in a row.
This is a role that is called during my overall server build
# gather vpc and subnet facts to determine where to build the server
- name: gather subnet facts
  ec2_vpc_subnet_facts:
    aws_access_key: "{{ aws_access_key }}"
    aws_secret_key: "{{ aws_secret_key }}"
    aws_region: "{{ aws_region }}"
  register: ec2_subnets

- name: initialize build subnets
  set_fact:
    build_subnet: ""
    free_ips: 0
    previous_subnet_az: ""

- name: pick usable ec2_subnets
  set_fact:
    # item.id sets the current build's subnet id
    build_subnet: "{{ item.id }}"
    free_ips: "{{ item.available_ip_address_count|int }}"
    # Just for debugging and does not work
    previous_subnet_az: " {{ item.availability_zone }}"
  when: ("{{ item.available_ip_address_count|int }}" > free_ips) and ("ansible_subnet" in "{{ item.tags }}") and ("{{ previous_subnet_az|string }}" != "{{ item.availability_zone|string }}")
  # Each subnet in the list
  with_items: "{{ ec2_subnets.subnets }}"
  register: build_subnets

- debug: var=build_subnets var=build_subnet var=previous_subnet_az var=selected_subnet
I created a play to set the previous subnet AZ when it is null, then added a basic conditional that sets the fact for the previous subnet once the first iteration finishes. It's now solved, thanks everyone.
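Since the answer only describes the fix, here is a rough sketch of what it could look like (a paraphrase with names taken from the question, not the author's exact code; note the conditions are written as bare expressions rather than templated strings):

- name: initialize previous subnet AZ when null
  set_fact:
    previous_subnet_az: ""
  when: previous_subnet_az is not defined

- name: pick usable subnets, never repeating the previous AZ
  set_fact:
    build_subnet: "{{ item.id }}"
    free_ips: "{{ item.available_ip_address_count | int }}"
    # remember the AZ we just picked so the next iteration skips it
    previous_subnet_az: "{{ item.availability_zone }}"
  when:
    - item.available_ip_address_count | int > free_ips | int
    - "'ansible_subnet' in item.tags"
    - previous_subnet_az != item.availability_zone
  with_items: "{{ ec2_subnets.subnets }}"
  register: build_subnets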

Ansible running one command out of many locally in a loop

Background:
I'm trying to create a loop that iterates over a hash read from a qa.yml file. For every user in the list, it should find a file on the local server (the public key); once the file is found, it creates the user on the remote machine and copies the public key to authorized_keys on the remote machine.
I'm trying to implement it iteratively, so that to update keys or add more users' keys I only need to change the .yml list and place the public key file in the proper place. However, I can't get local_action + find working.
---
- hosts: tag_Ansible_CLOUD_QA
  vars_files:
    - ../users/qa.yml
    - ../users/groups.yml
  remote_user: ec2-user
  sudo: yes
  tasks:
    - name: Create groups
      group: name="{{ item.key }}" state=present
      with_dict: "{{ user_groups }}"
    - name: Create remote users QA
      user: name="{{ item.key }}" comment="user" group=users groups="qa"
      with_dict: "{{ qa_users }}"
    - name: Erase previous authorized keys QA
      shell: rm -rf /home/"{{ item.key }}"/.ssh/authorized_keys
      with_dict: "{{ qa_users }}"
    - name: Add public keys to remote users QA
      local_action:
        find: paths="{{'/opt/pubkeys/2016/q2/'}}" patterns="{{ item.key }}"
        register: result
        authorized_key: user="{{ item.key }}" key="{{ lookup('file', result) }}"
      with_dict: "{{ qa_users }}"
Hash:
qa_users:
  user1:
    name: User 1
  user2:
    name: User 2
You're cramming two tasks into a single task item in that final task, so Ansible isn't going to like that.
Splitting it into two separate tasks should work:
- name: Find keys
  local_action: find paths="{{'/opt/pubkeys/2016/q2/'}}" patterns="{{ item.key }}"
  register: result
  with_dict: "{{ qa_users }}"

- name: Add public keys to remote users QA
  authorized_key: user="{{ item.0.key }}" key="{{ lookup('file', item.1.stdout) }}"
  with_together:
    - "{{ qa_users }}"
    - result
The second task then loops over the dictionary and the result from the previous task using a with_together loop, which advances through the two data structures in step.
However, this looks like a less than ideal way to solve your problem.
If you look at what your tasks here are trying to do you could replace it more simply with something like this:
- name: Add public keys to remote users QA
  authorized_key: user="{{ item.key }}" key="{{ lookup('file', '/opt/pubkeys/2016/q2/' + item.key) }}"
  with_dict: "{{ qa_users }}"
You can also remove the third task, where you cleared down the user's previous keys, by simply using the exclusive parameter of the authorized_key module:
- name: Add public keys to remote users QA
  authorized_key: user="{{ item.key }}" key="{{ lookup('file', '/opt/pubkeys/2016/q2/' + item.key) }}" exclusive=yes
  with_dict: "{{ qa_users }}"
Also, it might be that you simplified things in an odd way for the question, but the data structures you are using are less than ideal right now, so I'd take a look at that if that's really what they look like.
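For example, if the nested name field isn't used anywhere, a plain list keeps the loop simpler (a sketch, assuming the keys only ever serve as usernames):

qa_users:
  - user1
  - user2

- name: Add public keys to remote users QA
  authorized_key: user="{{ item }}" key="{{ lookup('file', '/opt/pubkeys/2016/q2/' + item) }}" exclusive=yes
  with_items: "{{ qa_users }}"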
Thank you @ydaetskcoR for sharing the right approach. The following solution did the dynamic public key distribution for me, with files residing on the local machine and provisioned onto remote EC2 machines:
---
- hosts: tag_Ansible_CLOUD_QA
  vars_files:
    - ../users/groups.yml
    - ../users/qa.yml
  remote_user: ec2-user
  become: yes
  become_method: sudo
  tasks:
    - name: Find user matching key files
      become_user: jenkins
      local_action: find paths="{{'/opt/pubkeys/2016/q1/'}}" patterns="{{ '*' + item.key + '*' }}"
      register: pub_key_files
      with_dict: "{{ qa_users }}"
    - name: Create groups
      group: name="{{ item.key }}" state=present
      with_dict: "{{ user_groups }}"
    - name: Allow test users to have passwordless sudo
      lineinfile: "dest=/etc/sudoers state=present regexp='^%{{ item.key }} ALL=.*ALL.* NOPASSWD: ALL' line='%{{ item.key }} ALL=(ALL) NOPASSWD: ALL'"
      with_dict: "{{ user_groups }}"
    - name: Create remote users qa
      user: name="{{ item.key }}" comment="user" group=users groups="qa"
      with_dict: "{{ qa_users }}"
    - name: Add public keys to remote users qa
      # debug: "msg={{ 'User:' + item.item.key + ' KeyFile:' + item.files.0.path }}"
      authorized_key: user="{{ item.item.key }}" key="{{ lookup('file', item.files.0.path) }}" exclusive=yes
      with_items: "{{ pub_key_files.results }}"
This is the command line to get a dynamic inventory based on EC2 tags:
ansible-playbook -i inventory/ec2.py --private-key <path to your key file> --extra-vars '{"QUATER":"q1"}' credentials/distribute-public-key.yml

How to add a disk to a vCenter guest using Ansible

I'm attempting to add a second disk to a VMware vCenter guest.
Here is what I have:
- name: "Modifying ..."
local_action:
module: vsphere_guest
vcenter_hostname: "{{ vcenter.hostname }}"
username: "{{ vcenter_user[datacenter]['username'] }}"
password: "{{ vcenter_user[datacenter]['password'] }}"
guest: "{{ inventory_hostname }}"
# Looky looky heeya ...#
state: reconfigured
########################
vm_extra_config:
vcpu.hotadd: yes
mem.hotadd: yes
notes: "{{ datacenter }} {{ purpose |replace('_',' ') }}"
vm_disk:
disk1:
size_gb: 50
type: thin
datastore: "{{ vcenter.datastore }}"
disk2:
size_gb: 200
type: thin
datastore: "{{ vcenter.datastore }}"
vm_hardware:
memory_mb: "{{ vm.memory|int }}"
num_cpus: "{{ vm.cpus|int }}"
osid: "{{ os.id }}"
esxi:
datacenter: "{{ esxi.datacenter }}"
hostname: "{{ esxi.hostname }}"
vCenter sees the reconfigure and there are no errors displayed.
There are also no errors on the console when I run the playbook.
It simply does not add the second disk.
So is there a way to add the disk, or will I have to write a Python script to do it?
Thanks.
The function def reconfigure_vm in the vsphere_guest module only includes code for changing the RAM and the CPU; I don't see any code for changing other hardware. Adding disks is only possible while creating a new VM at the moment.
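Later Ansible releases gained a dedicated module for this. As a sketch, assuming the community.vmware collection is installed and reusing the variables from the question, adding a disk to an existing VM could look like:

- name: Add a second disk to an existing guest
  community.vmware.vmware_guest_disk:
    hostname: "{{ vcenter.hostname }}"
    username: "{{ vcenter_user[datacenter]['username'] }}"
    password: "{{ vcenter_user[datacenter]['password'] }}"
    datacenter: "{{ esxi.datacenter }}"
    name: "{{ inventory_hostname }}"
    disk:
      - size_gb: 200
        type: thin
        datastore: "{{ vcenter.datastore }}"
        # controller/unit numbers are assumptions; pick a free slot
        scsi_controller: 0
        unit_number: 1
        state: present
  delegate_to: localhost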