If conditions for parameters in Ansible - if-statement

This is a task that adds a cron job to the crontab:
- name: Add job triggering logs rotation for clusters.
  cron:
    cron_file: '/etc/crontab'
    user: 'root'
    name: 'logrotate'
    minute: '*'
    hour: '*/4'
    job: '/etc/cron.daily/logrotate'
    state: present
What I want to accomplish is to set minute: */5 and hour: * if dev is in inventory_hostname, and otherwise set minute: 0 and hour: */4.
Is there any way to do this by adding conditions to the minute and hour parameters?
I know I could deal with this using a template, but can I just add conditions to these two parameters in the task above?

This is one way to achieve this with the ternary filter:
- name: Add job triggering logs rotation for clusters.
  vars:
    is_dev: "{{ 'dev' in inventory_hostname }}"
  cron:
    cron_file: '/etc/crontab'
    user: 'root'
    name: 'logrotate'
    minute: "{{ is_dev | ternary('*/5', '0') }}"
    hour: "{{ is_dev | ternary('*', '*/4') }}"
    job: '/etc/cron.daily/logrotate'
    state: present
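If you prefer to keep the two schedule values together, an equivalent variant uses an inline if on a small dict. This is just a sketch; the variable name schedule is illustrative, not part of the original answer:
- name: Add job triggering logs rotation for clusters.
  vars:
    schedule: "{{ {'minute': '*/5', 'hour': '*'} if 'dev' in inventory_hostname else {'minute': '0', 'hour': '*/4'} }}"
  cron:
    cron_file: '/etc/crontab'
    user: 'root'
    name: 'logrotate'
    minute: "{{ schedule.minute }}"
    hour: "{{ schedule.hour }}"
    job: '/etc/cron.daily/logrotate'
    state: present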

Related

How to create Name tags with sequential numbers using an Ansible script

I create 2 Windows AWS machines using an exact_count tag of 2.
It creates both AWS machines with the same name.
For example:
1) itg-Web-windows
2) itg-web-windows
I want to create the machine Name via instance_tags as:
1) itg-windows-web-1
2) itg-windows-web-2
Below is my code:
- name: ensure instances are running
  ec2:
    region: "{{ region }}"
    image: "{{ image_id }}"
    group_id: sg-1234
    vpc_subnet_id: subnet-5678
    instance_tags:
      Name: "itg-windows-web"
    exact_count: 2
    count_tag:
      Name: "itg-windows-web"
  register: ec2_result
This will create servers with name tags web_server_1, web_server_3 and web_server_5:
- name: create instances
  ec2:
    image: <your_ami>
    instance_type: t2.micro
    key_name: <your_ssh_key>
    region: us-east-1
    vpc_subnet_id: <your_subnet_id>
    count_tag:
      Name: "web_server_{{ item }}"
    exact_count: 1
    instance_tags:
      Name: "web_server_{{ item }}"
  with_items: ['1', '3', '5']
Use the Ansible playbook below:
---
- name: A sample template
  hosts: local
  connection: local
  gather_facts: False
  tasks:
    - name: create instance
      ec2:
        keypair: test-ssh-key
        instance_type: t2.micro
        image: ami-abcd1234
        wait: yes
        instance_tags:
          ec2type: web
        exact_count: "{{ count }}"
        count_tag:
          ec2type: web
        region: us-east-1
        vpc_subnet_id: subnet-1234abcd
      register: ec2
    - name: generate sequence id for tagging
      debug: msg="{{ item }}"
      no_log: true
      with_sequence: start="{{ startindex }}" end="{{ count }}" format=%02d
      register: sequence
    - name: tag instances
      no_log: true
      ec2_tag:
        region: us-east-1
        resource: "{{ item.0.id }}"
        tags:
          Name: "itg-windows-web-{{ item.1.msg }}"
      with_together:
        - "{{ ec2.instances }}"
        - "{{ sequence.results }}"
Invocation 1:
ansible-playbook -i ./hosts ec2-basic.yml --extra-vars "startindex=1 count=2"
This will create 2 instances and attach the Name tags itg-windows-web-01 and itg-windows-web-02 to them.
Invocation 2:
ansible-playbook -i ./hosts ec2-basic.yml --extra-vars "startindex=3 count=4"
This will add 2 more instances and attach the Name tags itg-windows-web-03 and itg-windows-web-04 to them.
All these instances are grouped by ec2type tag.
How it works:
Use a custom tag other than the Name tag for the count_tag attribute. If you use the Name tag, then the same tag value is assigned to all the instances that are created (which defeats your purpose). In the above script, I have used ec2type: web as my instance_tags and count_tag, so Ansible will use this tag to determine how many nodes should run based on that specific tag criterion.
The count value which you pass is assigned to exact_count in the template. You can also gain further control by passing startindex, which controls the start of the sequence.
with_sequence generates a sequence based on your input.
with_together loops over parallel sets of data.
Using the above Ansible loops, we append 01, 02, and so on to the itg-windows-web text and set it as the instance Name tag.
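To see the pairing that with_together produces, here is a minimal standalone illustration (the instance IDs and suffixes are made-up stand-ins for ec2.instances and sequence.results):
- name: show how instances pair with sequence numbers
  debug:
    msg: "instance {{ item.0 }} gets Name suffix {{ item.1 }}"
  with_together:
    - ['i-0aaa', 'i-0bbb']
    - ['01', '02']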

How to regex replace nested values in Ansible

This question is about looping in Ansible, not about AWS, but for the sake of clarity I will use an AWS deployment problem as an example.
For our deployment scripts I am trying to loop over some clusters in the Amazon EC2 Container Service. What I will ultimately do is restart each service on the cluster. I am able to restart a service, given its name. However, I need the simple name, not the fully qualified ARN. So I look up the services per cluster and get something like this:
results:
- _ansible_item_result: true
  _ansible_no_log: false
  _ansible_parsed: true
  ansible_facts:
    services:
    - arn:aws:ecs:eu-central-1:55:service/test-services
  changed: false
  failed: false
  invocation:
    module_args:
      aws_access_key: null
      aws_secret_key: null
      cluster: services
      details: false
      ec2_url: null
      profile: null
      region: null
      security_token: null
      service: null
      validate_certs: true
  item: services
- _ansible_item_result: true
  _ansible_no_log: false
  _ansible_parsed: true
  ansible_facts:
    services:
    - arn:aws:ecs:eu-central-1:55:service/test-service
    - arn:aws:ecs:eu-central-1:55:service/frontend
    - arn:aws:ecs:eu-central-1:55:service/beats
  changed: false
  failed: false
  invocation:
    module_args:
      aws_access_key: null
      aws_secret_key: null
      cluster: test-service
      details: false
      ec2_url: null
      profile: null
      region: null
      security_token: null
      service: null
      validate_certs: true
  item: test-service
Now I want to replace each ARN with the short name of the service. For example, arn:aws:ecs:eu-central-1:55:service/test-service becomes test-service.
After the replacement I can loop over the services and turn them off by setting the desired count to 0 (later I will turn them back on again):
- name: "Turn services off"
ecs_service:
name: "{{ item[1]}}"
desired_count: 0
task_definition: "{{ taskdefinitions[item[1]] }}"
cluster: "{{ item[0].item }}"
state: present
with_subelements:
- "{{ result.results }}"
- ansible_facts.services
register: turnOffServiceResult
Where taskdefinitions is a simple dict I defined in the playbook:
taskdefinitions:
  services:
    - test-services
  test-xde-worker-service:
    - test-service
So after I get the AWS list shown above into a variable result, I try a regex replace by doing the following:
- set_fact:
    result:
      results:
        ansible_facts:
          services: "{{ result.results.1.ansible_facts.services | map('regex_replace', '.*/(.*?)$', '\\1') | list }}"
This works fine, but it obviously only replaces the service names for one cluster, and I lose any other fields in the dict ansible_facts. The latter is acceptable, the former is not. So here is the question: how can I replace text in a nested list? Another problem would be skipping the services that are not included in taskdefinitions, but that is not the matter at hand.
I'm not aware of any built-in method to modify arbitrary items in complex objects in-place (at least in current Ansible 2.3).
You can either select the required items from the original object (with select, map(attribute=...), json_query, etc.) and then modify the items in that reduced set/list. In your example, a JMESPath query like results[].ansible_facts.services[] against result would select all services across all clusters, and you could then map('regex_replace', ...) over that list.
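For that first option, a minimal sketch (assuming the registered variable result from the question, and that the jmespath Python library is installed, which the json_query filter requires; the fact name short_service_names is illustrative):
- set_fact:
    # Flatten all service ARNs across clusters, then strip everything up to the last '/'.
    short_service_names: "{{ result.results | json_query('[].ansible_facts.services[]') | map('regex_replace', '.*/(.*)$', '\\1') | list }}"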
Or iterate over the complex object and apply the modification inside the loop, for example:
- name: "Turn services off"
ecs_service:
name: "{{ myname }}"
desired_count: 0
task_definition: "{{ taskdefinitions[myname] }}"
cluster: "{{ mycluster }}"
state: present
vars:
mycluster: "{{ item[0].item }}"
myname: "{{ item[1] | regex_search('[^/]*$') }}"
with_subelements:
- "{{ result.results }}"
- ansible_facts.services

Ansible Register Instances and Create ELBs

I'm trying to create an Ansible playbook to dynamically find any instances matching AWS tags, create an ELB, and then add the instances to it. So far I have been successful in creating these for one set of tags and one ELB at a time.
I'm trying to figure out the best way to have this run for any number of tags without specifying my function and release variables up front.
For example, the function and release variables would be defined in a vars file something like this:
function:
  - api
  - webapp
  - mysql
release:
  - prod
  - stage
  - dev
My playbook looks like this. I'm struggling to find a way to loop the entire playbook through a variable list. If I add a with_items to the first task, it loops that entire task before moving on to the next one, which does not accomplish what I want.
- ec2_remote_facts:
    filters:
      instance-state-name: running
      "tag:Function": "{{ function }}"
      "tag:Release": "{{ release }}"
    region: us-east-1
  register: ec2instance
- local_action:
    module: ec2_elb_lb
    name: "{{ function }}-{{ release }}"
    state: present
    instance_ids: "{{ item.id }}"
    purge_instance_ids: true
    region: us-east-1
    subnets:
      - subnet-1
      - subnet-2
    listeners:
      - protocol: https
        load_balancer_port: 443
        instance_port: 80
        ssl_certificate_id: "{{ ssl_certificate_id }}"
    health_check:
      ping_protocol: http
      ping_port: 80
      ping_path: "/status"
      response_timeout: 3
      interval: 5
      unhealthy_threshold: 2
      healthy_threshold: 2
    access_logs:
      interval: 5
      s3_location: "{{ function }}-{{ release }}-elb"
      s3_prefix: "logs"
  with_items: "{{ ec2instance.instances }}"
The easiest thing I can think of is a parameterized include.
Make a file with the list of tasks for a single shot, e.g. elb_from_tagged_instances.yml.
Then make main.yml with the include in a loop:
- include: elb_from_tagged_instances.yml function={{ item[0] }} release={{ item[1] }}
  with_together:
    - "{{ function }}"
    - "{{ release }}"
And if you don't need to somehow cross-intersect functions and releases, I'd replace the two lists function/release with one list of dicts and iterate over it, as sketched below.
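A hedged sketch of that list-of-dicts variant (the deployments variable name is illustrative):
vars:
  deployments:
    - { function: api, release: prod }
    - { function: webapp, release: stage }
    - { function: mysql, release: dev }
tasks:
  - include: elb_from_tagged_instances.yml function={{ item.function }} release={{ item.release }}
    with_items: "{{ deployments }}"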
UPDATE: Example with a nested loop to get all 9 pairs:
---
- hosts: localhost
  connection: local
  vars:
    functions:
      - api
      - webapp
      - mysql
    releases:
      - prod
      - stage
      - dev
  tasks:
    - include: include_z1.yml function="{{ item[0] }}" release="{{ item[1] }}"
      with_nested:
        - "{{ functions }}"
        - "{{ releases }}"
Also note that you should use different names for the list and the parameter (functions and function in my example) to avoid recursive templating.
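To illustrate the pitfall, a hypothetical fragment (include_z1.yml as above):
# Problematic: the include parameter is also named `function`, so the loop
# expression "{{ function }}" can resolve against the parameter being set,
# i.e. template recursively, instead of referring to the outer list.
- include: include_z1.yml function="{{ item[0] }}" release="{{ item[1] }}"
  with_nested:
    - "{{ function }}"
    - "{{ release }}"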

Creating n new instances in AWS EC2 VPC and then configuring them

I'm having a really hard time doing what seems like a fairly standard task, so I'm hoping somebody can help me. I've googled this like crazy and most of the examples are not in a VPC or use deprecated structure, which makes them wrong or unusable in my use case.
Here are my goals:
- I want to launch a whole mess of new instances in my VPC (the same code below has 3 but it could be a hundred)
- I want to wait for those instances to come alive
- I then want to configure those instances (ssh into them, change hostname, enable some services, etc.)
Now I could probably do this in two steps. I could create the instances in one playbook, wait for them to settle down, then run a second playbook to configure them. That's probably what I'm going to do now because I want to get moving, but there has to be a one-shot answer to this.
Here's what I have so far for a playbook
---
- hosts: localhost
  connection: local
  gather_facts: False
  tasks:
    - name: Provision Lunch
      with_items:
        - hostname: eggroll1
        - hostname: eggroll2
        - hostname: eggroll3
      ec2:
        region: us-east-1
        key_name: eggfooyong
        vpc_subnet_id: subnet-8675309
        instance_type: t2.micro
        image: ami-8675309
        wait: true
        group_id: sg-8675309
        exact_count: 1
        count_tag:
          Name: "{{ item.hostname }}"
        instance_tags:
          Name: "{{ item.hostname }}"
          role: "supper"
          ansibleowned: "True"
      register: ec2
    - name: Wait for SSH to come up
      wait_for: host={{ item.private_ip }} port=22 delay=60 timeout=900 state=started
      with_items: '{{ ec2.instances }}'
    - name: Update hostname on instances
      hostname: name={{ item.private_ip }}
      with_items: '{{ ec2.instances }}'
And that doesn't work. What I get is:
TASK [Wait for SSH to come up] *************************************************
[DEPRECATION WARNING]: Skipping task due to undefined Error, in the future this will be a fatal error.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting
deprecation_warnings=False in ansible.cfg.
TASK [Update hostname on instances] ********************************************
[DEPRECATION WARNING]: Skipping task due to undefined Error, in the future this will be a fatal error.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting
deprecation_warnings=False in ansible.cfg.
Which makes me sad. This is my latest incarnation of that playbook, but I've tried to rewrite it using every example I can find on the internet. Most of them have with_items written in a different way, but Ansible tells me that way is deprecated, and then fails.
So far ansible has been fun and easy, but this is making me want to toss my laptop across the street.
Any suggestions? Should I be using register and with_items at all? Would I be better off using something like this:
add_host: hostname={{item.public_ip}} groupname=deploy
instead? I'm wide open to a rewrite here. I'm going to go write this up in 2 playbooks and would love to get suggestions.
Thanks!
****EDIT****
Now it's just starting to feel broken or seriously changed. I've googled dozens of examples, and they are all written the same way and all fail with the same error. This is my simple playbook now:
---
- hosts: localhost
  connection: local
  gather_facts: False
  vars:
    builderstart: 93
    builderend: 94
  tasks:
    - name: Provision Lunch
      ec2:
        region: us-east-1
        key_name: dakey
        vpc_subnet_id: subnet-8675309
        instance_type: t2.micro
        image: ami-8675309
        wait: True
        group_id: sg-OU812
        exact_count: 1
        count_tag:
          Name: "{{ item }}"
        instance_tags:
          Name: "{{ item }}"
          role: "dostuff"
          extracheese: "True"
      register: ec2
      with_sequence: start="{{ builderstart }}" end="{{ builderend }}" format=builder%03d
    - name: the newies
      debug: msg="{{ item }}"
      with_items: "{{ ec2.instances }}"
It really couldn't be more straightforward. No matter how I write it, no matter how I vary it, I get the same basic error:
[DEPRECATION WARNING]: Skipping task due to undefined Error, in the
future this will be a fatal error.: 'dict object' has no attribute
'instances'.
So it looks like it's the with_items: "{{ ec2.instances }}" line that's causing the error.
I've used debug to print out ec2, and that error looks accurate: the structure seems to have changed. ec2 now contains a dictionary with results as a key to another dictionary object, and instances is a key in that inner dictionary. But I can't find a sane way to access the data.
For what it's worth, I've tried accessing this in 2.0.1, 2.0.2, and 2.2 and I get the same problem in every case.
Are the rest of you using 1.9 or something? I can't find an example anywhere that works. It's very frustrating.
Thanks again for any help.
Don't do it like this:
- name: Provision Lunch
  with_items:
    - hostname: eggroll1
    - hostname: eggroll2
    - hostname: eggroll3
  ec2:
    region: us-east-1
Because by using it this way, you flush all the info from ec2 out of your item.
You receive the following output:
TASK [Launch instance] *********************************************************
changed: [localhost] => (item={u'hostname': u'eggroll1'})
changed: [localhost] => (item={u'hostname': u'eggroll2'})
but item should look like this:
changed: [localhost] => (item={u'kernel': None, u'root_device_type': u'ebs', u'private_dns_name': u'ip-172-31-29-85.ec2.internal', u'public_ip': u'54.208.138.217', u'private_ip': u'172.31.29.85', u'id': u'i-003b63636e7ffc27c', u'ebs_optimized': False, u'state': u'running', u'virtualization_type': u'hvm', u'architecture': u'x86_64', u'ramdisk': None, u'block_device_mapping': {u'/dev/sda1': {u'status': u'attached', u'delete_on_termination': True, u'volume_id': u'vol-37581295'}}, u'key_name': u'eggfooyong', u'image_id': u'ami-fce3c696', u'tenancy': u'default', u'groups': {u'sg-aabbcc34': u'ssh'}, u'public_dns_name': u'ec2-54-208-138-217.compute-1.amazonaws.com', u'state_code': 16, u'tags': {u'ansibleowned': u'True', u'role': u'supper'}, u'placement': u'us-east-1d', u'ami_launch_index': u'1', u'dns_name': u'ec2-54-208-138-217.compute-1.amazonaws.com', u'region': u'us-east-1', u'launch_time': u'2016-04-19T08:19:16.000Z', u'instance_type': u't2.micro', u'root_device_name': u'/dev/sda1', u'hypervisor': u'xen'})
Try the following code instead:
- name: Create a sandbox instance
  hosts: localhost
  gather_facts: False
  vars:
    keypair: eggfooyong
    instance_type: t2.micro
    security_group: ssh
    image: ami-8675309
    region: us-east-1
    subnet: subnet-8675309
    instance_names:
      - eggroll1
      - eggroll2
  tasks:
    - name: Launch instance
      ec2:
        key_name: "{{ keypair }}"
        group: "{{ security_group }}"
        instance_type: "{{ instance_type }}"
        image: "{{ image }}"
        wait: true
        region: "{{ region }}"
        vpc_subnet_id: "{{ subnet }}"
        assign_public_ip: no
        count: "{{ instance_names | length }}"
      register: ec2
    - name: tag instances
      ec2_tag:
        resource: '{{ item.0.id }}'
        region: '{{ region }}'
        tags:
          Name: '{{ item.1 }}'
          role: "supper"
          ansibleowned: "True"
      with_together:
        - '{{ ec2.instances }}'
        - '{{ instance_names }}'
    - name: Wait for SSH to come up
      wait_for: host={{ item.private_ip }} port=22 delay=60 timeout=320 state=started
      with_items: '{{ ec2.instances }}'
This assumes that your Ansible host is located inside the VPC.
To achieve this goal, I have written a really small filter plugin get_ec2_info.
Create a directory named filter_plugins.
Create a plugin file get_ec2_info.py with the following content:
from jinja2.utils import soft_unicode

class FilterModule(object):
    def filters(self):
        # Expose the filter to Ansible under the name 'get_ec2_info'.
        return {
            'get_ec2_info': get_ec2_info,
        }

def get_ec2_info(results, ec2_key):
    # Collect the given key (e.g. 'public_ip') from every instance
    # in every registered loop result.
    ec2_info = []
    for item in results:
        for ec2 in item['instances']:
            ec2_info.append(ec2[ec2_key])
    return ec2_info
Then you can use this in your playbook:
---
- hosts: localhost
  connection: local
  gather_facts: False
  tasks:
    - name: Provision Lunch
      ec2:
        region: us-east-1
        key_name: eggfooyong
        vpc_subnet_id: subnet-8675309
        instance_type: t2.micro
        image: ami-8675309
        wait: true
        group_id: sg-8675309
        exact_count: 1
        count_tag:
          Name: "{{ item.hostname }}"
        instance_tags:
          Name: "{{ item.hostname }}"
          role: "supper"
          ansibleowned: "True"
      register: ec2
      with_items:
        - hostname: eggroll1
        - hostname: eggroll2
        - hostname: eggroll3
    - name: Create SSH Group to login dynamically to EC2 Instance(s)
      add_host:
        hostname: "{{ item }}"
        groupname: my_ec2_servers
      with_items: "{{ ec2.results | get_ec2_info('public_ip') }}"
    - name: Wait for SSH to come up on EC2 Instance(s)
      wait_for:
        host: "{{ item }}"
        port: 22
        state: started
      with_items: "{{ ec2.results | get_ec2_info('public_ip') }}"

# CALL THE DYNAMIC GROUP IN THE SAME PLAYBOOK
- hosts: my_ec2_servers
  become: yes
  remote_user: ubuntu
  gather_facts: yes
  tasks:
    - name: DO YOUR TASKS HERE
EXTRA INFORMATION:
- using Ansible 2.0.1.0
- assuming you are spinning up Ubuntu instances; if not, change the value of remote_user: ubuntu
- assuming the ssh key is properly configured
Please consult these github repos for more help:
ansible-aws-role-1
ansible-aws-role-2
I think this would be helpful for debugging:
https://www.middlewareinventory.com/blog/ansible-dict-object-has-no-attribute-stdout-or-stderr-how-to-resolve/
The registered ec2 variable is a dict, and it has a key results.
The results key holds a list whose elements are themselves dicts and lists, like below:
{
  "msg": {
    "results": [
      {
        "invocation": {
        },
        "instances": [],
        "changed": false,
        "tagged_instances": [
          {
          }
        ],
        "instance_ids": null,
        "failed": false,
        "item": [
        ],
        "ansible_loop_var": "item"
      }
    ],
    "msg": "All items completed",
    "changed": false
  },
  "_ansible_verbose_always": true,
  "_ansible_no_log": false,
  "changed": false
}
So you can get the desired data using dot notation; for instance, item.changed has the boolean value false.
- debug:
    msg: "{{ item.changed }}"
  loop: "{{ ec2.results }}"

Ansible AWS AutoScale group keeps adding the desired servers instead of updating to the correct number

I am using the latest Ansible version, 2.0.1.0.
I am trying to create a new AWS autoscale config like this:
- name: Update ASG with new LC and replace all instances
  ec2_asg:
    name: "{{ asg_name }}"
    launch_config_name: "{{ lc_name }}-{{ timestamp }}"
    health_check_period: 300
    health_check_type: ELB
    min_size: 1
    max_size: 5
    desired_capacity: 1
    region: "{{ aws_region }}"
    load_balancers: "{{ lb_name }}"
    vpc_zone_identifier: "{{ vpc_zones }}"
    tags:
      - Name: "{{ asg_name }}"
      - Environment: "{{ stack_env }}"
    replace_all_instances: yes
It creates the autoscale group, but when I rerun the playbook the min/max/desired instance counts in the AWS autoscale group get summed up to 2/5/6. Basically it sums the config inside AWS and the settings inside the playbook.
The documentation of ec2_asg says it will replace/update the numbers inside the config. Did I miss something here?
Thank you!
Update
Because I am using replace_all_instances, Ansible does a rolling replace of all running instances in the ASG. But sometimes it timed out while waiting for a new instance to come up and the playbook exited, so the min/max/desired instance counts in the ASG were not updated to the correct numbers.
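If the rolling replace is what times out, one mitigation to try is raising the wait timeout. A hedged sketch (reusing the variables from above; wait_for_instances and wait_timeout are ec2_asg parameters, and the 1800-second value is an arbitrary example, not a recommendation):
- name: Update ASG with new LC and replace all instances
  ec2_asg:
    name: "{{ asg_name }}"
    launch_config_name: "{{ lc_name }}-{{ timestamp }}"
    min_size: 1
    max_size: 5
    desired_capacity: 1
    region: "{{ aws_region }}"
    replace_all_instances: yes
    wait_for_instances: yes
    # Give slow instances more time before the playbook gives up (example value).
    wait_timeout: 1800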