Ansible not executing rds_instance task due to wrong variable evaluation

I am struggling with Ansible because a task defined in a playbook is not being executed to manage an AWS RDS instance.
This is the command I execute within a Jenkins pipeline:
state: "running"
identifier: "myDatabase"
sh "ansible-playbook ${env.WORKSPACE}/cost-optimization/ansible/manage_rds.yml --extra-vars 'instanceState=${state} identifier=${dbsIdentifier}'"
The playbook looks like this:
manage_rds.yml:
---
- hosts: localhost
  vars:
    rdsState: "{{instanceState}}"
    rdsIdentifier: "{{identifier|lower}}"
  tasks:
    - name: "Starting RDS instances"
      rds_instance:
        state: running
        db_instance_identifier: "{{ rdsIdentifier }}"
        wait: yes
      register: rds_result
      when: rdsState == "running"

    - name: "Stopping RDS instances"
      rds_instance:
        state: stopping
        db_instance_identifier: "{{ rdsIdentifier }}"
        wait: yes
      register: rds_result
      when: rdsState == "stopped"

    - name: Show RDS result
      debug:
        var: rds_result

    - import_tasks: tasks/task_create_partial_report.yml
      vars:
        identifier: "{{rdsIdentifier|lower}}"
        partial: "db"
I expect the AWS RDS instance to be spun up.
Instead the result looks like this:
TASK [Starting RDS instances] **************************************************
ok: [localhost]
TASK [Stopping RDS instances] **************************************************
skipping: [localhost]
TASK [Show RDS result] *********************************************************
ok: [localhost] => {
    "rds_result": {
        "changed": false,
        "skip_reason": "Conditional result was False",
        "skipped": true
    }
}
Any idea how to solve this?
EDIT:
I followed the recommendation from the answer below.
However, the RDS instance is still not affected:
+ ansible-playbook /var/lib/jenkins/workspace/.../ansible/manage_rds.yml --extra-vars 'instanceState=running identifier=myDatabase'
PLAY [localhost] ***************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [Starting RDS instances] **************************************************
ok: [localhost]
TASK [Show RDS result] *********************************************************
ok: [localhost] => {
    "rds_result": {
        "allocated_storage": 20,
        "associated_roles": [],
        "auto_minor_version_upgrade": false,
        "availability_zone": "eu-central-1a",
        "backup_retention_period": 21,
        "ca_certificate_identifier": "rds-ca-2015",
        "changed": false,
        "character_set_name": "SQL_Latin1_General_CP1_CI_AS",
        "copy_tags_to_snapshot": true,
        "db_instance_arn": "arn:aws:rds:mydatabase",
        "db_instance_class": "db.t2.micro",
        "db_instance_identifier": "mydatabase",
        "db_instance_port": 0,
        "db_instance_status": "stopped",
        "db_parameter_groups": [
            {
                "db_parameter_group_name": "....-sqlserver-ex-14-00",
                "parameter_apply_status": "in-sync"
            }
        ],
        ...
So the state remains stopped although instanceState is set to running.

register will register the result of a task, whatever that is, even if the task is skipped.
In your case, you run the first task with when: rdsState == "running", register the result of that successful run, and immediately override your registered variable with the skipped result of the second task.
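A stripped-down illustration of that behaviour, using two dummy tasks instead of rds_instance (purely hypothetical, just to show the mechanics):

- command: echo starting
  register: rds_result
  when: rdsState == "running"

- command: echo stopping
  register: rds_result        # skipped, but still overwrites rds_result with a "skipped" dict
  when: rdsState == "stopped"

With instanceState=running the first task runs fine, but rds_result ends up holding the skipped result of the second task, which is exactly what your debug output shows.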
There are several ways to go about this. I'll just give two off the top of my head:
Debug the var after each task
You could add the following task after each rds_instance call
- name: Show RDS result
  debug:
    var: rds_result
  when: rds_result is not skipped
That will only print the result after each task that actually ran.
Run a single parameterized task
You need a mapping between your variable value and the one expected in the module params. Add the following to your vars:
my_rds_state:
  running:
    name: started
    description: Starting
  stopped:
    name: stopped
    description: Stopping
Then you can have a single task that will do the job whatever the parameter is:
- name: "{{ my_rds_state[rdsState].description }} RDS instances"
rds_instance:
state: "{{ my_rds_state[rdsState].name }}"
db_instance_identifier: "{{ rdsIdentifier }}"
wait: yes
register: rds_result
- name: Show RDS result
debug:
var: rds_result
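Putting the pieces together with the variables from the question (the Jenkins call stays unchanged), the whole playbook would look roughly like this sketch:

---
- hosts: localhost
  vars:
    rdsState: "{{ instanceState }}"
    rdsIdentifier: "{{ identifier | lower }}"
    my_rds_state:
      running:
        name: started
        description: Starting
      stopped:
        name: stopped
        description: Stopping
  tasks:
    - name: "{{ my_rds_state[rdsState].description }} RDS instances"
      rds_instance:
        state: "{{ my_rds_state[rdsState].name }}"
        db_instance_identifier: "{{ rdsIdentifier }}"
        wait: yes
      register: rds_result

    - name: Show RDS result
      debug:
        var: rds_result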

Related

Creating AMI using ec2_ami with Ansible

I am trying to create an AMI from an EC2 instance. However, before doing so I would like to check if an AMI with the same name exists. If it does, I would like to deregister it before attempting to create the AMI with the given name.
Issue 1: How do I run the AMI deregister ONLY if the AMI already exists?
Issue 2: When the deregister call has been made, how do I wait before creating the AMI with the same name?
Here is what I have so far
- name: Check if AMI with the same name exists
  ec2_ami_find:
    name: "{{ ami_name }}"
  register: ami_find

- name: Deregister AMI if it exists
  ec2_ami:
    image_id: "{{ ami_find.results[0].ami_id }}"
    state: absent
  when: ami_find.results[0].state == 'available'

- pause:
    minutes: 5

- name: Creating the AMI from the instance
  ec2_ami:
    instance_id: "{{ item.id }}"
    wait: yes
    name: "{{ ami_name }}"
  delegate_to: 127.0.0.1
  with_items: "{{ ec2.instances }}"
  register: image
EDIT:
I am able to deregister the AMI when the state is 'available' and wait for a few minutes before attempting to create the new AMI (which has the same name). However, sometimes I get the following response, in which case I would like to continue with creating the AMI.
TASK [createAMI : Check if AMI with the same name exists] **********************
ok: [local] => {"changed": false, "results": []}
First check if the result is not empty and then check the state.
when: ami_find.results | length and ami_find.results[0].state == 'available'
Thanks to the comment above, I added the following to the Deregister task and managed to deal with the empty response.
- name: Check if AMI with the same name exists
  ec2_ami_find:
    name: "{{ ami_name }}"
  register: ami_find

- name: Deregister AMI if it exists
  ec2_ami:
    image_id: "{{ ami_find.results[0].ami_id }}"
    state: absent
  when: ami_find.results | length and ami_find.results[0].state == 'available'
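Regarding Issue 2, instead of a fixed 5-minute pause you could poll until the old image is actually gone. A sketch (assuming the deregistered AMI drops out of the ec2_ami_find results, which is worth verifying in your account):

- name: Wait for the old AMI to disappear before re-creating it
  ec2_ami_find:
    name: "{{ ami_name }}"
  register: ami_gone
  until: ami_gone.results | length == 0
  retries: 30
  delay: 10
  when: ami_find.results | length and ami_find.results[0].state == 'available'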

ansible: Extract value from registered variables to use it in other plays within the same playbook

I'm setting up a complete environment using Ansible. For some reason, Ansible is not picking up variable values.
I'm using ansible 2.1.1.0
Here's a stripped-down example of what I'm trying to do:
I have registered my vpc with register: ec2_vpc.
# 1. This didn't work
- name: Add to host vars
  add_host:
    name: vpc_vars
    groups: vpc_subnets
    vpc_subnet_id: "{{ ec2_vpc.subnets[0].id }}"
    vpcid: "{{ ec2_vpc.vpc_id }}"

- debug: var=vpc_subnet_id
- debug: var=vpcid
# 2. These work
- name: Record vpc id
  debug: var=ec2_vpc.vpc_id

- name: Record subnet id
  debug: var=ec2_vpc.subnets[0].id
Resulting output of the above snippet:
TASK [debug] *******************************************************************
ok: [localhost] => {
    "vpc_subnet_id": "VARIABLE IS NOT DEFINED!"
}
TASK [debug] *******************************************************************
ok: [localhost] => {
    "vpcid": "VARIABLE IS NOT DEFINED!"
}
TASK [Record vpc id] ***********************************************************
ok: [localhost] => {
    "ec2_vpc.vpc_id": "vpc-4sdh3832f"
}
TASK [Record subnet id] ********************************************************
ok: [localhost] => {
    "ec2_vpc.subnets[0].id": "subnet-edfjdh3482"
}
Why is my first syntax not picking up the value and instead giving VARIABLE IS NOT DEFINED!?
Update: The second syntax shows that I can correctly extract the value from the JSON result of the registered variable. But I want it to work with my first syntax, i.e. I want to add host variables to the dynamic inventory so that I can reuse them in another play.
add_host dynamically adds a host to your inventory.
I guess you just need set_fact:
- name: Add to host vars
  set_fact:
    vpc_subnet_id: "{{ ec2_vpc.subnets[0].id }}"
    vpcid: "{{ ec2_vpc.vpc_id }}"

- debug: var=vpc_subnet_id
- debug: var=vpcid
As you said, you have registered your return value in ec2_vpc, so you cannot read it back directly as vpc_subnet_id or vpcid. If you want to access it through those names, you have to do it like this:
- set_fact:
    vpc_subnet_id: "{{ ec2_vpc.subnets[0].id }}"
    vpcid: "{{ ec2_vpc.vpc_id }}"
Hope that helps you.
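For completeness: the add_host variant from the question is not wrong as such, it just attaches the variables to the newly added inventory entry (vpc_vars) rather than to localhost, so they are only reachable through hostvars. A small sketch of reading them back in a later play, assuming the add_host task from the question has already run:

- hosts: localhost
  tasks:
    - debug: var=hostvars['vpc_vars'].vpc_subnet_id
    - debug: var=hostvars['vpc_vars'].vpcid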

Create EC2 instance by Ansible with aws credentials

I followed these 3 guides:
http://docs.ansible.com/ansible/guide_aws.html
http://docs.ansible.com/ansible/ec2_module.html
https://gist.github.com/tristanfisher/e5a306144a637dc739e7
and I wrote this Ansible play
---
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - include_vars: aws_credentials.yml

    - name: Creating EC2 Ubuntu instance
      ec2:
        instance_type: t1.micro
        image: ami-86e0ffe7
        region: us-west-2
        key_name: my-aws-key
        zone: us-west-2a
        vpc_subnet_id: subnet-04199d61
        group_id: sg-cf6736aa
        assign_public_ip: yes
        count: 1
        wait: true
        volumes:
          - device_name: /dev/sda1
            volume_type: gp2
            volume_size: 10
        instance_tags:
          Name: ansible-test
          Project: test
          Ansible: manageable
      register: ec2
then I run ansible-playbook create-ec2.yml -v --private-key ~/.ssh/my-key --vault-password-file ~/.password/to_ansible_vault
and I was getting this message
PLAY [localhost] ***************************************************************
TASK [include_vars] ************************************************************
ok: [localhost] => {"ansible_facts": {"ec2_access_key": "decrypted_acces_key_XXXXX", "ec2_secret_key": "decrypted_secret_key_XXXXX"}, "changed": false}
TASK [Creating EC2 Ubuntu instance] ********************************************
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "No handler was ready to authenticate. 1 handlers were checked. ['HmacAuthV4Handler'] Check your credentials"}
NO MORE HOSTS LEFT *************************************************************
[WARNING]: Could not create retry file 'create-ec2.retry'. [Errno 2] No such file or directory: ''
PLAY RECAP *********************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1
When I ran ansible-vault view aws_credentials.yml --vault-password-file ~/.password/to_ansible_vault I got the readable content of the encrypted aws_credentials.yml,
something like this:
---
ec2_access_key: "XXXXX"
ec2_secret_key: "XXXXX"
It also doesn't work when I use a plain (unencrypted) aws_credentials.yml. Only when I export my credentials as environment variables does it work without any failure.
Could somebody help me: how can I write a playbook for creating an EC2 instance with credentials stored in an encrypted file?
I think you should supply your keys directly to the ec2 module in this case.
Try this:
- name: Creating EC2 Ubuntu instance
  ec2:
    aws_access_key: "{{ ec2_access_key }}"
    aws_secret_key: "{{ ec2_secret_key }}"
    instance_type: t1.micro
    image: ami-86e0ffe7
    region: us-west-2
    ...
The module code suggests that it only checks the module's arguments and environment variables, not host variables.
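Applied to the playbook from the question, that looks roughly like this (a sketch keeping the include_vars step and the variable names from aws_credentials.yml):

---
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - include_vars: aws_credentials.yml

    - name: Creating EC2 Ubuntu instance
      ec2:
        aws_access_key: "{{ ec2_access_key }}"
        aws_secret_key: "{{ ec2_secret_key }}"
        instance_type: t1.micro
        image: ami-86e0ffe7
        region: us-west-2
        key_name: my-aws-key
        zone: us-west-2a
        vpc_subnet_id: subnet-04199d61
        group_id: sg-cf6736aa
        assign_public_ip: yes
        count: 1
        wait: true
      register: ec2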
Alternatively, you can export your AWS API keys as OS environment variables, e.g.:
export AWS_ACCESS_KEY=XXXXXXX
export AWS_SECRET_KEY=XXXXXXX
In that case, in the Ansible scenario you need to set:
- name: Creating EC2 Ubuntu instance
  ec2:
    aws_access_key: "{{ lookup('env', 'AWS_ACCESS_KEY') }}"
    aws_secret_key: "{{ lookup('env', 'AWS_SECRET_KEY') }}"
    instance_type: t1.micro
    image: ami-86e0ffe7
    region: us-west-2

AWS provision with ansible

I am getting an error when I try to provision an EC2 instance. This is how I set up my environment.
I put my aws credentials in ~/.boto
cat /etc/ansible/hosts
[local]
localhost
cat /etc/ansible/ec2-vars/testserver.yml
ec2_keypair: "ansible"
ec2_security_group: "sg-*******"
ec2_instance_type: "t2.micro"
ec2_image: "ami-********"
ec2_subnet_ids: ['subnet-*******','subnet-REDACTED','subnet-REDACTED']
ec2_region: "us-east-1"
ec2_tag_Name: "testserver"
ec2_tag_Type: "testserver"
ec2_tag_Environment: "development"
ec2_volume_size: 8
cat /etc/ansible/provision-ec2.yml
---
- hosts: localhost
  connection: local
  gather_facts: false
  user: root
  pre_tasks:
    - include_vars: ec2_vars/{{type}}.yml
  roles:
    - provision-ec2
cat /etc/ansible/roles/provision-ec2/tasks/main.yml
---
- name: Provision EC2 Box
  local_action:
    module: ec2
    key_name: "{{ ec2_keypair }}"
    group_id: "{{ ec2_security_group }}"
    instance_type: "{{ ec2_instance_type }}"
    image: "{{ ec2_image }}"
    vpc_subnet_id: "{{ ec2_subnet_ids|random }}"
    region: "{{ ec2_region }}"
    instance_tags: '{"Name":"{{ec2_tag_Name}}","Type":" {{ec2_tag_Type}}","Environment":"{{ec2_tag_Environment}}"}'
    assign_public_ip: yes
    wait: true
    count: 1
    volumes:
      - device_name: /dev/sda1
        device_type: gp2
        volume_size: "{{ ec2_volume_size }}"
        delete_on_termination: true
  register: ec2

- debug: var=item
  with_items: ec2.instances

- add_host: name={{ item.public_ip }} >
    groups=tag_Type_{{ec2_tag_Type}},tag_Environment_{{ec2_tag_Environment}}
    ec2_region={{ec2_region}}
    ec2_tag_Name={{ec2_tag_Name}}
    ec2_tag_Type={{ec2_tag_Type}}
    ec2_tag_Environment={{ec2_tag_Environment}}
    ec2_ip_address={{item.public_ip}}
  with_items: ec2.instances

- name: Wait for the instances to boot by checking the ssh port
  wait_for: host={{item.public_ip}} port=22 delay=60 timeout=320 state=started
  with_items: ec2.instances
Now I run the following command and this is what I get.
[root@ip-**-**-*** ansible]# ansible-playbook -vv -i localhost, -e "type=testservers" provision-ec2.yml
Using /etc/ansible/ansible.cfg as config file
PLAYBOOK: provision-ec2.yml ****************************************************
1 plays in provision-ec2.yml
PLAY [localhost] ***************************************************************
TASK [include_vars] ************************************************************
task path: /etc/ansible/provision-ec2.yml:7
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "file": "/etc/ansible/ec2_vars/testservers.yml", "msg": "Source file not found."}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit @provision-ec2.retry
PLAY RECAP *********************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1
please help.
New error:
TASK [provision-ec2 : Provision EC2 Box] ***************************************
task path: /etc/ansible/roles/provision-ec2/tasks/main.yml:2
fatal: [localhost -> localhost]: FAILED! => {"changed": false, "failed": true, "msg": "No handler was ready to authenticate. 1 handlers were checked. ['HmacAuthV4Handler'] Check your credentials"}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit @provision-ec2.retry
PLAY RECAP *********************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1
You are mixing underscore and hyphen (and singular and plural):
cat /etc/ansible/ec2-vars/testserver.yml
include_vars: ec2_vars/{{type}}.yml
The file on disk is ec2-vars/testserver.yml, but with type=testservers the play looks for ec2_vars/testservers.yml.
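The follow-up "No handler was ready to authenticate" error is the same credentials problem as in the previous question: boto does not find the keys when the module runs (for example because Ansible runs as a different user than the one owning ~/.boto). One sketch is to pass the keys explicitly to the ec2 call in the role, assuming they are made available as ec2_access_key/ec2_secret_key variables (hypothetical names, e.g. loaded via include_vars or --extra-vars):

- name: Provision EC2 Box
  local_action:
    module: ec2
    aws_access_key: "{{ ec2_access_key }}"
    aws_secret_key: "{{ ec2_secret_key }}"
    key_name: "{{ ec2_keypair }}"
    group_id: "{{ ec2_security_group }}"
    instance_type: "{{ ec2_instance_type }}"
    image: "{{ ec2_image }}"
    vpc_subnet_id: "{{ ec2_subnet_ids|random }}"
    region: "{{ ec2_region }}"
    assign_public_ip: yes
    wait: true
    count: 1
  register: ec2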

Creating n new instances in AWS EC2 VPC and then configuring them

I'm having a really hard time doing what seems like a fairly standard task, so I'm hoping somebody can help me. I've googled this like crazy and most of the examples are either not in a VPC or use a deprecated structure that makes them wrong or unusable in my use case.
Here are my goals:
1. I want to launch a whole mess of new instances in my VPC (the sample code below has 3, but it could be a hundred)
2. I want to wait for those instances to come alive
3. I then want to configure those instances (ssh into them, change the hostname, enable some services, etc.)
Now I could probably do this in 2 tasks. I could create the instances in 1 playbook. Wait for them to settle down. Then run a 2nd playbook to configure them. That's probably what I'm going to do now because I want to get moving - but there has to be a one shot answer to this.
Here's what I have so far for a playbook
---
- hosts: localhost
  connection: local
  gather_facts: False
  tasks:
    - name: Provision Lunch
      with_items:
        - hostname: eggroll1
        - hostname: eggroll2
        - hostname: eggroll3
      ec2:
        region: us-east-1
        key_name: eggfooyong
        vpc_subnet_id: subnet-8675309
        instance_type: t2.micro
        image: ami-8675309
        wait: true
        group_id: sg-8675309
        exact_count: 1
        count_tag:
          Name: "{{ item.hostname }}"
        instance_tags:
          Name: "{{ item.hostname }}"
          role: "supper"
          ansibleowned: "True"
      register: ec2

    - name: Wait for SSH to come up
      wait_for: host={{ item.private_ip }} port=22 delay=60 timeout=900 state=started
      with_items: '{{ec2.instances}}'

    - name: Update hostname on instances
      hostname: name={{ item.private_ip }}
      with_items: '{{ec2.instances}}'
And that doesn't work. What I get is:
TASK [Wait for SSH to come up] *************************************************
[DEPRECATION WARNING]: Skipping task due to undefined Error, in the future this will be a fatal error.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting
deprecation_warnings=False in ansible.cfg.
TASK [Update hostname on instances] ********************************************
[DEPRECATION WARNING]: Skipping task due to undefined Error, in the future this will be a fatal error.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting
deprecation_warnings=False in ansible.cfg.
Which makes me sad. This is my latest incarnation of that playbook, but I've tried to rewrite it using every example I can find on the internet. Most of them have with_items written in a different way, but Ansible tells me that way is deprecated, and then fails.
So far ansible has been fun and easy, but this is making me want to toss my laptop across the street.
Any suggestions? Should I be using register and with_items at all? Would I be better off using something like this:
add_host: hostname={{item.public_ip}} groupname=deploy
instead? I'm wide open to a rewrite here. I'm going to go write this up in 2 playbooks and would love to get suggestions.
Thanks!
****EDIT****
Now it's just starting to feel broken or seriously changed. I've googled dozens of examples and they all are written the same way and they all fail with the same error. This is my simple playbook now:
---
- hosts: localhost
  connection: local
  gather_facts: False
  vars:
    builderstart: 93
    builderend: 94
  tasks:
    - name: Provision Lunch
      ec2:
        region: us-east-1
        key_name: dakey
        vpc_subnet_id: subnet-8675309
        instance_type: t2.micro
        image: ami-8675309
        wait: True
        group_id: sg-OU812
        exact_count: 1
        count_tag:
          Name: "{{ item }}"
        instance_tags:
          Name: "{{ item }}"
          role: "dostuff"
          extracheese: "True"
      register: ec2
      with_sequence: start="{{builderstart}}" end="{{builderend}}" format=builder%03d

    - name: the newies
      debug: msg="{{ item }}"
      with_items: "{{ ec2.instances }}"
It really couldn't be more straightforward. No matter how I write it, no matter how I vary it, I get the same basic error:
[DEPRECATION WARNING]: Skipping task due to undefined Error, in the
future this will be a fatal error.: 'dict object' has no attribute
'instances'.
So it looks like it's the with_items: "{{ ec2.instances }}" line that's causing the error.
I've used debug to print out ec2, and that error looks accurate. It looks to me like the structure changed: ec2 now contains a dictionary with results as a key to another dictionary object, and instances is a key in that dictionary. But I can't find a sane way to access the data.
For what it's worth, I've tried accessing this in 2.0.1, 2.0.2, and 2.2 and I get the same problem in every case.
Are the rest of you using 1.9 or something? I can't find an example anywhere that works. It's very frustrating.
Thanks again for any help.
Don't do it like this:
- name: Provision Lunch
  with_items:
    - hostname: eggroll1
    - hostname: eggroll2
    - hostname: eggroll3
  ec2:
    region: us-east-1
Because by using it that way you flush all the info from ec2 out of your item.
You receive the following output:
TASK [Launch instance] *********************************************************
changed: [localhost] => (item={u'hostname': u'eggroll1'})
changed: [localhost] => (item={u'hostname': u'eggroll2'})
but item should be like this:
changed: [localhost] => (item={u'kernel': None, u'root_device_type': u'ebs', u'private_dns_name': u'ip-172-31-29-85.ec2.internal', u'public_ip': u'54.208.138.217', u'private_ip': u'172.31.29.85', u'id': u'i-003b63636e7ffc27c', u'ebs_optimized': False, u'state': u'running', u'virtualization_type': u'hvm', u'architecture': u'x86_64', u'ramdisk': None, u'block_device_mapping': {u'/dev/sda1': {u'status': u'attached', u'delete_on_termination': True, u'volume_id': u'vol-37581295'}}, u'key_name': u'eggfooyong', u'image_id': u'ami-fce3c696', u'tenancy': u'default', u'groups': {u'sg-aabbcc34': u'ssh'}, u'public_dns_name': u'ec2-54-208-138-217.compute-1.amazonaws.com', u'state_code': 16, u'tags': {u'ansibleowned': u'True', u'role': u'supper'}, u'placement': u'us-east-1d', u'ami_launch_index': u'1', u'dns_name': u'ec2-54-208-138-217.compute-1.amazonaws.com', u'region': u'us-east-1', u'launch_time': u'2016-04-19T08:19:16.000Z', u'instance_type': u't2.micro', u'root_device_name': u'/dev/sda1', u'hypervisor': u'xen'})
Try the following code instead:
- name: Create a sandbox instance
  hosts: localhost
  gather_facts: False
  vars:
    keypair: eggfooyong
    instance_type: t2.micro
    security_group: ssh
    image: ami-8675309
    region: us-east-1
    subnet: subnet-8675309
    instance_names:
      - eggroll1
      - eggroll2
  tasks:
    - name: Launch instance
      ec2:
        key_name: "{{ keypair }}"
        group: "{{ security_group }}"
        instance_type: "{{ instance_type }}"
        image: "{{ image }}"
        wait: true
        region: "{{ region }}"
        vpc_subnet_id: "{{ subnet }}"
        assign_public_ip: no
        count: "{{ instance_names | length }}"
      register: ec2

    - name: tag instances
      ec2_tag:
        resource: '{{ item.0.id }}'
        region: '{{ region }}'
        tags:
          Name: '{{ item.1 }}'
          role: "supper"
          ansibleowned: "True"
      with_together:
        - '{{ ec2.instances }}'
        - '{{ instance_names }}'

    - name: Wait for SSH to come up
      wait_for: host={{ item.private_ip }} port=22 delay=60 timeout=320 state=started
      with_items: '{{ ec2.instances }}'
This assumes that your Ansible host is located inside the VPC.
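To cover the third goal from the question (configuring the boxes afterwards), one way is to add the fresh instances to an in-memory group and run a second play against it. A sketch, assuming SSH to the private IPs works from the Ansible host and that the names used here (launched, new_hostname) are placeholders:

    # appended to the tasks of the play above
    - name: Add the new instances to an in-memory group
      add_host:
        hostname: "{{ item.0.private_ip }}"
        groupname: launched
        new_hostname: "{{ item.1 }}"
      with_together:
        - '{{ ec2.instances }}'
        - '{{ instance_names }}'

# second play in the same playbook file
- hosts: launched
  become: yes
  tasks:
    - name: Set the OS hostname to the intended name
      hostname:
        name: "{{ new_hostname }}"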
To achieve this goal, I have written a really small filter plugin get_ec2_info.
Create a directory named filter_plugins
Create a plugin file get_ec2_info.py with the following content:
from jinja2.utils import soft_unicode


class FilterModule(object):
    def filters(self):
        return {
            'get_ec2_info': get_ec2_info,
        }


def get_ec2_info(list, ec2_key):
    # collect the requested key (e.g. public_ip) from every instance in every loop result
    ec2_info = []
    for item in list:
        for ec2 in item['instances']:
            ec2_info.append(ec2[ec2_key])
    return ec2_info
Then you can use this in your playbook:
---
- hosts: localhost
  connection: local
  gather_facts: False
  tasks:
    - name: Provision Lunch
      ec2:
        region: us-east-1
        key_name: eggfooyong
        vpc_subnet_id: subnet-8675309
        instance_type: t2.micro
        image: ami-8675309
        wait: true
        group_id: sg-8675309
        exact_count: 1
        count_tag:
          Name: "{{ item.hostname }}"
        instance_tags:
          Name: "{{ item.hostname }}"
          role: "supper"
          ansibleowned: "True"
      register: ec2
      with_items:
        - hostname: eggroll1
        - hostname: eggroll2
        - hostname: eggroll3

    - name: Create SSH Group to login dynamically to EC2 Instance(s)
      add_host:
        hostname: "{{ item }}"
        groupname: my_ec2_servers
      with_items: "{{ ec2.results | get_ec2_info('public_ip') }}"

    - name: Wait for SSH to come up on EC2 Instance(s)
      wait_for:
        host: "{{ item }}"
        port: 22
        state: started
      with_items: "{{ ec2.results | get_ec2_info('public_ip') }}"

# CALL THE DYNAMIC GROUP IN THE SAME PLAYBOOK
- hosts: my_ec2_servers
  become: yes
  remote_user: ubuntu
  gather_facts: yes
  tasks:
    - name: DO YOUR TASKS HERE
EXTRA INFORMATION:
using ansible 2.0.1.0
assuming you are spinning up ubuntu instances, if not then change the value in remote_user: ubuntu
assuming ssh key is properly configured
Please consult these github repos for more help:
ansible-aws-role-1
ansible-aws-role-2
I think this would be helpful for debugging:
https://www.middlewareinventory.com/blog/ansible-dict-object-has-no-attribute-stdout-or-stderr-how-to-resolve/
The registered ec2 variable is a dict, and it has a key results. The results key contains many elements, including dicts and lists, like below:
{
    "msg": {
        "results": [
            {
                "invocation": {
                },
                "instances": [],
                "changed": false,
                "tagged_instances": [
                    {
                    }
                ],
                "instance_ids": null,
                "failed": false,
                "item": [
                ],
                "ansible_loop_var": "item"
            }
        ],
        "msg": "All items completed",
        "changed": false
    },
    "_ansible_verbose_always": true,
    "_ansible_no_log": false,
    "changed": false
}
So you can get the desired data using dot notation, for instance item.changed, which has the boolean value false.
- debug:
    msg: "{{ item.changed }}"
  loop: "{{ ec2.results }}"