ansible ec2.py / boto EC2 validation

I'm writing a playbook to validate our CloudFormation stacks (port 80 open, httpd.conf has the correct settings, instance type is correct, etc.). The one thing that is tripping me up is how to validate EC2 tags.
key=Name, value=testec2
I've tried the below and changed the when condition multiple different ways.
- name: Check Name Tag
  action: debug msg="Name Tag Exists."
  when: "ec2_tag_Name"
[Examples tried]
when: "tag_Name_testec2"
when: " ec2_tag_Name_testec2"
when: "ec2_tag_Name"
I've actually tried quite a few more varieties but those are the ones I can easily remember off the top of my head.
When I run "ec2.py --list", it outputs the tag in multiple formats:
"ec2_tag_Name": "testec2",
"tag_Name_testec2": [
Any suggestions would be greatly appreciated.

I use tag_Name_testec2, but this is a group in hostvars, not a regular variable. To avoid trouble, first change the cache max age in your ec2.ini from 20 to 1:
cache_max_age = 1
and check whether you have any filters set, such as region or public/private IP.
You can debug your hostvars this way:
[batman@myhost myproject]$ ansible -i ec2.py tag_Name_webserver -u ec2-user -m debug -a msg="{{ hostvars[inventory_hostname]['ec2_id'] }}" -vvv
Using /etc/ansible/ansible.cfg as config file
10.78.17.117 | SUCCESS => {
"msg": "i-b34cb736"
}

In case anyone is interested, I finally figured it out. Feel free to point and laugh at me for not noticing the missing "is defined".
- name: Check Name Tag Types
  action: debug msg="Name tag exists."
  when: "ec2_tag_Name is defined"


Ansible Keepass integration via python script

I am very new to Ansible and would like to test a few things.
I have a couple of Amazon EC2 instances and would like to install different software components on them. I don't want to have the (plaintext) credentials of the technical users inside Ansible scripts or config files. I know that it is possible to encrypt those files, but I want to try KeePass as a central password management tool. So my installation scripts should read the credentials from a .kdbx (KeePass 2) database file before starting the actual installation.
So far I have written a basic Python script for reading the .kdbx file. The script outputs a JSON object via:
print json.dumps(inventory, sort_keys=False)
The output looks like the following:
{"cdc":
{"cdc_test_server":
{"cdc_test_user":
{"username": "cdc_test_user",
"password": "password"}
}
}
}
Now I want the Python script to be executed by Ansible and the key/value pairs of its output to be included/registered as Ansible variables. So far my playbook looks as follows:
- hosts: 127.0.0.1
  connection: local
  tasks:
    - name: "Test Playbook Functionality"
      command: python /usr/local/test.py
      register: pass
    - debug: var=pass.stdout
    - name: "Include json user output"
      set_fact: passwords="{{pass.stdout | from_json}}"
    - debug: " {{passwords.cdc.cdc_test_server.cdc_test_user.password}} "
The first debug generates the correct JSON output, but I am not able to include the variables in Ansible so that I can use them via Jinja2 notation. set_fact doesn't throw an exception, but the last debug just returns a "Hello world" message. So my question is: how do I properly include the JSON key/value pairs as Ansible variables via a task?
See Ansible KeePass Lookup Plugin
ansible_user: "{{ lookup('keepass', 'path/to/entry', 'username') }}"
ansible_become_pass: "{{ lookup('keepass', 'path/to/entry', 'password') }}"
You may want to use facts.d and place your Python script there so its output becomes available as a fact.
Or write a simple action plugin that returns a JSON object, to eliminate the need for the stdout -> from_json conversion.
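For the facts.d route, the general pattern is to drop an executable script that prints JSON into /etc/ansible/facts.d on the target host; its output then shows up under ansible_local and can be used like any other fact. A minimal sketch, assuming the KeePass-reading script is installed as /etc/ansible/facts.d/keepass.fact (the name "keepass" is only an example):

# Assumes /etc/ansible/facts.d/keepass.fact is executable and prints the JSON shown above
- hosts: 127.0.0.1
  connection: local
  tasks:
    - name: Refresh local facts so the facts.d output is picked up
      setup:
        filter: ansible_local
    - debug:
        var: ansible_local.keepass.cdc.cdc_test_server.cdc_test_user.password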
Late to the party, but it seems your use case is primarily covered by keepass-inventory. And it doesn't require any playbook "magic". Disclaimer: I contribute to this non-profit.
export KDB_PATH=example.kdbx
export KDB_PASS=example
ansible all --list-hosts -i keepass-inventory.py

Ansible - On error, exit role and run cleanup

I'm trying to spin up an AWS deployment environment in Ansible, and I want to make it so that if something fails along the way, Ansible tears down everything on AWS that has been spun up so far. I can't figure out how to get Ansible to throw an error within the role and then react to it.
For example:
<main.yml>
- hosts: localhost
  connection: local
  roles:
    - make_ec2_role
    - make_rds_role
    - make_s3_role

2. Then I want it to run some code based on that error here.

<make_rds_role>
- name: "Make it"
  rds:
    params: etc <-- 1. Let's say it fails in the middle here
I've tried:
- name: this command prints FAILED when it fails
  command: /usr/bin/example-command -x -y -z
  register: command_result
  failed_when: "'FAILED' in command_result.stderr"
As well as other things from the documentation, but what I really want is a way to use something like block and rescue, but as far as I can tell those only work within the same playbook and on plays, not across roles. Does anyone have a good way to do this?
Wrap the tasks inside your roles in a block/rescue.
Make sure the rescue block has at least one task – this way Ansible will not mark the host as failed.
Like this:
- block:
    - name: task 1
      ... # something bad may happen here
    - name: task N
  rescue:
    - assert: # we need a dummy task here to prevent our host from being failed
        that: ansible_failed_task is defined
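If the goal is specifically to tear down whatever was created before the failure, the rescue section inside the role can also run the cleanup itself and then fail explicitly so nothing else runs for that host. A rough sketch of what make_rds_role might look like; the rds parameters here are only illustrative, not the asker's actual ones:

- block:
    - name: Make it
      rds:
        command: create
        instance_name: myapp-db
        db_engine: MySQL
        size: 10
        instance_type: db.t2.micro
        username: admin
        password: "{{ db_password }}"
        region: us-east-1
  rescue:
    - name: Tear down the instance if it got created
      rds:
        command: delete
        instance_name: myapp-db
        region: us-east-1
      ignore_errors: true   # the delete may fail if creation never got that far
    - name: Re-raise the error so the play stops for this host
      fail:
        msg: "Provisioning failed; partially created resources have been removed"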
Recent versions of Ansible register ansible_failed_task and ansible_failed_result when a rescue block is hit.
So you can add some post_tasks to your main.yml playbook like this:
post_tasks:
  - debug:
      msg: "Failed task: {{ ansible_failed_task }}, failed result: {{ ansible_failed_result }}"
    when: ansible_failed_task is defined
But be warned that this trick will NOT prevent other roles from executing.
So in your example, if make_rds_role fails, Ansible will still apply make_s3_role and run your post_tasks afterwards.
If you need to prevent that, add a check for the ansible_failed_task fact at the beginning of each role, or something similar.
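One way to read that last piece of advice: wrap each later role's tasks in a block guarded by a when on that fact. A minimal sketch (the inner task is only a placeholder for the role's real tasks):

- block:
    - name: placeholder for the role's real tasks
      debug:
        msg: "Only runs when no earlier role has failed"
  when: ansible_failed_task is not defined

Because the when applies to the whole block, every task in the role is skipped once an earlier role has gone through its rescue section.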

Trouble with Ansible ec2 dynamic inventory

I'm using Ansible to configure and deploy several servers in EC2. Since these servers change frequently, I'd like to use dynamic inventory. I have set up ec2.py and ec2.ini on my Jenkins server (this is where the Ansible scripts are run), but I'm running into an issue when I run the playbook:
ERROR! Specified --limit does not match any hosts
Which clearly means that my hosts are not being selected correctly. When I run:
./ec2.py --list >> aws_example.json
everything looks good in aws_example.json.
I'm trying to select servers based on two tags, Name and environment. For example, I have a server with a 'Name' tag of 'api' and an 'environment' tag of 'production'.
I've set up the destination_format_tags like so:
destination_format_tags = Name,environment
and run ansible as follows:
ansible-playbook site.yml -i ec2.py -l api
I've also tried changing the hostname_variable:
hostname_variable = tag_Name.tag_environment
and running the command like so:
ansible-playbook site.yml -i ec2.py -l api.production
Additionally, I've also tried using only one tag with the hostname_variable:
hostname_variable = tag_Name
and running the command like so:
ansible-playbook site.yml -i ec2.py -l api
None of these configurations work. I'm also unable to find much documentation about these settings, so I'm not sure how to configure them correctly. Can anyone point me in the right direction?
So the problem was how I was representing my host names in my playbook. Setting the hostname variable was the right thing to do:
hostname_variable = tag_Name
And here's how to represent it in the playbook:
- name: configure and deploy api servers
  hosts: tag_Name_api
  remote_user: ec2-user
  sudo: true
  roles:
    - java
    - nginx
    - api
Additionally, it'll need to be called like so:
ansible-playbook site.yml -i ec2.py -l tag_Name_api
Make sure to change special characters such as . or - to _.
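Since the original goal was to select servers by both the Name and environment tags, it may also help that ec2.py creates a group per tag key/value pair and that Ansible host patterns support intersection with :&. A sketch, assuming the production instances end up in a tag_environment_production group:

- name: configure and deploy production api servers
  hosts: "tag_Name_api:&tag_environment_production"
  remote_user: ec2-user
  sudo: true
  roles:
    - java
    - nginx
    - api

The same pattern can be passed to -l on the command line; quote it so the shell does not interpret the &.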

Ansible can't resolve EC2 tag if specified in static inventory

I am using Ansible to deploy to Amazon EC2, and I have ec2.py and ec2.ini set up such that I can retrieve a list of servers from Amazon. I have my server at AWS tagged rvmdocker:production, and ansible all --list returns my tag as ec2_tag_rvmdocker_production. I can also run:
ansible -m ping tag_rvmdocker_production
and it works. But if I have that tag in a static inventory file, and run:
ansible all -m ping -i production
it returns:
tag_rvmdocker_production | UNREACHABLE! => {
"changed": false,
"msg": "ERROR! SSH encountered an unknown error during the connection. Werecommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue",
"unreachable": true
}
Here is my production inventory file:
[dockerservers]
tag_rvmdocker_production
It looks like Ansible can't resolve tag_rvmdocker_production when it's in the static inventory file.
UPDATE
I followed ydaetskcoR's advice and am now getting a new error message:
$ ansible-playbook -i production app.yml
ERROR! ERROR! production:2: Section [dockerservers:children] includes undefined group: tag_rvmdocker_production
But I know the tag exists, and it seems like Ansible and ec2.py know it:
$ ansible tag_rvmdocker_production --list
hosts (1):
12.34.56.78
Here is my production inventory:
[dockerservers:children]
tag_rvmdocker_production
And my app.yml playbook file:
---
- name: Deploy RVM app to production
  hosts: dockerservers
  remote_user: ec2-user
  become: true
  roles:
    - ec2
    - myapp
In the end, I'd love to be able to run the same playbook against development (a VM on my Mac), staging, or production, to start an environment. My thought was to have static inventory files that pointed to tags or groups on EC2. Am I even approaching this the right way?
I had a similar issue to this and resolved it as follows.
First, I created a folder to contain my inventory files, and put in it a symlink to my /etc/ec2.ini, a copy (or symlink) of the ec2.py script (marked executable), and a hosts file as follows.
$ ls amg-dev/*
amg-dev/ec2.ini -> /etc/ec2.ini
amg-dev/ec2.py
amg-dev/hosts
My EC2 instances are tagged with Type = amg_dev_web.
The hosts file contains the following information - the empty group declaration at the top is important here.
[tag_Type_amg_dev_web]
[webservers:children]
tag_Type_amg_dev_web
[all:children]
webservers
Then, when I run ansible-playbook, I specify only the folder name as the inventory, which makes Ansible read the hosts file and execute the ec2.py script to interrogate AWS.
ansible-playbook -i amg-dev/ playbook.yml
Inside my playbook, I refer to these hosts as webservers, as follows:
- name: WEB | Install and configure relevant packages
  hosts: webservers
  roles:
    - common
    - web
Which seems to work as expected.
As discussed in the comments, it looks like you've misunderstood the use of tags in a dynamic inventory.
The AWS EC2 dynamic inventory script allows you to target groups of servers by a tag key/value combination. So to target your web servers you may have a tag called Role that in this case is set to web which you would then target as a dynamic group with tag_Role_web.
You can also have static groups that contain children dynamic groups. This is much the same as how you use groups of groups normally in an inventory file that might be used like this:
[web-servers:children]
front-end-web-servers
php-web-servers
[front-end-web-servers]
www-web-1
www-web-2
[php-web-servers]
php-web-1
php-web-2
This would allow you to generically target, or set group variables for, all of the web servers above simply by using the more generic web-servers group, and then specifically configure each type of web server using the more specific front-end-web-servers or php-web-servers groups.
However, if you put an entry under a group where it isn't defined as a child group then Ansible will assume that this is a host and will then attempt to connect to that host directly.
If you have a uniquely tagged instance that you are trying to reach via dynamic inventory then you simply use it as if it was a group (it just happens to currently only have one instance in it).
So if you want to target or set variables for the dockerservers group, which then includes an instance tagged with the key/value combination rvmdocker: production, you would just do this:
[dockerservers:children]
tag_rvmdocker_production
[tag_rvmdocker_production]

How to set a variable using dynamic inventory using Ansible

I am looking for a method to set a variable in an Ansible playbook using inventory information received from dynamic inventory.
For example, if we have a sample playbook like:
---
- hosts: localhost
  connection: local
  tasks:
    - set_fact: rds_hostname="{{ rds_mysql }}" # set rds endpoint from ec2.py
    - debug: var=rds_hostname
I am able to get the endpoint when I run the plain ec2.py script, as:
"rds_mysql":{
"rds_mysql.shdahfiahfa.us-easy-1.rds.amazon.com"
}
However, I wish to set rds_hostname to the endpoint received from the dynamic inventory.
Can anyone point out my mistake? Thank you.
I was able to solve my problem above by using something like this:
set_fact: rds_hostname="{{ groups.rds_mysql[0] }}"
Also, during my research I found a nice Ansible Galaxy role which allows you to dump all the variables accessible to Ansible playbooks:
https://galaxy.ansible.com/list#/roles/646
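If you just need a quick look at what the dynamic inventory provides, a couple of debug tasks will dump it without any extra role (a minimal sketch that reuses the rds_mysql group from above):

- debug:
    var: groups
- debug:
    var: hostvars[groups.rds_mysql[0]]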
Hope this helps someone :)