I am very new to Ansible and would like to test a few things.
I have a couple of Amazon EC2 instances and would like to install different software components on them. I don't want to have the (plaintext) credentials of the technical users inside Ansible scripts or config files. I know that it is possible to encrypt those files, but I want to try KeePass as a central password management tool. So my installation scripts should read the credentials from a .kdbx (KeePass 2) database file before starting the actual installation.
So far I have written a basic Python script for reading the .kdbx file. The script outputs a JSON object via:
print json.dumps(inventory, sort_keys=False)
The output looks like the following:
{"cdc":
{"cdc_test_server":
{"cdc_test_user":
{"username": "cdc_test_user",
"password": "password"}
}
}
}
Now I want Ansible to execute the Python script and register the key/value pairs of its output as Ansible variables. So far my playbook looks as follows:
- hosts: 127.0.0.1
  connection: local
  tasks:
    - name: "Test Playbook Functionality"
      command: python /usr/local/test.py
      register: pass
    - debug: var=pass.stdout
    - name: "Include json user output"
      set_fact: passwords="{{pass.stdout | from_json}}"
    - debug: " {{passwords.cdc.cdc_test_server.cdc_test_user.password}} "
The first debug prints the correct JSON output, but I am not able to register the variables in Ansible so that I can use them via Jinja2 notation. set_fact doesn't throw an exception, but the last debug just prints a "Hello world" message. So my question is: how do I properly register the JSON key/value pairs as Ansible variables from a task?
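For reference, the debug module only prints what it is handed via msg= or var=; a minimal sketch of how that last task could be written so it shows the extracted value, assuming the set_fact above succeeded:
- debug:
    var: passwords.cdc.cdc_test_server.cdc_test_user.password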
See Ansible KeePass Lookup Plugin
ansible_user: "{{ lookup('keepass', 'path/to/entry', 'username') }}"
ansible_become_pass: "{{ lookup('keepass', 'path/to/entry', 'password') }}"
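These lookups are ordinary variable definitions, so one hedged sketch of where they could live is an inventory vars file such as group_vars/all.yml; the entry path cdc/cdc_test_server/cdc_test_user below is only an assumption about how the .kdbx database is organised:
ansible_user: "{{ lookup('keepass', 'cdc/cdc_test_server/cdc_test_user', 'username') }}"
ansible_become_pass: "{{ lookup('keepass', 'cdc/cdc_test_server/cdc_test_user', 'password') }}"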
You may want to use facts.d and place your Python script there so its output is available as a fact.
Or write a simple action plugin that returns a JSON object, to eliminate the need for the stdout -> from_json conversion.
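A minimal sketch of the facts.d idea, assuming the KeePass reader script is installed on the target as an executable /etc/ansible/facts.d/keepass.fact that prints the JSON shown above (the file name keepass.fact is just an example):
- hosts: 127.0.0.1
  connection: local
  tasks:
    - name: Re-run fact gathering so the script output is available
      setup:
        filter: ansible_local
    - debug:
        var: ansible_local.keepass.cdc.cdc_test_server.cdc_test_user.password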
Late to the party, but it seems your use case is primarily covered by keepass-inventory. And it doesn't require any playbook "magic". Disclaimer: I contribute to this non-profit.
export KDB_PATH=example.kdbx
export KDB_PASS=example
ansible all --list-hosts -i keepass-inventory.py
I want to find a way to avoid specifying aws_access_key and aws_secret_key when using the AWS modules.
Does Ansible by default try to use the credentials in ~/.aws when running playbooks?
If yes, how do I instruct Ansible to use AWS credentials from whatever folder I want, e.g. ~/my_ansible_folder?
I ask this because I really want to use Ansible Vault for the credentials: cd ~/my_ansible_folder; ansible-vault create aws_keys.yml under ~/my_ansible_folder, then run the playbook with ansible-playbook -i ./inventory --ask-vault-pass site.yml so that it uses the AWS credentials from the vault and I don't have to specify aws_access_key and aws_secret_key in the tasks that need them.
The list of boto3 configuration options will interest you, most notably the $AWS_SHARED_CREDENTIALS_FILE environment variable.
I would expect you can create that shared credentials file using a traditional copy: content="[default]\naws_access_key_id=whatever\netc\netc\n", and then set the ansible_python_interpreter fact to env AWS_SHARED_CREDENTIALS_FILE=/path/to/that/credential-file /the/original/ansible_python_interpreter so that the actual Python invocation carries that environment variable with it. For non-boto modules, doing that will just cost you running env in addition to python, but to be honest the bizarre module serialization and deserialization that Ansible does anyway will make that extra binary's runtime invisible in the scheme of things.
You may have to override $AWS_CONFIG_FILE and $BOTO_CONFIG in the same manner, even pointing them at /dev/null, in order to force boto not to look in your $HOME/.aws directory.
So, for clarity:
- name: create our boto config
  copy:
    content: |
      [default]
      aws_access_key_id={{ access_key_from_vault }}
      aws_secret_access_key={{ secret_key_from_vault }}
    dest: /somewhere/sekrit
    mode: '0600'
  no_log: yes
  register: my_aws_config

- name: grab existing python interp
  set_fact:
    backup_a_py_i: '{{ ansible_python_interpreter | default(ansible_playbook_python) }}'

- name: patch in our env-vars
  set_fact:
    ansible_python_interpreter: >-
      env AWS_SHARED_CREDENTIALS_FILE={{ my_aws_config.dest }}
      {{ backup_a_py_i }}

# and away you go!
- ec2_instance_facts:

# optionally put this in a "rescue:" or whatever you think is reasonable
- file: path={{ my_aws_config.dest }} state=absent
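A possibly simpler variant, not from the answer above but a hedged sketch assuming the vaulted aws_keys.yml defines access_key_from_vault and secret_key_from_vault: the play-level environment: keyword pushes the credentials into each module's environment without touching ansible_python_interpreter:
- hosts: localhost
  connection: local
  vars_files:
    - aws_keys.yml
  environment:
    AWS_ACCESS_KEY_ID: "{{ access_key_from_vault }}"
    AWS_SECRET_ACCESS_KEY: "{{ secret_key_from_vault }}"
  tasks:
    - ec2_instance_facts: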
I'm writing a playbook to validate our CloudFormation stacks (port 80 open, httpd.conf has correct settings, instance type is correct, etc.). The one thing that is tripping me up is how to validate EC2 tags.
key=Name, value=testec2
I've tried the below and changed the when condition multiple different ways.
- name: Check Name Tag
  action: debug msg="Name Tag Exists."
  when: "ec2_tag_Name"
[Examples tried]
when: "tag_Name_testec2"
when: " ec2_tag_Name_testec2"
when: "ec2_tag_Name"
I've actually tried quite a few more varieties but those are the ones I can easily remember off the top of my head.
When I run "ec2.py --list" it outputs multiple formats of the tag:
"ec2_tag_Name": "testec2",
"tag_Name_testec2": [
Any suggestions would be greatly appreciated.
I use tag_Name_testec2, but this is a group in hostvars, not a regular variable. To avoid trouble, first change the cache max age in your ec2.ini from 20 to 1:
cache_max_age = 1
and check whether you have any filters set, such as region or public/private IP.
You can debug your hostvars this way:
[batman#myhost myproject]$ ansible -i ec2.py tag_Name_webserver -u ec2-user -m debug -a msg="{{ hostvars[inventory_hostname]['ec2_id'] }}" -vvv
Using /etc/ansible/ansible.cfg as config file
10.78.17.117 | SUCCESS => {
"msg": "i-b34cb736"
}
In case anyone is interested, I finally figured it out. Feel free to point and laugh for not noticing "is defined" missing.
- name: Check Name Tag Types
  action: debug msg="Name tag exists."
  when: "ec2_tag_Name is defined"
I would like to include variables from a file on the remote host, rather than the control machine Ansible is running on.
For example I have a file /var/database_credentials.yml (on my webserver)
What's the best way to add variables from that file to hostvars so that I can use them in a template?
The include_vars module only takes files from the control machine. I could use the fetch module but that seems like an unnecessary step.
It should not be hard to integrate that with /etc/ansible/facts.d.
You can store JSON files, INI files or executable scripts in that directory, and the content/output will be available as host facts after the setup module has run.
I'm not sure it will take YAML. You might be lucky, and simply adding a symlink to your file /var/database_credentials.yml might work (YAML is not mentioned in the docs, but it would be strange if it weren't supported, since pretty much everything in Ansible is based on YAML). If not, you can create a script in whatever language you prefer that reads the file and outputs a JSON object.
See Local Facts (Facts.d) in the docs.
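A small sketch of how the fact would then be consumed, assuming a JSON-emitting fact file or executable script named database_credentials.fact under /etc/ansible/facts.d on the webserver (the group name, template name and password key below are placeholders):
- hosts: webservers
  tasks:
    - name: Refresh local facts in case the fact file was just deployed
      setup:
        filter: ansible_local
    - name: Use a value from the fact file in a template
      template:
        src: db.conf.j2
        dest: /etc/myapp/db.conf
      vars:
        db_password: "{{ ansible_local.database_credentials.password }}"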
You can register the content of the remote file in a variable, then parse it with from_yaml.
- name: "Read yml file"
ansible.builtin.shell: "cat /var/database_credentials.yml"
register: result
- name: "Parse yml into variable"
set_fact:
database_credentials: "{{ result.stdout | from_yaml }}"
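An equivalent sketch using the slurp module instead of shelling out to cat; slurp returns the file base64-encoded, so it is decoded before parsing:
- name: "Read yml file"
  ansible.builtin.slurp:
    src: /var/database_credentials.yml
  register: result

- name: "Parse yml into variable"
  set_fact:
    database_credentials: "{{ result.content | b64decode | from_yaml }}"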
How can I launch (purchase) a reserved EC2 instance using Ansible with the EC2 module? I've googled terms like 'ec2 reserved instance ansible' but no joy.
Or should I use AWS CLI instead?
Or you can create your own Ansible module.
Also, there are already existing modules that you can use as examples: ansible-modules-extras/cloud/amazon.
PS:
Modules can be written in any language and are found in the path specified by ANSIBLE_LIBRARY or the --module-path command line option. By default, everything that ships with Ansible is pulled from its source tree, but additional paths can be added. The directory "./library", alongside your top level playbooks, is also automatically added as a search directory.
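As an illustration only, a hand-rolled module dropped into ./library (here called ec2_reserved, a purely hypothetical name with hypothetical parameters) could then be invoked from a playbook like any other module:
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Purchase reserved instances via the custom module (hypothetical)
      ec2_reserved:
        offering_id: 9a06095a-bdc6-47fe-a94a-2a382f016040
        instance_count: 1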
I just made a PR which might help you.
You could use it as follows:
- name: Purchase reserved instances
  boto3:
    name: ec2
    region: us-east-1
    operation: purchase_reserved_instances_offering
    parameters:
      ReservedInstancesOfferingId: 9a06095a-bdc6-47fe-a94a-2a382f016040
      InstanceCount: 3
      LimitPrice:
        Amount: 123.0
        CurrencyCode: USD
  register: result

- debug: var=result
If you're interested in this feature, feel free to vote up the PR. :)
I looked into the Cloud modules list and found there aren't any modules out of the box that support reserved instances. I think you should try building a wrapper over the AWS CLI or the Python Boto SDK (or any other SDK).
This is pseudo-code for the playbook:
---
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: 'Calling Python code to reserve instance'
      raw: python reserve-ec2-instance.py args
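A slightly more concrete sketch of the same wrapper idea, calling the AWS CLI's purchase-reserved-instances-offering subcommand through the command module; the offering id is the example value used elsewhere on this page and the instance count is a placeholder:
---
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Purchase a reserved instances offering via the AWS CLI
      command: >
        aws ec2 purchase-reserved-instances-offering
        --reserved-instances-offering-id 9a06095a-bdc6-47fe-a94a-2a382f016040
        --instance-count 1
      register: purchase_result

    - debug: var=purchase_result.stdout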
I am looking for a method to set a variable in an Ansible playbook using inventory information received from a dynamic inventory.
For example, if we have a sample playbook like:
---
- hosts: localhost
  connection: local
  tasks:
    - set_fact: rds_hostname="{{ rds_mysql }}"  # set rds endpoint from ec2.py
    - debug: var=rds_hostname
I am able to get the endpoint when I run the plain ec2.py script as
"rds_mysql":{
"rds_mysql.shdahfiahfa.us-easy-1.rds.amazon.com"
}
However, I wish to set rds_hostname to the endpoint received from the dynamic inventory.
Can anyone point out my mistake? Thank you.
I was able to solve my problem above by using something like this:
set_fact: rds_hostname="{{ groups.rds_mysql[0] }}"
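If you just need a quick look at what the dynamic inventory exposes, a one-off debug of the group and its hostvars also helps; a small sketch, assuming the rds_mysql group is present in the ec2.py output:
- debug: var=groups.rds_mysql
- debug: var=hostvars[groups.rds_mysql[0]]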
Also, during my research I found a nice Ansible Galaxy role which allows you to dump all variables accessible to Ansible playbooks:
https://galaxy.ansible.com/list#/roles/646
Hope this helps someone :)