I want to find a way to avoid specifying aws_access_key and aws_secret_key when using the AWS modules.
Do the AWS modules try to use the credentials in ~/.aws by default when running playbooks?
If so, how do I instruct Ansible to use AWS credentials kept under whatever folder I want, e.g. ~/my_ansible_folder?
I ask because I really want to keep the credentials in a vault: cd ~/my_ansible_folder; ansible-vault create aws_keys.yml under ~/my_ansible_folder, then run the playbook with ansible-playbook -i ./inventory --ask-vault-pass site.yml so that it uses the AWS credentials from the vault and I don't have to specify aws_access_key and aws_secret_key in the tasks that need them.
The list of boto3 configuration options will interest you, most notably the $AWS_SHARED_CREDENTIALS_FILE environment variable.
I would expect you can create that shared credentials file using a traditional copy: content="[default]\naws_access_key_id=whatever\netc\netc\n" and then set the ansible_python_interpreter fact to env AWS_SHARED_CREDENTIALS_FILE=/path/to/that/credential-file /the/original/ansible_python_interpreter so that the actual python invocation carries that environment variable with it. For non-boto modules, doing that will just cost you running env as well as python, but to be honest the bizarre module serialization and deserialization that Ansible does anyway will make that extra binary's runtime invisible in the scheme of things.
You may have to override $AWS_CONFIG_FILE and $BOTO_CONFIG in the same manner, even pointing them at /dev/null, in order to force boto not to look in your $HOME/.aws directory.
So, for clarity:
- name: create our boto config
  copy:
    content: |
      [default]
      aws_access_key_id={{ access_key_from_vault }}
      aws_secret_access_key={{ secret_key_from_vault }}
    dest: /somewhere/sekrit
    mode: '0600'
  no_log: yes
  register: my_aws_config

- name: grab existing python interp
  set_fact:
    backup_a_py_i: '{{ ansible_python_interpreter | default(ansible_playbook_python) }}'

- name: patch in our env-vars
  set_fact:
    ansible_python_interpreter: >-
      env AWS_SHARED_CREDENTIALS_FILE={{ my_aws_config.path }}
      {{ backup_a_py_i }}

# and away you go!
- ec2_instance_facts:

# optionally put this in a "rescue:" or whatever you think is reasonable
- file: path={{ my_aws_config.path }} state=absent
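To tie this back to the vault in your question, a minimal sketch of the surrounding playbook, assuming the vault file is named aws_keys.yml and defines the two variables referenced above, might be:

- hosts: localhost
  connection: local
  vars_files:
    - aws_keys.yml   # created with: ansible-vault create aws_keys.yml
  tasks:
    # ... the tasks shown above go here ...

with aws_keys.yml containing nothing more than:

access_key_from_vault: AKIA...
secret_key_from_vault: ...

and the run being ansible-playbook -i ./inventory --ask-vault-pass site.yml, exactly as you planned.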
I'm using the aws_ec2 plugin to get my inventory on AWS, but I need some help.
I want to set 'ansible_user' and 'ansible_ssh_private_key_file' in the dynamic inventory file, but I can't get it to work. Is this possible? Then I wouldn't need to set the '--private-key' and '-u' options on the command line.
This is my current aws_ec2.yaml:
---
plugin: aws_ec2
aws_access_key: 123
aws_secret_key: 345
filters:
  tag:Cliente: CustName
  instance-state-name: running
Any idea?
Thanks!
You can create and load dynamic variables for each Ansible host group. You need to create appropriate files in your inventory directory. For example: say you have configured your ansible.cfg file with the inventory key pointing to the relative path ./inventory. This tells Ansible to look inside a file called ./inventory, or a series of files inside the ./inventory folder, for the host groups' information.
You tell Ansible to load different variables for each group just by following the appropriate convention for the folder structure:
./inventory/group_vars: will hold group variables.
./inventory/host_vars: will hold host variables.
Ansible will use the file's name inside each of these folders to reference the appropriate group or host. You can also use sub-directories with the group's name if you want to use multiple files to hold all the variables.
It's important that your aws_ec2.yml file be located inside the ./inventory directory.
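For example, the resulting layout might look like this (file names other than aws_ec2.yml are illustrative):

inventory/
    aws_ec2.yml
    group_vars/
        tag_Project_stackoverflow
    host_vars/
        some-host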
For example: if you wanted to store the appropriate user and key configuration for accessing EC2 instances tagged with the Project tag set to stackoverflow, you would need to create a variables file at ./inventory/group_vars/tag_Project_stackoverflow like the following:
ansible_user: ec2-user
ansible_ssh_private_key_file: ~/.ssh/id_rsa
The EC2 dynamic inventory module can create dynamic groups from the configuration of your EC2 instances. Check its documentation to see how to configure it.
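With the aws_ec2 plugin, those tag-based groups are usually produced with keyed_groups; a minimal sketch of that part of aws_ec2.yml (an assumption about your configuration, adjust the prefix to taste) would be:

plugin: aws_ec2
keyed_groups:
  # build one group per tag key/value pair, e.g. tag_Project_stackoverflow
  - key: tags
    prefix: tag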
You can even create these files dynamically using tasks. Here I create a new ec2 key, store it locally, and create the necessary folder structure to hold the connection information:
- name: Create a new EC2 key
  amazon.aws.ec2_key:
    name: "{{ ec2_key_name }}"
  register: ec2_key_output

- name: Save private key
  ansible.builtin.copy:
    content: "{{ ec2_key_output.key.private_key }}"
    dest: "{{ ec2_key_path }}"
    mode: '0600'
  when: ec2_key_output.changed

- name: Create the group_vars folder
  ansible.builtin.file:
    path: ./inventory/group_vars
    state: directory
    mode: '0755'

- name: Create the group_vars configuration file
  ansible.builtin.copy:
    content: |
      ansible_user: "{{ ec2_user }}"
      ansible_ssh_private_key_file: "{{ ec2_key_path }}"
    dest: ./inventory/group_vars/tag_Project_stackoverflow
Please check out Ansible's documentation regarding inventory management for more information.
I am very new to Ansible and would like to test a few things.
I have a couple of Amazon EC2 instances and would like to install different software components on them. I don't want to have the (plaintext) credentials of the technical users inside Ansible scripts or config files. I know that it is possible to encrypt those files, but I want to try KeePass as a central password management tool. So my installation scripts should read the credentials from a .kdbx (KeePass 2) database file before starting the actual installation.
So far I have written a basic Python script for reading the .kdbx file. The script outputs a JSON object via:
print json.dumps(inventory, sort_keys=False)
The output looks like the following:
{"cdc":
{"cdc_test_server":
{"cdc_test_user":
{"username": "cdc_test_user",
"password": "password"}
}
}
}
Now I want the Python script to be executed by Ansible and the key-value pairs of the output to be included/registered as Ansible variables. So far my playbook looks as follows:
- hosts: 127.0.0.1
  connection: local
  tasks:
    - name: "Test Playbook Functionality"
      command: python /usr/local/test.py
      register: pass

    - debug: var=pass.stdout

    - name: "Include json user output"
      set_fact: passwords="{{ pass.stdout | from_json }}"

    - debug: " {{ passwords.cdc.cdc_test_server.cdc_test_user.password }} "
The first debug generates the correct JSON output, but I am not able to include the variables in Ansible so that I can use them via Jinja2 notation. set_fact doesn't throw an exception, but the last debug just returns a "Hello world" message. So my question is: how do I properly include the JSON key-value pairs as Ansible variables via a task?
See Ansible KeePass Lookup Plugin
ansible_user: "{{ lookup('keepass', 'path/to/entry', 'username') }}"
ansible_become_pass: "{{ lookup('keepass', 'path/to/entry', 'password') }}"
You may want to use facts.d and place your Python script there so that its output is available as a fact.
Or write a simple action plugin that returns a JSON object, to eliminate the need for the stdout -> from_json conversion.
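For the facts.d route, a minimal sketch, assuming the script lives at /usr/local/test.py as in the question and prints JSON (the keepass.fact name is illustrative):

- hosts: 127.0.0.1
  connection: local
  tasks:
    - name: Install the KeePass reader as a local custom fact
      copy:
        src: /usr/local/test.py
        dest: /etc/ansible/facts.d/keepass.fact   # must be executable and print JSON
        mode: '0755'

    - name: Re-run fact gathering so ansible_local picks up the new fact
      setup:
        filter: ansible_local

    - debug:
        var: ansible_local.keepass.cdc.cdc_test_server.cdc_test_user.password

Executable *.fact files under /etc/ansible/facts.d are run during fact gathering and their JSON output shows up under ansible_local.<filename>.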
Late to the party, but it seems your use case is primarily covered by keepass-inventory. And it doesn't require any playbook "magic". Disclaimer: I contribute to this non-profit.
export KDB_PATH=example.kdbx
export KDB_PASS=example
ansible all --list-hosts -i keepass-inventory.py
I'm using Ansible to configure and deploy several servers in EC2. Since these servers change frequently, I'd like to use dynamic inventory. I have set up ec2.py and ec2.ini on my Jenkins server (this is where the Ansible scripts are run) but am running into an issue when I run the playbook:
ERROR! Specified --limit does not match any hosts
Which clearly means that my hosts are not being selected correctly. When I run:
./ec2.py --list >> aws_example.json
everything looks good in aws_example.json.
I'm trying to select servers based on two tags, Name and environment. For example, I have a server with a 'Name' tag of 'api' and an 'environment' tag of 'production'.
I've set up the destination_format_tags like so:
destination_format_tags = Name,environment
and run ansible as follows:
ansible-playbook site.yml -i ec2.py -l api
I've also tried changing the hostname_variable:
hostname_variable = tag_Name.tag_environment
and running the command like so:
ansible-playbook site.yml -i ec2.py -l api.production
Additionally, I've also tried using only one tag with the hostname_variable:
hostname_variable = tag_Name
and running the command like so:
ansible-playbook site.yml -i ec2.py -l api
None of these configurations work. I'm also unable to find much documentation about these settings, so I'm not sure how to configure them correctly. Can anyone point me in the right direction?
So the problem was how I was representing my host names in my playbook. Setting the hostname variable was the right thing to do:
hostname_variable = tag_Name
And here's how to represent it in the playbook:
- name: configure and deploy api servers
  hosts: tag_Name_api
  remote_user: ec2-user
  sudo: true
  roles:
    - java
    - nginx
    - api
Additionally, it'll need to be called like so:
ansible-playbook site.yml -i ec2.py -l tag_Name_api
Make sure to change special characters such as . or - to _ (for example, a Name tag of api.prod becomes the group tag_Name_api_prod).
I am using Ansible to deploy to Amazon EC2, and I have ec2.py and ec2.ini set up such that I can retrieve a list of servers from Amazon. I have my server at AWS tagged rvmdocker:production, and ansible all --list returns my tag as ec2_tag_rvmdocker_production. I can also run:
ansible -m ping tag_rvmdocker_production
and it works. But if I have that tag in a static inventory file, and run:
ansible all -m ping -i production
it returns:
tag_rvmdocker_production | UNREACHABLE! => {
"changed": false,
"msg": "ERROR! SSH encountered an unknown error during the connection. Werecommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue",
"unreachable": true
}
Here is my production inventory file:
[dockerservers]
tag_rvmdocker_production
It looks like Ansible can't resolve tag_rvmdocker_production when it's in the static inventory file.
UPDATE
I followed ydaetskcoR's advice and am now getting a new error message:
$ ansible-playbook -i production app.yml
ERROR! ERROR! production:2: Section [dockerservers:children] includes undefined group: tag_rvmdocker_production
But I know the tag exists, and it seems like Ansible and ec2.py know it:
$ ansible tag_rvmdocker_production --list
hosts (1):
12.34.56.78
Here is my production inventory:
[dockerservers:children]
tag_rvmdocker_production
And my app.yml playbook file:
---
- name: Deploy RVM app to production
  hosts: dockerservers
  remote_user: ec2-user
  become: true
  roles:
    - ec2
    - myapp
In the end, I'd love to be able to run the same playbook against development (a VM on my Mac), staging, or production, to start an environment. My thought was to have static inventory files that pointed to tags or groups on EC2. Am I even approaching this the right way?
I had a similar issue to this, and resolved it as follows.
First, I created a folder to contain my inventory files, and put in it a symlink to my /etc/ec2.ini, a copy (or symlink) of the ec2.py script (marked executable), and a hosts file as follows.
$ ls amg-dev/*
amg-dev/ec2.ini -> /etc/ec2.ini
amg-dev/ec2.py
amg-dev/hosts
My EC2 instances are tagged with a Type = amg_dev_web
The hosts file contains the following information - the empty first group is important here.
[tag_Type_amg_dev_web]
[webservers:children]
tag_Type_amg_dev_web
[all:children]
webservers
Then when I run ansible-playbook, I specify only the folder name as the inventory, which makes Ansible read the hosts file and execute the ec2.py script to interrogate AWS.
ansible-playbook -i amg-dev/ playbook.yml
Inside my playbook, I refer to these as webservers as follows:
- name: WEB | Install and configure relevant packages
  hosts: webservers
  roles:
    - common
    - web
Which seems to work as expected.
As discussed in the comments, it looks like you've misunderstood the use of tags in a dynamic inventory.
The AWS EC2 dynamic inventory script allows you to target groups of servers by a tag key/value combination. So to target your web servers you may have a tag called Role that in this case is set to web which you would then target as a dynamic group with tag_Role_web.
You can also have static groups that contain children dynamic groups. This is much the same as how you use groups of groups normally in an inventory file that might be used like this:
[web-servers:children]
front-end-web-servers
php-web-servers
[front-end-web-servers]
www-web-1
www-web-2
[php-web-servers]
php-web-1
php-web-2
This allows you to generically target, or set group variables for, all of the web servers above simply by using the more generic web-servers group, and then configure the specific types of web servers using the more specific front-end-web-servers or php-web-servers groups.
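For instance, connection settings shared by every web server could live in a single group_vars file for the parent group (the values below are illustrative):

# group_vars/web-servers.yml
ansible_user: deploy
ansible_ssh_private_key_file: ~/.ssh/web_servers_key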
However, if you put an entry under a group where it isn't defined as a child group then Ansible will assume that this is a host and will then attempt to connect to that host directly.
If you have a uniquely tagged instance that you are trying to reach via dynamic inventory then you simply use it as if it was a group (it just happens to currently only have one instance in it).
So if you want to target or set variables for the dockerservers group, which then includes an instance tagged with the key/value combination rvmdocker: production, you would just do this:
[dockerservers:children]
tag_rvmdocker_production
[tag_rvmdocker_production]
How can I launch (purchase) a reserved EC2 instance using Ansible with the EC2 module? I've googled something like 'ec2 reserved instance ansible' but no joy.
Or should I use the AWS CLI instead?
Or you can create your own Ansible module.
There are also existing modules that you can use as examples: ansible-modules-extras/cloud/amazon.
PS:
Modules can be written in any language and are found in the path
specified by ANSIBLE_LIBRARY or the --module-path command line option.
By default, everything that ships with ansible is pulled from its
source tree, but additional paths can be added.
The directory "./library", alongside your top level playbooks, is also
automatically added as a search directory.
I just made a PR which might help you.
You could use it as follows:
- name: Purchase reserved instances
  boto3:
    name: ec2
    region: us-east-1
    operation: purchase_reserved_instances_offering
    parameters:
      ReservedInstancesOfferingId: 9a06095a-bdc6-47fe-a94a-2a382f016040
      InstanceCount: 3
      LimitPrice:
        Amount: 123.0
        CurrencyCode: USD
  register: result

- debug: var=result
If you're interested in this feature, feel free to upvote the PR. :)
I looked into the Cloud modules list and found there aren't any modules out of the box that support reserved instances - I think you could try building a wrapper over the AWS CLI or the Python Boto SDK (or any SDK).
This is the pseudo code for the playbook:
---
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: 'Calling Python Code to reserve instance'
      raw: python reserve-ec2-instance.py args
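If you wrap the AWS CLI instead of a Python script, a minimal sketch of such a task, assuming the aws CLI is installed and configured on the control host and that offering_id and instance_count are variables you define, could be:

- name: Purchase a reserved instances offering via the AWS CLI
  command: >
    aws ec2 purchase-reserved-instances-offering
    --reserved-instances-offering-id {{ offering_id }}
    --instance-count {{ instance_count }}
  register: purchase_result

- debug: var=purchase_result.stdout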