I'm using Ansible to configure and deploy several servers in EC2. Since these servers change frequently, I'd like to use dynamic inventory. I have set up ec2.py and ec2.ini on my Jenkins server (this is where the Ansible scripts are run), but I am running into an issue when I run the playbook:
ERROR! Specified --limit does not match any hosts
Which clearly means that my hosts are not being selected correctly. When I run:
./ec2.py --list >> aws_example.json
everything looks good in aws_example.json.
I'm trying to select servers based on two tags, Name and environment. For example, I have a server with a 'Name' tag of 'api' and an 'environment' tag of 'production'.
I've set up the destination_format_tags like so:
destination_format_tags = Name,environment
and run ansible as follows:
ansible-playbook site.yml -i ec2.py -l api
I've also tried changing the hostname_variable:
hostname_variable = tag_Name.tag_environment
and running the command like so:
ansible-playbook site.yml -i ec2.py -l api.production
Additionally, I've also tried using only one tag with the hostname_variable:
hostname_variable = tag_Name
and running the command like so:
ansible-playbook site.yml -i ec2.py -l api
None of these configurations work. I'm also unable to find much documentation about these settings, so I'm not sure how to configure them correctly. Can anyone point me in the right direction?
So the problem was how I was representing my host names in my playbook. Setting the hostname variable was the right thing to do:
hostname_variable = tag_Name
And here's how to represent it in the playbook:
- name: configure and deploy api servers
  hosts: tag_Name_api
  remote_user: ec2-user
  sudo: true
  roles:
    - java
    - nginx
    - api
Additionally, it'll need to be called like so:
ansible-playbook site.yml -i ec2.py -l tag_Name_api
Make sure to change special characters such as . or - to _.
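If you need to limit a run by both tags, as in my original example, the two tag groups can be intersected with Ansible's :& host pattern. A sketch, assuming the environment tag value is production:
ansible-playbook site.yml -i ec2.py -l 'tag_Name_api:&tag_environment_production'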
I'm trying to run an ansible playbook from Github repo using AWS Systems Manager. Basically, I'm running the Ansible playbook from the AWS Systems Manager Console --> Run command --> AWS-ApplyAnsiblePlaybooks --> Specify the Github repo location --> Choose the target instances --> Run.
The actual Ansible command running behind the scene is in the following format:
ansible-playbook -i localhost -c local -e <extra variables> <verbose> <playbookfile>
My repo has a hosts (ini format) file as shown below:
[dev]
server.example.com
And my playbook looks like below:
---
- name: test run
  hosts: dev
  become: true
When I run the playbook, I get the errors below:
PLAY [test run] ********************************************************
skipping: no hosts matched
[WARNING]: Could not match supplied host pattern, ignoring: dev
It works fine if I use 'hosts: all' in the playbook instead of the group name 'dev', but I only want to run against that group.
Any idea why it is not picking up the hosts? Can someone help me to resolve this issue, please?
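For reference, this is roughly how I would expect the inventory to be wired up if the playbook were run by hand on the instance (only a sketch; it assumes the inventory file in the repo is literally named hosts and the playbook is playbook.yml):
ansible-playbook -i hosts -c local playbook.yml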
I want to find a way to avoid specifying aws_access_key and aws_secret_key when using AWS modules.
Do the AWS modules default to using the credentials in ~/.aws when running playbooks?
If so, how do I instruct Ansible to use AWS credentials from whatever folder I want, e.g. ~/my_ansible_folder?
I ask because I really want to keep the credentials in an Ansible vault: run cd ~/my_ansible_folder; ansible-vault create aws_keys.yml under ~/my_ansible_folder, then run the playbook with ansible-playbook -i ./inventory --ask-vault-pass site.yml so that it uses the AWS credentials from the vault and I don't have to specify aws_access_key and aws_secret_key in the tasks that need them.
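To make the intent concrete, the vaulted file would hold something along these lines (a sketch with hypothetical variable names and obviously fake values):
# ~/my_ansible_folder/aws_keys.yml, edited via ansible-vault
aws_access_key: AKIAXXXXXXXXXXXXXXXX
aws_secret_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx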
The list of boto3 configuration options will interest you, most notably the $AWS_SHARED_CREDENTIALS_FILE environment variable.
I would expect you can create that shared credentials file with a traditional copy: content="[default]\naws_access_key_id=whatever\netc\netc\n" task, and then set the ansible_python_interpreter fact to env AWS_SHARED_CREDENTIALS_FILE=/path/to/that/credential-file /the/original/ansible_python_interpreter so that the actual Python invocation carries that environment variable with it. For non-boto modules, doing that just costs you running env in addition to python, but to be honest, the bizarre module serialization and deserialization that Ansible does anyway makes that extra binary's runtime invisible in the scheme of things.
You may have to override $AWS_CONFIG_FILE and $BOTO_CONFIG in the same manner, even pointing them at /dev/null, in order to force boto not to look in your $HOME/.aws directory.
So, for clarity:
- name: create our boto config
  copy:
    content: |
      [default]
      aws_access_key_id={{ access_key_from_vault }}
      aws_secret_access_key={{ secret_key_from_vault }}
    dest: /somewhere/sekrit
    mode: '0600'
  no_log: yes
  register: my_aws_config   # copy reports the written file as "dest"

- name: grab existing python interp
  set_fact:
    backup_a_py_i: '{{ ansible_python_interpreter | default(ansible_playbook_python) }}'

- name: patch in our env-vars
  set_fact:
    ansible_python_interpreter: >-
      env AWS_SHARED_CREDENTIALS_FILE={{ my_aws_config.dest }}
      {{ backup_a_py_i }}

# and away you go!
- ec2_instance_facts:

# optionally put this in a "rescue:" or whatever you think is reasonable
- file: path={{ my_aws_config.dest }} state=absent
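To tie this back to the vault from the question, the play that runs these tasks could load the two variables via vars_files and be invoked with --ask-vault-pass. A minimal sketch, assuming the vaulted file defines access_key_from_vault and secret_key_from_vault:
- hosts: localhost
  connection: local
  vars_files:
    - aws_keys.yml   # the file created with: ansible-vault create aws_keys.yml
  tasks:
    # ... the tasks sketched above go here ...
and then:
ansible-playbook --ask-vault-pass site.yml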
I'm trying to run an ad-hoc ansible command on hosts that have been tagged as Name = foo-bar (notice the hyphen). When I run:
ansible tag_Name_foo_bar -i ec2.py -m ping
I get: No hosts matched
However, there is such a host. If I run the same command against a host that is tagged with a name not containing a hyphen, it works fine; e.g., for a host tagged Name = foobar, the following works:
ansible tag_Name_foobar -i ec2.py -m ping
What is your Ansible version? It works for me. According to Example: AWS EC2 External Inventory Script:
Tags
Each instance can have a variety of key/value pairs associated with it called Tags. The most common tag key is ‘Name’, though anything is possible. Each key/value pair is its own group of instances, again with special characters converted to underscores, in the format tag_KEY_VALUE, e.g.
tag_Name_Web can be used as is
tag_Name_redis-master-001 becomes tag_Name_redis_master_001
tag_aws_cloudformation_logical-id_WebServerGroup becomes tag_aws_cloudformation_logical_id_WebServerGroup
It is possible Ansible's EC2 cache is not refreshed. Try:
ec2.py --refresh-cache
and then run your Ansible command again. When I changed my instance tag name to foo_bar, it worked correctly.
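To double-check what the script actually generated for that tag, you can inspect the group directly. A couple of sketches:
./ec2.py --refresh-cache --list | grep -A 3 tag_Name_foo_bar
ansible tag_Name_foo_bar -i ec2.py --list-hosts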
I'm writing a playbook to validate our Cloud Formation stacks (port 80 open, httpd.conf has correct settings, instance type is correct, etc). The one thing that is tripping me up is how to validate EC2 tags.
key=Name, value=testec2
I've tried the below and changed the when condition multiple different ways.
- name: Check Name Tag
  action: debug msg="Name Tag Exists."
  when: "ec2_tag_Name"
[Examples tried]
when: "tag_Name_testec2"
when: " ec2_tag_Name_testec2"
when: "ec2_tag_Name"
I've actually tried quite a few more varieties but those are the ones I can easily remember off the top of my head.
When I run "ec2.py --list", it outputs the tag in multiple formats:
"ec2_tag_Name": "testec2",
"tag_Name_testec2": [
Any suggestions would be greatly appreciated.
I use tag_Name_testec2, but this is a group in hostvars, not an ordinary variable. To avoid trouble, first change the cache max age in your ec2.ini from 20 to 1:
cache_max_age = 1
and check whether you have any filters set, such as region or public/private IP.
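For reference, those settings all live in ec2.ini. A sketch of the options worth double-checking (names as in the stock ec2.ini; the values here are only examples):
cache_max_age = 1
regions = all
destination_variable = public_dns_name
vpc_destination_variable = ip_address
# instance_filters = tag:environment=production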
You can debug your hostvars this way:
[batman#myhost myproject]$ ansible -i ec2.py tag_Name_webserver -u ec2-user -m debug -a msg="{{ hostvars[inventory_hostname]['ec2_id'] }}" -vvv
Using /etc/ansible/ansible.cfg as config file
10.78.17.117 | SUCCESS => {
"msg": "i-b34cb736"
}
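Another quick check along the same lines is to print every group the host ended up in, using the built-in group_names variable:
ansible -i ec2.py tag_Name_testec2 -u ec2-user -m debug -a "var=group_names"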
In case anyone is interested, I finally figured it out. Feel free to point and laugh for not noticing "is defined" missing.
- name: Check Name Tag Types
  action: debug msg="Name tag exists."
  when: "ec2_tag_Name is defined"
I am using Ansible to deploy to Amazon EC2, and I have ec2.py and ec2.ini set up such that I can retrieve a list of servers from Amazon. I have my server at AWS tagged rvmdocker:production, and ansible all --list returns my tag as ec2_tag_rvmdocker_production. I can also run:
ansible -m ping tag_rvmdocker_production
and it works. But if I have that tag in a static inventory file, and run:
ansible all -m ping -i production
it returns:
tag_rvmdocker_production | UNREACHABLE! => {
"changed": false,
"msg": "ERROR! SSH encountered an unknown error during the connection. Werecommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue",
"unreachable": true
}
Here is my production inventory file:
[dockerservers]
tag_rvmdocker_production
It looks like Ansible can't resolve tag_rvmdocker_production when it's in the static inventory file.
UPDATE
I followed ydaetskcoR's advice and am now getting a new error message:
$ ansible-playbook -i production app.yml
ERROR! ERROR! production:2: Section [dockerservers:children] includes undefined group: tag_rvmdocker_production
But I know the tag exists, and it seems like Ansible and ec2.py know it:
$ ansible tag_rvmdocker_production --list
hosts (1):
12.34.56.78
Here is my production inventory:
[dockerservers:children]
tag_rvmdocker_production
And my app.yml playbook file:
---
- name: Deploy RVM app to production
hosts: dockerservers
remote_user: ec2-user
become: true
roles:
- ec2
- myapp
In the end, I'd love to be able to run the same playbook against development (a VM on my Mac), staging, or production, to start an environment. My thought was to have static inventory files that pointed to tags or groups on EC2. Am I even approaching this the right way?
I had a similar issue to this, and resolved it as follows.
First, I created a folder to hold my inventory files and put into it a symlink to my /etc/ec2.ini, a copy (or symlink) of the ec2.py script (marked executable), and a hosts file as follows.
$ ls amg-dev/*
amg-dev/ec2.ini -> /etc/ec2.ini
amg-dev/ec2.py
amg-dev/hosts
My EC2 instances are tagged with Type = amg_dev_web.
The hosts file contains the following; the empty first group is important here.
[tag_Type_amg_dev_web]
[webservers:children]
tag_Type_amg_dev_web
[all:children]
webservers
Then, when I run ansible-playbook, I specify only the folder name as the inventory, which makes Ansible read the hosts file and execute the ec2.py script to interrogate AWS.
ansible-playbook -i amg-dev/ playbook.yml
Inside my playbook, I refer to these as webservers as follows:
- name: WEB | Install and configure relevant packages
  hosts: webservers
  roles:
    - common
    - web
Which seems to work as expected.
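The same pattern scales to other environments: one inventory folder per environment, each with its own copy of ec2.py/ec2.ini and a hosts file mapping that environment's tag group into webservers. A sketch for a hypothetical staging environment whose instances are tagged Type = amg_staging_web:
$ cat amg-staging/hosts
[tag_Type_amg_staging_web]
[webservers:children]
tag_Type_amg_staging_web
[all:children]
webservers

$ ansible-playbook -i amg-staging/ playbook.yml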
As discussed in the comments, it looks like you've misunderstood the use of tags in a dynamic inventory.
The AWS EC2 dynamic inventory script allows you to target groups of servers by a tag key/value combination. So to target your web servers you might have a tag called Role that is set to web, which you would then target as a dynamic group with tag_Role_web.
You can also have static groups that contain dynamic groups as children. This is much the same as how you would normally use groups of groups in an inventory file, like this:
[web-servers:children]
front-end-web-servers
php-web-servers
[front-end-web-servers]
www-web-1
www-web-2
[php-web-servers]
php-web-1
php-web-2
This would allow you to target, or set group variables for, all of the web servers above simply by using the more generic web-servers group, and then configure each type of web server using the more specific front-end-web-servers or php-web-servers groups.
However, if you put an entry under a group without declaring it as a child group (with :children), Ansible will assume it is a host and will attempt to connect to it directly.
If you have a uniquely tagged instance that you are trying to reach via dynamic inventory then you simply use it as if it was a group (it just happens to currently only have one instance in it).
So if you want to target, or set variables for, the dockerservers group, which includes an instance tagged with the key/value combination rvmdocker: production, you would just do this:
[dockerservers:children]
tag_rvmdocker_production
[tag_rvmdocker_production]
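If you want the static children mapping and the dynamically populated tag group resolved in the same run, one option is to point -i at a directory that contains both the static file and ec2.py, the same approach as in the other answer. A sketch, assuming the directory is called inventory/:
$ ls inventory/
ec2.ini  ec2.py  production
$ ansible-playbook -i inventory/ app.yml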