Limit hosts using Workflow Template - ansible-tower

I'm using Ansible AWX (Tower) and have a template workflow that executes several templates one after the other, based on if the previous execution was successful.
I noticed I can limit to a specific host when running a single template, and I'd like to apply this to a workflow. My guess is I would have to use the survey option to achieve this, however I'm not sure how.
I have tried to see if I can override the "hosts" value and that failed like I expected it to.
How can I go about having it ask me at the beginning of the workflow for the hostname/ip and not for every single template inside the workflow?

You have the set_stats option.
Let's suppose you have the following inventory:
10.100.10.1
10.100.10.3
10.100.10.6
Your inventory is called MyOfficeInventory. The first rule is that you need this same inventory selected in all your Templates in order to play with the host chosen in the first one.
I want to ping only my 10.100.10.6 machine, so in the Template I choose MyOfficeInventory and limit to 10.100.10.6.
If we do:
---
- name: Ping
  hosts: all
  gather_facts: False
  connection: local
  tasks:
    - name: Ping
      ping:
We get:
TASK [Ping] ********************************************************************
ok: [10.100.10.6]
Cool! So from MyOfficeInventory only my selected host was pinged. Now, in my workflow, the next Template has MyOfficeInventory selected as well (this is the rule, as said). If I ping, I will ping all of the hosts unless I limit again, so let's do the magic:
In your first Template do:
- name: add devices with connectivity to the "working_hosts" group
  group_by:
    key: working_hosts

- name: "Artifact URL of test results to Tower Workflows"
  set_stats:
    data:
      myinventory: "{{ groups['working_hosts'] }}"
  run_once: True
Be careful, because for your playbook,
groups['all']
means:
"groups['all']": [
"10.100.10.1",
"10.100.10.3",
"10.100.10.6"
]
And with your new working_hosts group, you get only your current host:
"groups['working_hosts']": [
"10.100.10.6"
]
So now you have your brand new myinventory inventory.
Use it like this in the rest of your Playbooks assigned to your Templates:
- name: Ping
  hosts: "{{ myinventory }}"
  gather_facts: False
  tasks:
    - name: Ping
      ping:
Your inventory variable will be transferred and you will get:
ok: [10.100.10.6]
One step further. Do you want to select your host from a Survey?
Create one with your hostname input and keep your first Playbook as:
- name: Ping
  hosts: "{{ mysurveyhost }}"
  gather_facts: False
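Putting the pieces together, here is a sketch of what the first playbook in the workflow could look like, combining the survey host with the set_stats hand-off (the survey variable name mysurveyhost is the one assumed above):

```yaml
- name: Select the survey host and publish it to the rest of the workflow
  hosts: "{{ mysurveyhost }}"
  gather_facts: False
  tasks:
    # Every host this play actually reaches lands in "working_hosts"
    - name: add devices with connectivity to the "working_hosts" group
      group_by:
        key: working_hosts

    # set_stats makes myinventory available to the following workflow nodes
    - name: Pass the selected host(s) on as myinventory
      set_stats:
        data:
          myinventory: "{{ groups['working_hosts'] }}"
      run_once: True
```

The remaining Templates in the workflow then use `hosts: "{{ myinventory }}"` as shown earlier.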

Related

How to create a file or template owned by a user that does not exist on the host with ansible?

I'm experimenting with podman rootless.
Users in containers get assigned a subuid / subgid space from the host.
Files created or updated from a user in the container environment belong to the user id space,
that doesn't exist on the host.
That's where I'm currently stuck. I can calculate the subuid with ansible and ease access to the container owned files with ACL, but I can't get ansible to write out a jinja template and chown it to a user that doesn't exist on the host.
I also don't want to workaround by creating a dummy user with a matching UID on the host, since that would probably undermine the security advantages / the rootless concept.
Here the task:
- name: copy hass main config to storage
  become: yes
  template:
    src: configuration.yaml.j2
    dest: "{{ hass_data_dir }}/configuration.yaml"
    owner: "{{ stat_container_base_dir }}.uid"
    group: "{{ stat_container_base_dir }}.gid"
    mode: 0640
and the error message when running the task:
TASK [server/smarthome/homeassistant/podman : copy hass main config to storage] ************************************************************************************************************************
fatal: [odroid]: FAILED! =>
changed: false
checksum: 20c59b4a12d4ebe52a3dd191a80a5091d8e6dc0c
gid: 0
group: root
mode: '0640'
msg: 'chown failed: failed to look up user {''changed'': False, ''stat'': {''exists'':
True, ''path'': ''/home/homeassistant/container'', ''mode'': ''0770'', ''isdir'':
True, ''ischr'': False, ''isblk'': False, ''isreg'': False, ''isfifo'': False,
''islnk'': False, ''issock'': False, ''uid'': 363147, ''gid'': 362143, ''size'':
4096, ''inode'': 4328211, ''dev'': 45826, ''nlink'': 3, ''atime'': 1669416005.068732,
I tried to find help in the modules documentation at: https://docs.ansible.com/ansible/latest/collections/ansible/builtin/template_module.html
My ansible version is: ansible [core 2.13.1]
As you can see in the error message, ansible is missing a user with UID 363147 on the host.
Is there any way to circumvent the test if a user exists in ansible.builtin.template and similar modules, that allow user assignment with owner: and group:?
The only workaround I found was using command, but with the need for templates, complexity will increase when I'd have to parse jinja templates without the ansible template module.
I would appreciate it if someone could point out an existing option I missed; otherwise I would like to create a pull request for an option like:
ignore_usercheck: true or validate_user: false
Hope you can help me out here :)
It turned out this was only a misleading error message, not a missing feature in Ansible.
I tested with the debug module and found out that the values of stat have to be accessed inside the curly brackets:
- name: debug
  debug:
    msg: "{{ stat_container_base_dir.stat.uid }}"
What Ansible got was the whole string content of stat, not just the UID.
User IDs that don't exist on the host can be assigned.
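With that fix, the original task works unchanged apart from the owner/group lookups:

```yaml
- name: copy hass main config to storage
  become: yes
  template:
    src: configuration.yaml.j2
    dest: "{{ hass_data_dir }}/configuration.yaml"
    # Access the stat result's fields inside the Jinja2 expression,
    # not by appending ".uid" to the rendered string
    owner: "{{ stat_container_base_dir.stat.uid }}"
    group: "{{ stat_container_base_dir.stat.gid }}"
    mode: 0640
```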

How do you set key/value secret in AWS secrets manager using Ansible?

The following code does not set a key/value pair for the secret; it only creates a string. I want to create key/value pairs, but the documentation does not even mention them.
- hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: Add string to AWS Secrets Manager
      aws_secret:
        name: 'testvar'
        state: present
        secret_type: 'string'
        secret: "i love devops"
      register: secret_facts
    - debug:
        var: secret_facts
If this behaves anything like the Secrets Manager CLI, then to set key/value pairs you should pass a JSON string like the one below:
- hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: Add string to AWS Secrets Manager
      aws_secret:
        name: 'testvar'
        state: present
        secret_type: 'string'
        secret: "{\"username\":\"bob\",\"password\":\"abc123xyz456\"}"
      register: secret_facts
    - debug:
        var: secret_facts
While the answer here is not "wrong", it will not work if you need to use variables to build your secrets. The reason is that when the string gets handed off to Jinja2 to resolve the variables, there is some variable juggling that ends with the double quotes being replaced by single quotes no matter what you do!
So the example above done with variables:
secret: "{\"username\":\"{{ myusername }}\",\"password\":\"{{ mypassword }}\"}"
Ends up as:
{'username': 'bob', 'password': 'abc123xyz456'}
And of course AWS fails to parse it. The solution is ridiculously simple and I found it here: https://stackoverflow.com/a/32014283/896690
If you put a space or a new line at the start of the string then it's fine!
secret: " {\"username\":\"{{ myusername }}\",\"password\":\"{{ mypassword }}\"}"

How to get dynamic shell variable with Ansible playbook and jinja2 templates

I need to call a shell script that will return the private ip of an ec2 in an Ansible task.
Once I get the IP in a variable private_ip_var I want to inject that variable in a jinja2 template to generate a config file.
Here's what I'm thinking:
- hosts: all
  vars:
    inline_variable: 'hello again'
  tasks:
    - name: Gets the IP of the ec2 instance
      command: get_ec2_private_ip.sh   # <----- shell script to dynamically get the ip of ec2
      register: private_ip_var         # <----- saving shell return value to this var

    - name: Inject that private_ip_var into the jinja template
      template:
        src: config.cfg.j2
        dest: config.cfg
config.cfg.j2
blah blah
The ip of the ec2 is: {{ private_ip_var }} <------------ THIS IS WHAT I WANT TO ACHIEVE
Variable given as inline - {{ inline_variable }} <------------- DONT CARE ABOUT THIS VAR
output - config.cfg
------
blah blah
The ip of the ec2 is: 10-251-50-12 <----------------- THIS IS WHAT I WANT
Variable given as inline - hello again <---------------- DONT CARE ABOUT THIS VAR
I don't care about inline_variable above; I only care about private_ip_var. How can I achieve this with Ansible, so that I can generate that config file from a jinja2 template?
To get AWS EC2 information, you will need to install the following on your (Linux) host:
boto3
botocore
Python >= 2.6
Then use the module ec2_instance_info. Here is an example:
ec2_instance_info:
  instance_ids:
    - i-23456789
Then you can filter on ip address information. You will most likely be interested in the following parameters:
primary (true / false)
Indicates whether this IPv4 address is the primary private IP address of the network interface.
For more examples, check out: https://docs.ansible.com/ansible/latest/modules/ec2_instance_info_module.html#examples
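As a sketch of how this could feed the original template task (the register name ec2_info and the single-instance indexing are assumptions, not from the question):

```yaml
- hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: Look up the instance
      ec2_instance_info:
        instance_ids:
          - i-23456789
      register: ec2_info

    - name: Render the config file with the private IP
      template:
        src: config.cfg.j2
        dest: config.cfg
      vars:
        # The module returns a list of instance dicts; take the first
        private_ip_var: "{{ ec2_info.instances[0].private_ip_address }}"
```

The template then uses `{{ private_ip_var }}` exactly as in the config.cfg.j2 shown above.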

Parsing variables in ansible inventory in python

I'm trying to parse ansible variables using python specified in an inventory file like below:
[webservers]
foo.example.com type=news
bar.example.com type=sports
[dbservers]
mongodb.local type=mongo region=us
mysql.local type=mysql region=eu
I want to be able to parse type=news for host foo.example.com in webservers, and type=mongo region=us for host mongodb.local under dbservers. Any help with this is greatly appreciated.
The play below
- name: List type=news hosts in the group webservers
  debug:
    msg: "{{ hostvars[item].inventory_hostname }}"
  loop: "{{ groups['webservers'] }}"
  when: hostvars[item].type == "news"

- name: List type=mongo and region=us hosts in the group dbservers
  debug:
    msg: "{{ hostvars[item].inventory_hostname }}"
  loop: "{{ groups['dbservers'] }}"
  when:
    - hostvars[item].type == "mongo"
    - hostvars[item].region == "us"
gives:
"msg": "foo.example.com"
"msg": "mongodb.local"
If the playbook is run on the host foo.example.com, you can get type=news simply by specifying "{{ type }}". If you want to use it in a "when" condition, then simply write "type".
If the playbook is run on the host mongodb.local, the value of "type" will automatically be "mongo", and "region" will automatically be "us".
The values of the variables, if they are defined in the hosts file as you specified, are automatically resolved on the corresponding hosts.
Thus, the playbook can be executed on all hosts; to get the value of "type", for example:
- debug:
    msg: "{{ type }}"
On each of the hosts you will get the unique values that are defined in the hosts file.
I'm not sure I understood the question correctly, but if it meant that on the foo.example.com host you needed a list of servers from the "webservers" group that have type=news, then the answer is already given.
Rather than re-inventing the wheel, I suggest you have a look at how ansible itself parses ini files to turn them into an inventory object.
You could also easily get this info in json format with a very simple playbook (as suggested by @vladimirbotka), or rewrite your inventory in yaml, which would be much easier to parse with any external tool.
inventory.yaml
---
all:
  children:
    webservers:
      hosts:
        foo.example.com:
          type: news
        bar.example.com:
          type: sports
    dbservers:
      hosts:
        mongodb.local:
          type: mongo
          region: us
        mysql.local:
          type: mysql
          region: eu
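A minimal sketch of the "very simple playbook" route mentioned above: dump each host's inventory variables as JSON, which any external tool can then consume (to_json is a standard Ansible filter):

```yaml
- hosts: all
  gather_facts: false
  tasks:
    # hostvars[inventory_hostname] contains the vars set in the
    # inventory file (type, region, ...) for the current host
    - name: Show this host's inventory variables as JSON
      debug:
        msg: "{{ hostvars[inventory_hostname] | to_json }}"
```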

Ansible returns wrong hosts in dynamic inventory (private ip collision?)

I have two instances on different VPCs which have the same private address.
ci-vpc:
  172.18.50.180:
    tags:
      Environment: ci
      Role: aRole
test-vpc:
  172.18.50.180:
    tags:
      Environment: test
      Role: web
I am running the following playbook:
- name: "print account specific variables"
hosts: "tag_Environment_ci:&tag_Role_web"
tasks:
- name: "print account specific variables for account {{ account }}"
debug:
msg:
- 'ec2_tag_Name': "{{ ec2_tag_Name }}"
'ec2_tag_Role': "{{ ec2_tag_Role }}"
'ec2_private_ip_address': "{{ ec2_private_ip_address }}"
'ec2_tag_Environment': "{{ ec2_tag_Environment }}"
Since I am asking for both role web and environment ci, none of these instances should be picked, but nevertheless the result that I am getting is:
ok: [172.18.50.180] => {
    "changed": false,
    "msg": [
        {
            "ec2_private_ip_address": "172.18.50.180",
            "ec2_tag_Environment": "test",
            "ec2_tag_Name": "test-web-1",
            "ec2_tag_Role": "web"
        }
    ]
}
Obviously this instance does not meet the requirements under hosts...
It seems like ec2.py searched for the Environment tag, found ci for 172.18.50.180, then searched separately for the role tag, found another one under 172.18.50.180, and just marked that instance as ok, even though these are two different instances on different vpcs.
I've tried changing vpc_destination_variable in ec2.ini to id, but then I'm getting errors when Ansible tries to connect to these instances, because it cannot connect to the id:
fatal: [i-XXX]: UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname i-XXX: Name or service not known\r\n",
    "unreachable": true
}
Is there another option that will work under vpc_destination_variable? Any known solution for such a collision?
tl;dr: This is exactly what hostname_variable in ec2.ini is for, as documented:
# This allows you to override the inventory_name with an ec2 variable, instead
# of using the destination_variable above. Addressing (aka ansible_ssh_host)
# will still use destination_variable. Tags should be written as 'tag_TAGNAME'.
Unfortunately I missed it, and only found it after looking around in ec2.py.
Longer answer, with additional options for hostnames:
After finding out about hostname_variable, I ran into another problem: it can receive only one variable. In my case I had some instances with the same private ip on one hand, and some with the same tags on the other (AWS autoscaling groups put the same tags on all their hosts), so I needed a way to differentiate between them.
I've created a gist with this option. My change is in line 848. It allows you to use multiple comma-separated variables in hostname_variable, e.g.:
hostname_variable = tag_Name,private_ip_address