I have a file on my local machine that I want to upload to a remote server. It contains confidential information that I don't want exposed in my VCS, and it also has some text I need to replace dynamically (at the moment as Jinja2 placeholders, "{{ }}").
If I use the copy module, the file is un-vaulted when I upload it, but obviously the placeholders are not replaced.
If I use the template module, it doesn't un-vault the file, so it is uploaded in its encrypted form (and the placeholders aren't replaced either, because they are obscured by the encryption).
How can I both template and un-vault a file (using ansible) to the remote server?
As already mentioned in the comments, you could set your secrets in variables and render them into templates during provisioning, but if for some reason you want to keep the whole template a secret, there are workarounds for that as well.
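For reference, a minimal sketch of that variable-based approach (file and variable names here are placeholders): keep only the secret in a vault-encrypted vars file and render it from an ordinary, unencrypted template.
# vault.yml is encrypted with ansible-vault and contains e.g.:
#   db_password: s3cr3t
- hosts: all
  vars_files:
    - vault.yml
  tasks:
    - name: Deploy config with the secret rendered in
      template:
        src: app.conf.j2        # plain template containing e.g. "password = {{ db_password }}"
        dest: /etc/app/app.conf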
Handling encrypted templates
As a workaround, you could temporarily decrypt the template locally via local_action, deploy it, and delete the decrypted file again after the rollout.
Let's assume your encrypted template resides as template.enc in your role's templates directory.
---
- name: Decrypt template
  local_action: "shell {{ view_encrypted_file_cmd }} {{ role_path }}/templates/template.enc > {{ role_path }}/templates/template"
  changed_when: False

- name: Deploy template
  template:
    src: templates/template
    dest: /home/user/file

- name: Remove decrypted template
  local_action: "file path={{ role_path }}/templates/template state=absent"
  changed_when: False
Please note the changed_when: False. This is important for running idempotence tests with your Ansible roles - otherwise a change is signaled every time you run the playbook.
In group_vars/all.yml you could set a global decrypt command for reuse, e.g., as view_encrypted_file_cmd.
group_vars/all.yml
---
view_encrypted_file_cmd: "ansible-vault --vault-password-file {{ lookup('env', 'ANSIBLE_VAULT_PASSWORD_FILE') }} view"
Handling encrypted static files
One way: as template
You could set the content of your secret static file (e.g., a private key) as a variable in ansible and provision it as a template.
var.yml
---
my_private_key: |
  YOUR KEY
  asfdlsafkj
  asdlkfjasf
templates/private_key.j2
{{ private_key }}
tasks/main.yml
---
- template:
    src: templates/private_key.j2
    dest: /home/user/.ssh/id_rsa
  vars:
    private_key: "{{ my_private_key }}"
Another way: via lookup pipe
Another way would be to use the lookup plugin with pipe to set the content property of the copy module - that way you do not need an extra variable.
---
- copy:
    dest: /your/dest
    content: "{{ lookup('pipe', 'ANSIBLE_VAULT_PASSWORD_FILE=path/to/pass_file ansible-vault view path/to/file.enc') }}"
Ansible 2.4 now supports a decrypt option on the copy module: http://docs.ansible.com/ansible/latest/copy_module.html#options
This shouldn't be used any more; see the comment below...
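For reference only, keeping the caveat above in mind, a minimal sketch of what that option looks like; decrypt defaults to yes, so a vault-encrypted src is decrypted on copy as long as a vault password is available (file paths are placeholders):
- copy:
    src: secret.conf.vault    # placeholder: a vault-encrypted file
    dest: /etc/app/secret.conf
    decrypt: yes              # default; shown here for clarity
Note that this only decrypts the file; Jinja2 placeholders inside it are not rendered.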
There is another possibility, similar to the solution by fishi, for the case of a static file. By using copy instead of template there is no need for an additional template file.
Using vars.yml:
Store in a vault-encrypted vars.yml:
encrypted_content: |
  foo = {{ bar }}
  password = abcabc
...
Code for the task:
- name: Save encrypted template
  copy:
    content: "{{ encrypted_content }}"
    dest: /path/to/destination
Using a separate YAML file
You can also store the encrypted template code in another YAML file. This is useful if vars.yml should not be encrypted. For example, vars/encrypted.yml might be:
encrypted_content: |
  foo = {{ bar }}
  password = abcabc
...
Code for the task:
- name: Read encrypted variable file
  include_vars: encrypted.yml
  no_log: true

- name: Save encrypted template
  copy:
    content: "{{ encrypted_content }}"
    dest: /path/to/destination
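Note that the {{ bar }} placeholder inside encrypted_content is still rendered when the copy task evaluates the variable, which is what makes this behave like a vaulted template. A small sketch, assuming bar is defined somewhere in your variables:
- name: Save encrypted template
  copy:
    content: "{{ encrypted_content }}"
    dest: /path/to/destination
  vars:
    bar: example-value   # hypothetical value; the deployed file will then contain "foo = example-value"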
In short, use the copy module and ansible-vault.
Here is a complete example of copying a local encrypted file named hello.vault to hello.txt on a remote server. Its cleartext content is WORLD and the vault password is 1234.
Create your vault file hello.vault:
$ ansible-vault create hello.vault
New Vault password: 1234
Confirm New Vault password: 1234
## Then input your secret and exit the editor ##
WORLD
$ cat hello.vault
$ANSIBLE_VAULT;1.1;AES256
39653932393834613339393036613931393636663638636331323034653036326237373061666139
6434373635373065613135633866333733356532616635640a663739306639326535336637616138
39666462343737653030346463326464333937333161306561333062663164313162376564663262
3533393839633466300a666661303363383265613736376564623465613165656531366331366664
6436
Create your password file, e.g. vault.key, containing:
1234
Use the copy module to transfer the vault file as clear text to webserver (defined in the inventory):
ansible webserver -i inventory --vault-password-file=vault.key \
-m copy -a "src=hello.vault dest=hello.txt"
ansible webserver -i inventory -m command -a "cat hello.txt"
WORLD
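The same works inside a playbook; a minimal sketch using the files from above, run with ansible-playbook --vault-password-file=vault.key so the vaulted src is decrypted on the fly:
- hosts: webserver
  tasks:
    - name: Deploy decrypted vault file
      copy:
        src: hello.vault
        dest: hello.txt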
Related
I'm experimenting with podman rootless.
Users in containers get assigned a subuid / subgid space from the host.
Files created or updated by a user inside the container belong to that user ID space, which doesn't exist on the host.
That's where I'm currently stuck. I can calculate the subuid with Ansible and ease access to the container-owned files with ACLs, but I can't get Ansible to write out a Jinja template and chown it to a user that doesn't exist on the host.
I also don't want to work around this by creating a dummy user with a matching UID on the host, since that would probably undermine the security advantages of the rootless concept.
Here is the task:
- name: copy hass main config to storage
  become: yes
  template:
    src: configuration.yaml.j2
    dest: "{{ hass_data_dir }}/configuration.yaml"
    owner: "{{ stat_container_base_dir }}.uid"
    group: "{{ stat_container_base_dir }}.gid"
    mode: 0640
and the error message when running the task:
TASK [server/smarthome/homeassistant/podman : copy hass main config to storage] ************************************************************************************************************************
fatal: [odroid]: FAILED! =>
changed: false
checksum: 20c59b4a12d4ebe52a3dd191a80a5091d8e6dc0c
gid: 0
group: root
mode: '0640'
msg: 'chown failed: failed to look up user {''changed'': False, ''stat'': {''exists'':
True, ''path'': ''/home/homeassistant/container'', ''mode'': ''0770'', ''isdir'':
True, ''ischr'': False, ''isblk'': False, ''isreg'': False, ''isfifo'': False,
''islnk'': False, ''issock'': False, ''uid'': 363147, ''gid'': 362143, ''size'':
4096, ''inode'': 4328211, ''dev'': 45826, ''nlink'': 3, ''atime'': 1669416005.068732,
I tried to find help in the modules documentation at: https://docs.ansible.com/ansible/latest/collections/ansible/builtin/template_module.html
My ansible version is: ansible [core 2.13.1]
As you can see in the error message, ansible is missing a user with UID 363147 on the host.
Is there any way to bypass the check whether a user exists in ansible.builtin.template and similar modules that allow user assignment with owner: and group:?
The only workaround I found was using command, but since I need templates, complexity would increase if I had to render Jinja templates without the Ansible template module.
I would appreciate a hint if I missed an existing option; otherwise I would like to create a pull request for an option like:
ignore_usercheck: true or validate_user: false
Hope you can help me out here :)
In the end this was only a misleading error message, not a missing feature in Ansible.
I tested with the debug module and found out that the stat values have to be accessed inside the curly brackets.
- name: debug
  debug:
    msg: "{{ stat_container_base_dir.stat.uid }}"
What Ansible got before was the whole stringified stat result, not just the UID.
User IDs that don't exist on the host can be assigned just fine.
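Applied to the task from the question, only the owner and group expressions change:
- name: copy hass main config to storage
  become: yes
  template:
    src: configuration.yaml.j2
    dest: "{{ hass_data_dir }}/configuration.yaml"
    owner: "{{ stat_container_base_dir.stat.uid }}"
    group: "{{ stat_container_base_dir.stat.gid }}"
    mode: 0640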
I need to call a shell script that returns the private IP of an EC2 instance in an Ansible task.
Once I have the IP in a variable private_ip_var, I want to inject it into a Jinja2 template to generate a config file.
Here's what I'm thinking:
- hosts: all
  vars:
    inline_variable: 'hello again'
  tasks:
    - name: Get the IP of the ec2 instance
      command: get_ec2_private_ip.sh   # <----- shell script to dynamically get the ip of the ec2
      register: private_ip_var         # <----- saving the shell return value to this var

    - name: Inject that private_ip_var into the jinja template
      template:
        src: config.cfg.j2
        dest: config.cfg
config.cfg.j2
blah blah
The ip of the ec2 is: {{ private_ip_var }} <------------ THIS IS WHAT I WANT TO ACHIEVE
Variable given as inline - {{ inline_variable }} <------------- DONT CARE ABOUT THIS VAR
output - config.cfg
------
blah blah
The ip of the ec2 is: 10-251-50-12 <----------------- THIS IS WHAT I WANT
Variable given as inline - hello again <---------------- DONT CARE ABOUT THIS VAR
I don't care about inline_variable above; I only care about private_ip_var. How can I achieve this with Ansible so that I can generate that config file from a Jinja2 template?
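For reference, the result registered from command exposes the script output in its stdout attribute, so the register-and-template idea above can work as is; a minimal sketch, assuming get_ec2_private_ip.sh prints just the IP:
- hosts: all
  tasks:
    - name: Get the IP of the ec2 instance
      command: get_ec2_private_ip.sh
      register: ip_result            # hypothetical name; the raw output lives in ip_result.stdout

    - name: Inject that value into the jinja template
      template:
        src: config.cfg.j2
        dest: config.cfg
      vars:
        private_ip_var: "{{ ip_result.stdout }}"   # what config.cfg.j2 refers to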
To get AWS EC2 information, you will need to install the following on your (Linux) host:
boto3
botocore
Python >= 2.6
Then use the module ec2_instance_info. Here is an example:
ec2_instance_info:
  instance_ids:
    - i-23456789
Then you can filter on IP address information. You will most likely be interested in the following return field:
primary (true / false)
Indicates whether this IPv4 address is the primary private IP address of the network interface.
For more examples, check out: https://docs.ansible.com/ansible/latest/modules/ec2_instance_info_module.html#examples
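A sketch of wiring that module's output into the template from the question; the register name and instance id are placeholders, and ec2_instance_info returns an instances list whose entries include private_ip_address:
- name: Look up the instance
  ec2_instance_info:
    instance_ids:
      - i-23456789
  register: ec2_info

- name: Render the config with the private IP
  template:
    src: config.cfg.j2
    dest: config.cfg
  vars:
    private_ip_var: "{{ ec2_info.instances[0].private_ip_address }}"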
I'm trying to use Python to parse Ansible variables specified in an inventory file like the one below:
[webservers]
foo.example.com type=news
bar.example.com type=sports
[dbservers]
mongodb.local type=mongo region=us
mysql.local type=mysql region=eu
I want to be able to parse type=news for host foo.example.com in webservers, and type=mongo region=us for host mongodb.local under dbservers. Any help with this is greatly appreciated.
The play below
- name: List type=news hosts in the group webservers
  debug:
    msg: "{{ hostvars[item].inventory_hostname }}"
  loop: "{{ groups['webservers'] }}"
  when: hostvars[item].type == "news"

- name: List type=mongo and region=us hosts in the group dbservers
  debug:
    msg: "{{ hostvars[item].inventory_hostname }}"
  loop: "{{ groups['dbservers'] }}"
  when:
    - hostvars[item].type == "mongo"
    - hostvars[item].region == "us"
gives:
"msg": "foo.example.com"
"msg": "mongodb.local"
If the playbook is run on the host foo.example.com, you can get "type = news" simply by referencing "{{ type }}"; if you want to use it in "when" conditions, simply refer to "type".
If the playbook is run on the host mongodb.local, then "type" will automatically be "mongo" and "region" will automatically be "us".
If the variables are defined in the hosts file as you specified, their values are resolved automatically on the corresponding hosts.
Thus, the playbook can be executed on all hosts, and if you print the value of "type", for example:
- debug:
    msg: "{{ type }}"
Each host will show the unique value defined for it in the hosts file.
I'm not sure I understood the question correctly, but if it meant that on the foo.example.com host you need to get a list of servers from the "webservers" group that have "type = news", then the answer above already covers that.
Rather than re-inventing the wheel, I suggest you have a look at how Ansible itself parses ini files to turn them into an inventory object.
You could also easily get this info in JSON format with a very simple playbook (as suggested by @vladimirbotka), or rewrite your inventory in YAML, which would be much easier to parse with any external tool.
inventory.yaml
---
all:
  children:
    webservers:
      hosts:
        foo.example.com:
          type: news
        bar.example.com:
          type: sports
    dbservers:
      hosts:
        mongodb.local:
          type: mongo
          region: us
        mysql.local:
          type: mysql
          region: eu
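If the goal is to hand the data to an external tool, the "very simple playbook" idea above could be a sketch like this, printing each host's relevant inventory variables as JSON for post-processing:
- hosts: all
  gather_facts: false
  tasks:
    - debug:
        msg: "{{ {'host': inventory_hostname,
                  'type': type | default(''),
                  'region': region | default('')} | to_nice_json }}"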
I'm using Ansible AWX (Tower) and have a workflow that executes several templates one after the other, based on whether the previous execution was successful.
I noticed I can limit to a specific host when running a single template. I'd like to apply this to a workflow, and my guess is that I would have to use the survey option to achieve this; however, I'm not sure how.
I have tried to see if I can override the "hosts" value and that failed like I expected it to.
How can I go about having it ask me at the beginning of the workflow for the hostname/ip and not for every single template inside the workflow?
You have the set_stats option.
Let's suppose you have the following inventory:
10.100.10.1
10.100.10.3
10.100.10.6
Your inventory is called MyOfficeInventory. The first rule is that you need this same inventory across all your templates in order to play with the host selected in the first one.
I want to ping only my 10.100.10.6 machine, so in the template I choose MyOfficeInventory and limit to 10.100.10.6.
If we do:
---
- name: Ping
  hosts: all
  gather_facts: False
  connection: local
  tasks:
    - name: Ping
      ping:
We get:
TASK [Ping] ********************************************************************
ok: [10.100.10.6]
Cool! So from MyOfficeInventory only my selected host gets pinged. Now, in my workflow, the next template also has MyOfficeInventory selected (this is the rule mentioned above). If I ping there, I will ping all of the hosts unless I limit again, so let's do the magic:
In your first Template do:
- name: add devices with connectivity to the "working_hosts" group
  group_by:
    key: working_hosts

- name: "Artifact URL of test results to Tower Workflows"
  set_stats:
    data:
      myinventory: "{{ groups['working_hosts'] }}"
  run_once: True
Be careful, because for your playbook,
groups['all']
means:
"groups['all']": [
"10.100.10.1",
"10.100.10.3",
"10.100.10.6"
]
And with your new working_hosts group, you get only your current host:
"groups['working_hosts']": [
"10.100.10.6"
]
So now you have your brand new myinventory variable.
Use it like this in the rest of your Playbooks assigned to your Templates:
- name: Ping
  hosts: "{{ myinventory }}"
  gather_facts: False
  tasks:
    - name: Ping
      ping:
Your inventory variable will be transferred and you will get:
ok: [10.100.10.6]
One step further. Do you want to select your host from a Survey?
Create one with your hostname as input and keep your first playbook as:
- name: Ping
  hosts: "{{ mysurveyhost }}"
  gather_facts: False
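Putting the pieces together, the first workflow template could look like this sketch: limit the play to the surveyed host, then publish the resulting host list to the rest of the workflow via set_stats (mysurveyhost being the survey variable from above):
- hosts: "{{ mysurveyhost }}"
  gather_facts: False
  tasks:
    - name: add devices with connectivity to the "working_hosts" group
      group_by:
        key: working_hosts

    - name: pass the selected hosts on to the next workflow templates
      set_stats:
        data:
          myinventory: "{{ groups['working_hosts'] }}"
      run_once: True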
I have two instances in different VPCs which have the same private IP address.
ci-vpc:
  172.18.50.180:
    tags:
      Environment: ci
      Role: aRole
test-vpc:
  172.18.50.180:
    tags:
      Environment: test
      Role: web
I am running the following playbook:
- name: "print account specific variables"
hosts: "tag_Environment_ci:&tag_Role_web"
tasks:
- name: "print account specific variables for account {{ account }}"
debug:
msg:
- 'ec2_tag_Name': "{{ ec2_tag_Name }}"
'ec2_tag_Role': "{{ ec2_tag_Role }}"
'ec2_private_ip_address': "{{ ec2_private_ip_address }}"
'ec2_tag_Environment': "{{ ec2_tag_Environment }}"
Since I am asking for both role web and environment ci, none of these instances should be picked, but nevertheless the result that I am getting is:
ok: [172.18.50.180] => {
    "changed": false,
    "msg": [
        {
            "ec2_private_ip_address": "172.18.50.180",
            "ec2_tag_Environment": "test",
            "ec2_tag_Name": "test-web-1",
            "ec2_tag_Role": "web"
        }
    ]
}
Obviously this instance does not meet the requirements under hosts...
It seems like ec2.py searched for the Environment tag, found ci for 172.18.50.180, then searched separately for the Role tag, found another one under 172.18.50.180, and just marked that instance as ok, even though these are two different instances in different VPCs.
I've tried changing vpc_destination_variable in ec2.ini to id, but then I get an error when Ansible tries to connect to these instances, because it cannot connect to the ID...
fatal: [i-XXX]: UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname i-XXX: Name or service not known\r\n", "unreachable": true
}
Is there another option that will work under vpc_destination_variable? Any known solution for such a collision?
tl;dr: This is exactly what hostname_variable in ec2.ini is for, as documented:
# This allows you to override the inventory_name with an ec2 variable, instead
# of using the destination_variable above. Addressing (aka ansible_ssh_host)
# will still use destination_variable. Tags should be written as 'tag_TAGNAME'.
Unfortunately I missed it and only found it after looking around in ec2.py.
Longer answer with additional options for hostnames
After finding out about hostname_variable, I ran into another problem: it can receive only one variable. In my case I had some instances with the same private IP on one hand, and some with the same tags on the other (AWS auto scaling groups, same tags on all hosts), so I needed a way to differentiate between them.
I've created a gist with this option. My change is in line 848. This allows you to use multiple comma-separated variables in hostname_variable, e.g.:
hostname_variable = tag_Name,private_ip_address