Ansible copy ssh public key from file, use in uri call - regex

I need to copy the SSH public key from a local file, then use it in a uri task in my playbook.
Keep in mind, I cannot use the "authorized_key" module, as this is a system where I must use the API to configure public keys for users.
The code below keeps failing; I am 100% sure it's because of the filter I am using. I am including the commented-out section that does work for the body.
I am trying to use a lookup with regex_search; I used [^\s]\s[^\s], which works in Python. Also, the key is in a different directory on my local host (../../ssh/ssh_key/key.pub).
Any ideas?
- name: copy public key to gitea
  hosts: localhost
  tasks:
    - name: include user to add as variable
      include_vars:
        file: users.yaml
        name: users
    - name: Gather users key contents and create variable
      # shell: "cat ../keys/ssh_keys/zz123z.pub | awk '{print $1 FS $2}'"
      shell: "cat ../keys/ssh_keys/{{item.username}}.pub | awk '{print $1 FS $2}'"
      register: key
      with_items:
        - "{{users.user}}"
    - name: Add user's key to gitea
      uri:
        url: https://10.10.10.10/api/v1/admin/users/{{ item.username }}/keys
        headers:
          Authorization: "token {{ users.GiteaApiToken }}"
        validate_certs: no
        return_content: yes
        status_code: 201
        method: POST
        body: "{\"key\": \"{{ key.stdout }}\", \"read_only\": true, \"title\": \"{{ item.username }} shared VM\"}"
        body_format: json
      with_items:
        - "{{users.user}}"
This is the error I receive when using -vvv
TASK [Add user's key to gitea] *************************************************
task path: /home/dave/projects/Infrastructure/ansible/AddTempUsers/addusers.yaml:275
Wednesday 04 March 2020 18:14:29 -0500 (0:00:00.537) 0:00:01.991 *******
fatal: [localhost]: FAILED! => {
"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'stdout'\n\nThe error appears to be in '/home/dave/projects/Infrastructure/ansible/AddTempUsers/addusers.yaml': line 275, column 13, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: Add user's key to gitea\n ^ here\n"
}

I FIGURED IT OUT!
The undefined-variable error happened because the key-gathering task registers its result inside a with_items loop, so the registered variable holds a results list and key.stdout does not exist; the value has to be read per item as key.results[<index>].stdout.
I used shell with an awk command to gather the keys. (Note: I am including one awk for RSA keys and one for id_ed25519, which we use. The RSA one is commented out, but others can uncomment it if they wish to use it.)
I then used loop_control with index_var to iterate through the results.
Code below:
- name: copy public key to gitea
  hosts: localhost
  tasks:
    - name: include user to add as variable
      include_vars:
        file: users.yaml
        name: users
    - name: Gather users key contents and create variable
      # For RSA keys
      # shell: "cat ../keys/ssh_keys/{{item.username}}.pub | awk '/-END PUBLIC KEY-/ { p = 0 }; p; /-BEGIN PUBLIC KEY-/ { p = 1 }'"
      # For id_ed25519 keys
      shell: "cat ../keys/ssh_keys/{{item.username}}.pub | awk '{print $1 FS $2}'"
      register: key
      with_items:
        - "{{users.user}}"
    - name: Add user's key to gitea
      uri:
        url: https://10.10.10.10/api/v1/admin/users/{{ item.username }}/keys
        headers:
          Authorization: "token {{ users.GiteaApiToken }}"
        validate_certs: no
        return_content: yes
        status_code: 201
        method: POST
        body: "{\"key\": \"{{ key.results[ndx].stdout }}\", \"read_only\": true, \"title\": \"{{ item.username }} shared VM\"}"
        body_format: json
      with_items:
        - "{{users.user}}"
      loop_control:
        index_var: ndx
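As an aside, since the original goal was to read the key with a lookup plus a regex, the shell/awk step can probably be dropped entirely: a file lookup runs on the controller, and regex_search can keep just the key type and the base64 blob. A minimal sketch under that assumption (same directory layout as above, not tested against a real Gitea API):

- name: Add user's key to gitea (lookup variant)
  uri:
    url: https://10.10.10.10/api/v1/admin/users/{{ item.username }}/keys
    headers:
      Authorization: "token {{ users.GiteaApiToken }}"
    validate_certs: no
    return_content: yes
    status_code: 201
    method: POST
    body: "{\"key\": \"{{ pubkey }}\", \"read_only\": true, \"title\": \"{{ item.username }} shared VM\"}"
    body_format: json
  vars:
    # lookup('file') reads the .pub file on localhost; the regex keeps "<type> <blob>" and drops the trailing comment
    pubkey: "{{ lookup('file', '../keys/ssh_keys/' ~ item.username ~ '.pub') | regex_search('^\\S+\\s+\\S+') }}"
  with_items:
    - "{{users.user}}"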

Related

Looking to see if a key from one set of imported variables matches another so its value can be procured

I have two roughly formatted .yml files with key/value pairs in them. I then imported the values of both of these files successfully into a running playbook using the include_vars module.
Now, I want to be able to compare the value of the key/value pair from file/list 1, to all of the keys of file/list 2. Then finally when there is a match, print and preferably save/register the value of the matching key from file/list 2.
Essentially I am comparing a machine name to an IP list to try to grab the IP the machine needs out of that list. The name is "dynamic" and is different each time the playbook is run, as file/list 1 is always dynamically populated on each run.
Examples:
file/list 1 contents
machine_serial: m60
s_iteration: a
site_name: dud
t_number: '001'
file/list 2 contents
m51: 10.2.5.201
m52: 10.2.5.202
m53: 10.2.5.203
m54: 10.2.5.204
m55: 10.2.5.205
m56: 10.2.5.206
m57: 10.2.5.207
m58: 10.2.5.208
m59: 10.2.5.209
m60: 10.2.5.210
m61: 10.2.5.211
In a nutshell, I want the file/list 1 machine_serial key, whose value is currently m60, to find its key match in file/list 2, and then print and/or preferably register its value of 10.2.5.210.
What I've tried so far:
Playbook:
- name: IP gleaning comparison.
  hosts: localhost
  remote_user: ansible
  become: yes
  become_method: sudo
  vars:
    ansible_ssh_pipelining: yes
  tasks:
    - name: Try to do a variable import of the file1 file.
      include_vars:
        file: ~/active_ct-scanner-vars.yml
        name: ctfile1_vars
      become: no
    - name: Try to do an import of file2 file for lookup comparison to get an IP match.
      include_vars:
        file: ~/machine-ip-conversion.yml
        name: ip_vars
      become: no
    - name: Best, but failing attempt to get the value of the match-up IP.
      debug:
        msg: "{{ item }}"
      when: ctfile1_vars.machine_serial == ip_vars
      with_items:
        - "{{ ip_vars }}"
Every task except the final one works perfectly.
My failed output final task:
TASK [Best, but failing attempt to get the value of the match-up IP.] ***********************************************************************************
skipping: [localhost] => (item={'m51': '10.200.5.201', 'm52': '10.200.5.202', 'm53': '10.200.5.203', 'm54': '10.200.5.204', 'm55': '10.200.5.205', 'm56': '10.200.5.206', 'm57': '10.200.5.207', 'm58': '10.200.5.208', 'm59': '10.200.5.209', 'm60': '10.200.5.210', 'm61': '10.200.5.211'})
skipping: [localhost]
What I hoped for hasn't happened; it simply skips the task and doesn't iterate over the list like I was hoping, so there must be a problem somewhere. Hopefully there is an easy solution I've just missed. What could be the correct answer?
Given the files
shell> cat active_ct-scanner-vars.yml
machine_serial: m60
s_iteration: a
site_name: dud
t_number: '001'
shell> cat machine-ip-conversion.yml
m58: 10.2.5.208
m59: 10.2.5.209
m60: 10.2.5.210
m61: 10.2.5.211
Read the files
- include_vars:
    file: active_ct-scanner-vars.yml
    name: ctfile1_vars
- include_vars:
    file: machine-ip-conversion.yml
    name: ip_vars
Q: "Compare the machine name to an IP list and grab the IP."
A: Both variables ip_vars and ctfile1_vars are dictionaries. Use ctfile1_vars.machine_serial as an index into ip_vars:
match_up_IP: "{{ ip_vars[ctfile1_vars.machine_serial] }}"
gives
match_up_IP: 10.2.5.210
Example of a complete playbook for testing
- hosts: localhost
  gather_facts: false
  vars:
    match_up_IP: "{{ ip_vars[ctfile1_vars.machine_serial] }}"
  tasks:
    - include_vars:
        file: active_ct-scanner-vars.yml
        name: ctfile1_vars
    - include_vars:
        file: machine-ip-conversion.yml
        name: ip_vars
    - debug:
        var: match_up_IP
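If the serial from file 1 might not be present in file 2, a guarded variant (a sketch using the standard default filter; the fallback string is an arbitrary placeholder) avoids an undefined-variable failure:

  vars:
    match_up_IP: "{{ ip_vars[ctfile1_vars.machine_serial] | default('no-match-found') }}"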

How can I use regex in when condition

In an Ansible playbook, I'm trying to execute a shell command only if a service exists on the remote server.
I have 3 tasks:
service_facts
execution of shell command if tomcat is installed
display the output of the shell command if tomcat is installed
Here is my code:
- name: Get Infos
  hosts: all
  gather_facts: yes
  become: false
  remote_user: [MY_USER]
  tasks:
    - name: Get the list of services
      service_facts:
    - name: Get version of Tomcat if installed
      become: true
      shell: 'java -cp /opt/tomcat/lib/catalina.jar org.apache.catalina.util.ServerInfo | grep "Server version"'
      register: tomcat_version
      when: "'tomcat.service' in services"
    - debug: msg="{{ tomcat_version.stdout_lines }}"
      when: "'tomcat.service' in services"
The problem is that on certain servers the service name is, for example, tomcat-8.1.
How can I use a regex in the when condition?
I tried regex() and regex_search(); either I'm doing it wrong or I don't know how to do it.
Do you have any idea how to do this?
Thanks in advance!
Count matching items. For example
- service_facts:
- block:
    - shell: smartctl --version | head -1
      register: smart_version
    - debug:
        msg: "{{ smart_version.stdout_lines }}"
  when: _srvcs|length > 0
  vars:
    _regex: '.*smart.*'
    _srvcs: "{{ services|select('match', _regex) }}"
gives
msg:
- smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.4.0-73-generic] (local build)
The next option is to intersect the list of services, e.g.
  when: _srvcs|length > 0
  vars:
    my_services:
      - smartmontools.service
      - smart-8.1
      - smart-devel.0.0.1
    _srvcs: "{{ my_services|intersect(services) }}"
Debug
Q: "It gives me a failure on the server where my service doesn't exist, cause the playbook still tries to execute the shell. Is it normal?"
A: No. It is not normal. Print debug and find out why the condition evaluates to true, i.e. what service(s) match either the regex or the list. For example
- debug:
    msg: |
      _srvcs:
      {{ _srvcs|to_nice_yaml|indent(2) }}
  when: debug|d(false)|bool
  vars:
    my_services:
      - smartmontools.service
      - smart-8.1
      - smart-devel.0.0.1
    _srvcs: "{{ my_services|intersect(services) }}"
gives
msg: |-
_srvcs:
- smartmontools.service
To enable the task, run the playbook with the option -e debug=true.
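Applied to the original Tomcat question, the same counting pattern might look like the sketch below; the regex 'tomcat.*' is an assumption about how the tomcat units are named on the affected servers:

- service_facts:
- block:
    - shell: 'java -cp /opt/tomcat/lib/catalina.jar org.apache.catalina.util.ServerInfo | grep "Server version"'
      become: true
      register: tomcat_version
    - debug:
        msg: "{{ tomcat_version.stdout_lines }}"
  when: _tomcat_srvcs|length > 0
  vars:
    # services comes from service_facts; select('match', ...) filters the service names
    _tomcat_srvcs: "{{ services|select('match', 'tomcat.*') }}"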

Ansible lineinfile regexp to manage /etc/exports

I've been hitting a wall trying to get /etc/exports managed via Ansible.
I've got a role that installs a piece of software on a VM, and I then want to add an entry to /etc/exports on the NFS server for that specific VM, so it's able to access the NFS shares it needs.
Lineinfile sounds like the way to go, but so far I can't figure out how to write this properly.
I want this to:
not modify if the host is in the line, no matter where
add the NFS share and the host if there's no line for the NFS share
add the host to the share in case it isn't in there.
The latest installment of my 'add to /etc/exports' attempt, which I thought should work but doesn't, is:
- name: Add hosts to mountpoint line
  ansible.builtin.lineinfile:
    path: /etc/exports
    line: '\1 {{ host_ip }}(root_squash,no_subtree_check)'
    regex: '^((?!{{ volume_mountpoint }}.*{{ host_ip }}\(root_squash,no_subtree_check\).*).*)$'
    backrefs: yes
but I'm still getting all kinds of weird side effects. I've used back-references etc. before, but somehow this one keeps tripping me up.
Does anyone see what's going wrong?
Typical /etc/exports entry:
/srv/files 172.16.0.14(rw,no_root_squash,no_subtree_check)
It's not possible in one step to both modify a line using back-references and add the line if it is missing. To modify the existing mount point, the back-references are needed. For example, given the files for testing
shell> cat etc/export1
/srv/files 172.16.0.14(rw,no_root_squash,no_subtree_check)
shell> cat etc/export2
/srv/files 172.16.0.15(rw,no_root_squash,no_subtree_check)
shell> cat etc/export3
/srv/download 172.16.0.14(rw,no_root_squash,no_subtree_check)
the task
tasks:
  - lineinfile:
      path: "etc/{{ item }}"
      regex: '^{{ mount }}(\s+)({{ ipr }})*({{ optionsr }})*(\s*)(.*)$'
      line: '{{ mount }}\g<1>{{ ip }}{{ options }} \g<5>'
      backrefs: true
    vars:
      mount: /srv/files
      ipr: '172\.16\.0\.14'
      ip: '172.16.0.14'
      optionsr: '\(.*?\)'
      options: '(root_squash,no_subtree_check)'
    loop:
      - export1
      - export2
      - export3
gives
--- before: etc/export1 (content)
+++ after: etc/export1 (content)
@@ -1 +1 @@
-/srv/files 172.16.0.14(rw,no_root_squash,no_subtree_check)
+/srv/files 172.16.0.14(root_squash,no_subtree_check)
changed: [localhost] => (item=export1)
--- before: etc/export2 (content)
+++ after: etc/export2 (content)
@@ -1 +1 @@
-/srv/files 172.16.0.15(rw,no_root_squash,no_subtree_check)
+/srv/files 172.16.0.14(root_squash,no_subtree_check) 172.16.0.15(rw,no_root_squash,no_subtree_check)
changed: [localhost] => (item=export2)
ok: [localhost] => (item=export3)
The first two files are all right. The problem is the third file. The line hasn't been added to the file. Quoting from backrefs
"... if the regexp does not match anywhere in the file, the file will be left unchanged."
The explanation is simple. There are no groups if the regex doesn't match. If there are no groups the line can't be created.
On the other hand, quoting from regexp
"... If the regular expression is not matched, the line will be added to the file ..."
As a result, it's not possible to ask lineinfile to add a line if the regexp does not match and, at the same time, to do nothing if the regexp is matched. If the regexp is matched you need back-references. If you use back-references you can't add a missing line.
To solve this problem read the content of the files and create a dictionary
- command: "cat etc/{{ item }}"
  register: result
  loop: [export1, export2, export3]
- set_fact:
    content: "{{ dict(_files|zip(_lines)) }}"
  vars:
    _lines: "{{ result.results|map(attribute='stdout_lines')|list }}"
    _files: "{{ result.results|map(attribute='item')|list }}"
gives
content:
  export1:
    - /srv/files 172.16.0.14(rw,no_root_squash,no_subtree_check)
  export2:
    - /srv/files 172.16.0.15(rw,no_root_squash,no_subtree_check)
  export3:
    - /srv/download 172.16.0.14(rw,no_root_squash,no_subtree_check)
Now add the line only if missing, i.e. do not replace the line if the mount point is already there
- lineinfile:
    path: "etc/{{ item }}"
    line: '{{ mount }} {{ ip }}{{ options }}'
  vars:
    mount: /srv/files
    ip: '172.16.0.14'
    options: '(root_squash,no_subtree_check)'
  loop: "{{ content|list }}"
  when: content[item]|select('search', mount)|length == 0
gives
skipping: [localhost] => (item=export1)
skipping: [localhost] => (item=export2)
--- before: etc/export3 (content)
+++ after: etc/export3 (content)
@@ -1 +1,2 @@
/srv/download 172.16.0.14(rw,no_root_squash,no_subtree_check)
+/srv/files 172.16.0.14(root_squash,no_subtree_check)
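Putting the two steps together, one possible ordering (a sketch reusing the test files and the mount/ip/ipr/optionsr/options variables exactly as defined in the tasks above; not the only way to do it) is to normalise existing mount-point lines first with the back-reference task, re-read the files, and only then add the line where the mount point is still missing:

- lineinfile:                    # 1. rewrite IP/options on lines that already contain the mount point
    path: "etc/{{ item }}"
    regex: '^{{ mount }}(\s+)({{ ipr }})*({{ optionsr }})*(\s*)(.*)$'
    line: '{{ mount }}\g<1>{{ ip }}{{ options }} \g<5>'
    backrefs: true
  loop: [export1, export2, export3]
- command: "cat etc/{{ item }}"  # 2. re-read the (possibly modified) files
  register: result
  changed_when: false
  loop: [export1, export2, export3]
- set_fact:                      # 3. map file name -> list of lines
    content: "{{ dict(result.results|map(attribute='item')|list|
                 zip(result.results|map(attribute='stdout_lines')|list)) }}"
- lineinfile:                    # 4. add the entry only where the mount point is absent
    path: "etc/{{ item }}"
    line: '{{ mount }} {{ ip }}{{ options }}'
  loop: "{{ content|list }}"
  when: content[item]|select('search', mount)|length == 0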

Ansible playbook: error connecting to a Windows host (WinRM) using AWS Secrets Manager

I am trying to connect my Ansible host to a Windows server using WinRM.
My ansible version:
ansible 2.10.8
config file = None
configured module search path = ['/home/ec2-user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/ec2-user/.local/lib/python3.6/site-packages/ansible
executable location = /home/ec2-user/.local/bin/ansible
python version = 3.6.8 (default, Dec 5 2019, 15:45:45) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]
It works when the password is in vars:
- hosts: win
  vars:
    ansible_user: "ansible"
    ansible_password: "Itismypassword"
and it is not working with this configuration:
- hosts: win
  vars:
    ansible_user: "ansible"
    ansible_password: "{{ lookup('amazon.aws.aws_secret', 'ansible_password', bypath=true) | regex_search ('ansible_password\\\":\\\"(.*)\\\"','\\1')}}"
I retrieve the password (I used a regex to get only the password; I'm not sure that is the correct way to do it).
When I want to use it, I get this error:
fatal: [win_server]: FAILED! => {
"msg": "Invalid type for configuration option plugin_type: connection plugin: winrm setting: remote_password : Invalid type provided for \"string\": ['Itismypassword']"
}
Thanks for your help!
The error message is pretty clear: list[str] != str; while I don't have an AWS Secret Manager to test against, thankfully the error message makes knowing the fix pretty obvious: grab the | first of the resulting list
ansible_password: "{{
    lookup('amazon.aws.aws_secret', 'ansible_password', bypath=true)
    | regex_search ('ansible_password\\\":\\\"(.*)\\\"','\\1')
    | first }}"
As for your aside, the answer is for sure not to use regex to extract those values; based on the regex you're using, I'm guessing the output of that aws_secret lookup is in fact JSON, and thus the correct behavior is to | from_json to turn it back into a dict (or list[dict]), then extract the key you want
something like this, but without knowing the actual shape this is likely not 100% correct:
ansible_password: >-
  {{ lookup('amazon.aws.aws_secret', 'ansible_password', bypath=true)
  | from_json | map(attribute='ansible_password') | first }}
Thanks to mdaniel I made progress in my understanding of Ansible and AWS Secrets Manager: it's working with the regex, but I'm trying to figure out how to do it in a proper way.
When I get the secret from AWS Secrets Manager with this command:
debug: msg="{{ lookup('amazon.aws.aws_secret', 'ansible_secret', bypath=true) | dict2items }} "
I get this answer:
"msg": [
{
"key": "ansible_secret",
"value": "{\"password\":\"itismypassword\",\"user\":\"ansible_user\"}"
}
]
I don't know how to get "itismypassword" out of the value field. A JSON query, maybe?
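Based on that output, the value field is itself a JSON string, so one regex-free sketch (assuming the secret path 'ansible_secret' and the key names shown above) would be to index into the lookup result and pipe it through from_json:

ansible_password: "{{ (lookup('amazon.aws.aws_secret', 'ansible_secret', bypath=true)['ansible_secret'] | from_json).password }}"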

ansible parse text string from stdout

My problem is with Ansible and parsing stdout. I need to capture the stdout from an Ansible play, parse this output for a specific substring, and save it into a var. My specific use case is below:
- shell: "vault.sh --keystore EAP_HOME/vault/vault.keystore |
--keystore-password vault22 --alias vault --vault-block |
vb --attribute password --sec-attr 0penS3sam3 --enc-dir |
EAP_HOME/vault/ --iteration 120 --salt 1234abcd"
register: results
become: true
This generates output containing the following line; the goal is to capture the masked key that JBoss Vault generates and save it in an Ansible var so I can use it to configure the standalone.xml template:
vault-option name="KEYSTORE_PASSWORD" value="MASK-5dOaAVafCSd"/>
I need a way to parse this string, possibly with a regex, and save the "MASK-5dOaAVafCSd" substring into an Ansible var using the set_fact module or any other Ansible module.
Currently my code looks like this
#example stdout
results: vault-option name=\"KEYSTORE_PASSWORD\" value=\"MASK-5dOaAVafCSd\"/>

- name: JBOSS_VAULT:define keystore password masked value variable
  set_fact:
    masked_value: |
      "{{ results.stdout |
      regex_replace('^.+(MASK-.+?)\\.+','\\\1') }}"
This code is defining masked_value as the results.stdout, not the expected capture group.
You are very close. I advise you to use regex101.com to test regular expressions.
Here is my solution:
---
- hosts: localhost
  gather_facts: no
  tasks:
    - shell: echo 'vault-option name="KEYSTORE_PASSWORD" value="MASK-5dOaAVafCSd"'
      register: results
    - set_fact:
        myvalue: "{{ results.stdout | regex_search(regexp,'\\1') }}"
      vars:
        regexp: 'value=\"([^"]+)'
    - debug:
        var: myvalue
result:
ok: [localhost] => {
    "myvalue": [
        "MASK-5dOaAVafCSd"
    ]
}
Update:
regex_search returns a list of found matches, so to get only first one use:
{{ results.stdout | regex_search(regexp,'\\1') | first }}
The above solution worked for me; however, I had to add some extra logic to filter the shell command output to get to the line that contains the following:
<vault-option name="KEYSTORE_PASSWORD" value="MASK-6qcNdkIprlA"/>
because the vault command output has many lines in it. Once this line is captured, the solution given by Konstantin works just fine. Below is the whole thing that needs to be done in one place.
- name: Creating jboss vault
  shell: |
    {{ baseDir }}/bin/vault.sh -e {{ vaultDir }} -k {{ keystoreURL }} -p {{ keystorePassword }} \
    -s {{ keystoreSalt }} -i {{ iterationCount }} -v {{ keystoreAlias }} -b {{ vaultBlock }} \
    -a {{ attributeName }} -x {{ attributeValue }}
  register: vaultResult
- set_fact:
    jbossKeystorePassword: "{{ item | regex_search('value=\"([^\"]+)','\\1') | first }}"
  when: item | trim | match('.*KEYSTORE_PASSWORD.*')
  with_items:
    - "{{ vaultResult.stdout_lines }}"
- debug:
    var: jbossKeystorePassword
Be sure to replace all variables with your values in the above vault.sh command.
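Since the stated goal was to use the captured value in the standalone.xml template, the fact can then be consumed by a template task. A minimal sketch; the template name, destination path, and placeholder are assumptions, with {{ baseDir }} reused from the vault.sh task above:

- name: Render standalone.xml with the masked keystore password
  template:
    # standalone.xml.j2 would contain e.g. value="{{ jbossKeystorePassword }}" in the vault-option line
    src: standalone.xml.j2
    dest: "{{ baseDir }}/standalone/configuration/standalone.xml"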