In my site.yml, I run some common tasks, then include 3 other playbooks.
These 3 playbooks then run a role each.
I do this so I can run a full site.yml install, or I can just run a smaller playbook.
I want to prompt for a username and password in the site.yml, which I can do.
Then, in the other 3 playbooks/roles, I want to check whether the username and password were already provided; if not, prompt for them.
I do not want to prompt for credentials multiple times.
site.yml
---
- hosts: all
  vars_prompt:
    - name: "username"
      prompt: "enter username"

- include: 1.yml
1.yml
---
- name: install one
  hosts: all
  # If username has not been defined - this is
  # where I am confused how to check if it was defined in site.yml
  vars_prompt:
    - name: "username"
      prompt: "enter username"
  roles:
    - 1role
If I run site.yml, it will prompt for the username and then run 1.yml, and I don't want 1.yml to prompt for the username again because it was already prompted for in site.yml.
If I run just 1.yml, I want it to prompt for the username, as I am not running site.yml in this case.
Is there a way to prompt for credentials from a playbook, then check for them in an included playbook?
You could have Ansible write a file on the host and then check whether the credentials have already been filled in, or (probably the better option) you could use register and then use when to check (when works like an if statement).
For instance:
site.yml
- name: "username"
- prompt" "enter username"
register: username_check
1.yml
- name: "userame"
prompt: "enter username"
when: username_check is not defined
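Depending on the Ansible version, vars_prompt entries may not accept register or when at all, so here is a rough, untested sketch of the same idea done with a pause task guarded by a when check. The variable and file names are taken from the question, and the set_fact in site.yml is only there so the prompted value survives into the included play:
site.yml
---
- hosts: all
  vars_prompt:
    - name: "username"
      prompt: "enter username"
  tasks:
    - name: Persist the prompted value as a host fact so later plays can see it
      set_fact:
        username: "{{ username }}"

- include: 1.yml
1.yml
---
- name: install one
  hosts: all
  pre_tasks:
    - name: Prompt only when username was not provided by an earlier play
      pause:
        prompt: "enter username"
      register: username_prompt
      when: username is not defined
    - name: Store the answer under the same variable name
      set_fact:
        username: "{{ username_prompt.user_input }}"
      when: username is not defined
  roles:
    - 1role
Run standalone, 1.yml prompts; run via site.yml, the fact set in the first play makes the when check skip the prompt.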
I hope this helps in some regard. I tried to comment but it wouldn't let me.
I have two roughly formatted YAML files with key/value pairs in them. I imported the values of both files into a running playbook using the include_vars module.
Now I want to compare a value from file/list 1 against all of the keys of file/list 2 and, when there is a match, print and preferably save/register the value of the matching key from file/list 2.
Essentially I am comparing a machine name to an IP list to try to grab the IP the machine needs out of that list. The name is "dynamic" and is different each time the playbook is run, as file/list 1 is always dynamically populated on each run.
Examples:
file/list 1 contents
machine_serial: m60
s_iteration: a
site_name: dud
t_number: '001'
file/list 2 contents
m51: 10.2.5.201
m52: 10.2.5.202
m53: 10.2.5.203
m54: 10.2.5.204
m55: 10.2.5.205
m56: 10.2.5.206
m57: 10.2.5.207
m58: 10.2.5.208
m59: 10.2.5.209
m60: 10.2.5.210
m61: 10.2.5.211
In a nutshell, I want to take the file/list 1 machine_serial key, whose value is currently m60, find its matching key in file/list 2, and then print and/or preferably register its value of 10.2.5.210.
What I've tried so far:
Playbook:
- name: IP gleaning comparison.
  hosts: localhost
  remote_user: ansible
  become: yes
  become_method: sudo
  vars:
    ansible_ssh_pipelining: yes
  tasks:
    - name: Try to do a variable import of the file1 file.
      include_vars:
        file: ~/active_ct-scanner-vars.yml
        name: ctfile1_vars
      become: no
    - name: Try to do an import of file2 file for lookup comparison to get an IP match.
      include_vars:
        file: ~/machine-ip-conversion.yml
        name: ip_vars
      become: no
    - name: Best, but failing attempt to get the value of the match-up IP.
      debug:
        msg: "{{ item }}"
      when: ctfile1_vars.machine_serial == ip_vars
      with_items:
        - "{{ ip_vars }}"
Every task except the final one works perfectly.
The output of my failing final task:
TASK [Best, but failing attempt to get the value of the match-up IP.] ***********************************************************************************
skipping: [localhost] => (item={'m51': '10.200.5.201', 'm52': '10.200.5.202', 'm53': '10.200.5.203', 'm54': '10.200.5.204', 'm55': '10.200.5.205', 'm56': '10.200.5.206', 'm57': '10.200.5.207', 'm58': '10.200.5.208', 'm59': '10.200.5.209', 'm60': '10.200.5.210', 'm61': '10.200.5.211'})
skipping: [localhost]
What I hoped for hasn't happened: it simply skips the task and doesn't iterate over the list like I was hoping, so there must be a problem somewhere. Hopefully there is an easy solution that I just missed. What could be the correct answer?
Given the files
shell> cat active_ct-scanner-vars.yml
machine_serial: m60
s_iteration: a
site_name: dud
t_number: '001'
shell> cat machine-ip-conversion.yml
m58: 10.2.5.208
m59: 10.2.5.209
m60: 10.2.5.210
m61: 10.2.5.211
Read the files
- include_vars:
    file: active_ct-scanner-vars.yml
    name: ctfile1_vars
- include_vars:
    file: machine-ip-conversion.yml
    name: ip_vars
Q: "Compare the machine name to an IP list and grab the IP."
A: Both variables ip_vars and ctfile1_vars are dictionaries. Use ctfile1_vars.machine_serial as an index into ip_vars:
match_up_IP: "{{ ip_vars[ctfile1_vars.machine_serial] }}"
gives
match_up_IP: 10.2.5.210
Example of a complete playbook for testing. match_up_IP is declared in vars but is evaluated lazily, only when the debug task uses it, by which point both include_vars tasks have already loaded the dictionaries.
- hosts: localhost
  gather_facts: false
  vars:
    match_up_IP: "{{ ip_vars[ctfile1_vars.machine_serial] }}"
  tasks:
    - include_vars:
        file: active_ct-scanner-vars.yml
        name: ctfile1_vars
    - include_vars:
        file: machine-ip-conversion.yml
        name: ip_vars
    - debug:
        var: match_up_IP
I am experiencing strange behavior: when I run role B, it complains about role A's code, which I can run successfully on its own! I have reduced this to the following minimal example:
$ cat playbooka.yml
- hosts:
    - host_a
  roles:
    - role: rolea
      tags:
        - taga
    - role: roleb
      tags:
        - tagb
I have tagged the two roles because I want to run role A or role B selectively; they consist of simple tasks, as shown below in this minimal example:
$ cat roles/rolea/tasks/main.yml
- name: Get service_facts
  service_facts:

- debug:
    msg: '{{ ansible_facts.services["amazon-ssm-agent"]["state"] }}'

- when: ansible_facts.services["amazon-ssm-agent"]["state"] != "running"
  meta: end_play
$ cat roles/roleb/tasks/main.yml
- debug:
    msg: "I am roleb"
The preview confirms that I can run individual roles as specified by tags:
$ ansible-playbook playbooka.yml -t taga -D -C --list-hosts --list-tasks
playbook: playbooka.yml

  play #1 (host_a): host_a    TAGS: []
    pattern: ['host_a']
    hosts (1):
      3.11.111.4
    tasks:
      rolea : Get service_facts    TAGS: [taga]
      debug    TAGS: [taga]

$ ansible-playbook playbooka.yml -t tagb -D -C --list-hosts --list-tasks

playbook: playbooka.yml

  play #1 (host_a): host_a    TAGS: []
    pattern: ['host_a']
    hosts (1):
      3.11.111.4
    tasks:
      debug    TAGS: [tagb]
I can run role A OK:
$ ansible-playbook playbooka.yml -t taga -D -C
PLAY [host_a] *************************************************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ****************************************************************************************************************************************************************************************************************************
ok: [3.11.111.4]
TASK [rolea : Get service_facts] ******************************************************************************************************************************************************************************************************************
ok: [3.11.111.4]
TASK [rolea : debug] ******************************************************************************************************************************************************************************************************************************
ok: [3.11.111.4] => {
"msg": "running"
}
PLAY RECAP ****************************************************************************************************************************************************************************************************************************************
3.11.111.4 : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
But when I run role B, it complains about the code in role A, which I just ran successfully!
$ ansible-playbook playbooka.yml -t tagb -D -C
PLAY [host_a] *************************************************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ****************************************************************************************************************************************************************************************************************************
ok: [3.11.111.4]
ERROR! The conditional check 'ansible_facts.services["amazon-ssm-agent"]["state"] != "running"' failed. The error was: error while evaluating conditional (ansible_facts.services["amazon-ssm-agent"]["state"] != "running"): 'dict object' has no attribute 'services'
The error appears to be in '<path>/roles/rolea/tasks/main.yml': line 9, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- when: ansible_facts.services["amazon-ssm-agent"]["state"] != "running"
^ here
We could be wrong, but this one looks like it might be an issue with
unbalanced quotes. If starting a value with a quote, make sure the
line ends with the same set of quotes. For instance this arbitrary
example:
foo: "bad" "wolf"
Could be written as:
foo: '"bad" "wolf"'
I have two questions:
Why should role A's code be involved at all?
Even if it gets involved, ansible_facts has services, and the service is "running", as shown above by running role A.
PS: I am using the latest Ansible 2.10.2 and the latest Python 3.9.1 locally on macOS. The remote Python can be either 2.7.12 or 3.5.2 (Ubuntu 16.04). I worked around the problem by testing whether the dictionary has the services key:
ansible_facts.services is not defined or ansible_facts.services["amazon-ssm-agent"]["state"] != "running"
but it still surprises me that role B interprets role A's code, and interprets it incorrectly. Is this a bug that I should report?
From the notes in the meta module documentation:
Skipping meta tasks with tags is not supported before Ansible 2.11.
Since you run Ansible 2.10, the when condition for your meta task in rolea is always evaluated, whatever tag you use. When you use -t tagb, ansible_facts.services["amazon-ssm-agent"] does not exist because you skipped service_facts, and you then get the error you reported.
You can either:
upgrade to Ansible 2.11 (might be a little soon as I write this answer, since it is not yet available over pip...)
rewrite your condition so that the meta task skips when the var does not exist, e.g.
when:
  - ansible_facts.services["amazon-ssm-agent"]["state"] is defined
  - ansible_facts.services["amazon-ssm-agent"]["state"] != "running"
The second solution is still good practice IMO in any situation (e.g. sharing your work with someone running an older version, accidentally running against a host without the agent installed...).
One other possibility in your specific case is to move the service_facts task to another role higher in the play order, or to the pre_tasks section of your playbook, and tag it always. In this case the task will always run and the fact will always exist, whatever tag you use; a rough sketch follows.
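Untested sketch of that last option, reusing the names from the question; the service_facts call moves to pre_tasks and is tagged always so it runs regardless of which role tag is selected:
- hosts:
    - host_a
  pre_tasks:
    - name: Get service_facts before any role runs
      service_facts:
      tags:
        - always
  roles:
    - role: rolea
      tags:
        - taga
    - role: roleb
      tags:
        - tagb
With tags: always on the pre_task, ansible_facts.services exists even when only -t tagb is selected, so the condition on the meta task in rolea can be evaluated without error.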
I have a simple playbook that runs Cisco NX-OS commands, and the playbook ran successfully.
I would like to know how to save all the results into a file, regardless of how many hosts I have, and how to use a Survey to input the filename.
Currently, here is my code:
---
- name: run multiple commands on remote nodes
  nxos_command:
    commands:
      - show clock
      - show int status
      - show cdp neigh
      - show int desc
      - show port-channel summ
      - show vpc
      - show vpc role
I tried with this code:
---
- name: run multiple commands on remote nodes
  register: myshell_output
  nxos_command:
    commands:
      - show clock
      - show int status
      - show cdp neigh
      - show int desc
      - show port-channel summ
      - show vpc
      - show vpc role

- name: Saving data to local file
  copy:
    content: "{{ myshell_output.stdout|join('\n') }}"
    dest: "/tmp/hello.txt"
  delegate_to: localhost
It gives me this error:
FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'ansible.utils.unsafe_proxy.AnsibleUnsafeText object' has no attribute 'stdout'\n\nThe error appears to be in '/tmp/awx_1869_7__9l_9l/project/roles/bcpcommands/tasks/main.yml': line 3, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: run multiple commands on remote nodes\n ^ here\n"}
I normally limit the hosts in the Ansible Tower LIMIT field.
Ideally, could the output file also include the hostname and the commands that I keyed in?
Thanks
You probably got the indenting wrong. Try:
---
- hosts: my_host
  tasks:
    - name: run multiple commands on remote nodes
      nxos_command:
        commands: "{{ item }}"
      loop:
        - show clock
        - show int status
        - show cdp neigh
        - show int desc
        - show port-channel summ
        - show vpc
        - show vpc role
      register: myshell_output

    - debug:
        msg: "{{ myshell_output }}"

    - name: Saving data to local file and include hostname
      copy:
        content: "{{ myshell_output.results | map(attribute='stdout') | list | flatten | join('\n') }} hostname: {{ inventory_hostname }}"
        dest: "/tmp/hello.txt"
      delegate_to: localhost
Edit the hostname (my_host) to match your inventory.
Because the command task loops, the registered variable holds a results list; each entry must contain stdout output, otherwise the copy task will fail.
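The question also mentions feeding the filename in from a Tower/AWX Survey. Assuming the survey variable is named output_filename (a made-up name for illustration), the destination could be parameterised and made unique per host, roughly like this:
    - name: Save output using the survey-provided filename, one file per host
      copy:
        content: "{{ myshell_output.results | map(attribute='stdout') | list | flatten | join('\n') }}"
        dest: "/tmp/{{ output_filename }}-{{ inventory_hostname }}.txt"
      delegate_to: localhost
Putting inventory_hostname in the path keeps multiple hosts from overwriting each other's file on the control machine.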
I wrote a task that is responsible for changing the supervisor config file. The issue is that on some servers we have more than one app running workers, so sometimes more than one path needs to be added to the include section of supervisord.conf.
Currently I have this task in /roles/supervisor/tasks/main.yml:
- name: Add apps paths in include section
  lineinfile:
    dest: /etc/supervisor/supervisord.conf
    regex: '^files ='
    line: 'files = /etc/supervisor/conf.d/*.conf /home/app/{{ app_name }}/releases/app/shared/supervisor/*.conf /home/dev/{{ app_name2 }}/releases/dev/shared/supervisor/*.conf'
  when: ansible_hostname == 'ser-db-10'
  notify: restart supervisor
  tags: multi_workers
... and added this in /roles/supervisor/defaults/main.yml:
app_name: bla
app_name2: blabla
It works, but I don't like that two application paths are hardcoded in line, and I should probably also use a variable in place of ser-db-10.
I am wondering how to rebuild this task to make it more independent.
What I mean is, if there are 4 apps, add 4 paths, if there are 2 apps, add 2 paths.
What is the most efficient way to do this?
As an example of how to put together the parameter line, the play below
- hosts: test_01
  vars:
    app_name1: A
    app_name2: B
    my_conf:
      test_01:
        lines:
          - '/etc/*.conf'
          - '/etc/{{ app_name1 }}/*.conf'
          - '/etc/{{ app_name2 }}/*.conf'
  tasks:
    - debug:
        msg: "files = {{ my_conf[inventory_hostname].lines|join(' ') }}"
gives
"msg": "files = /etc/*.conf /etc/A/*.conf /etc/B/*.conf"
With an appropriate dictionary my_conf, the task below should do the job
- name: Add apps paths in include section
  lineinfile:
    dest: /etc/supervisor/supervisord.conf
    regex: '^files ='
    line: "files = {{ my_conf[inventory_hostname].lines|join(' ') }}"
  notify: restart supervisor
  tags: multi_workers
(not tested)
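The my_conf dictionary does not have to live in the play; it could just as well come from inventory variables. A hypothetical group_vars file, reusing the host name from the question and making up a second host and its paths purely for illustration:
shell> cat group_vars/all/supervisor.yml
my_conf:
  ser-db-10:
    lines:
      - '/etc/supervisor/conf.d/*.conf'
      - '/home/app/{{ app_name }}/releases/app/shared/supervisor/*.conf'
      - '/home/dev/{{ app_name2 }}/releases/dev/shared/supervisor/*.conf'
  ser-db-11:
    lines:
      - '/etc/supervisor/conf.d/*.conf'
      - '/home/app/{{ app_name }}/releases/app/shared/supervisor/*.conf'
Each host then gets exactly the paths listed for it, so supporting 4 apps on one server and 2 on another is just a matter of adding or removing lines.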
---
- hosts: localhost
  user: root
  tasks:
    - command: "ls /root/Tmp/Deployment/script_files/Hotfix"
      register: dir_out

    - debug: msg="The hotfix ids are: {{dir_out.stdout_lines}}"
The output I got was not in the format I want. I want it to be:
The hotfix ids are :["1001","1002"]
How do I do this?
I needed to change: {{dir_out.stdout_lines}} to {{dir_out.stdout_lines|join(',')}}
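For completeness, the adjusted debug task would look roughly like this; note that join(',') prints 1001,1002, while the to_json filter (shown as an alternative) reproduces the bracketed form from the question:
    - debug: msg="The hotfix ids are: {{dir_out.stdout_lines|join(',')}}"

    - debug: msg="The hotfix ids are: {{dir_out.stdout_lines|to_json}}"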