Saltstack load pillar in a for loop - templates

I am developing an automatic ProFTPD installation with Salt. I want to get the FTP users from a pillar into a template, but I can't get the pillar to work: I initialized the pillar with the user data and iterate over it in a for loop, but the pillar user data never shows up in the loop.
When I run salt-call pillar.get ftpusers on the minion, the response is only:
local:
This is my pillar ftpusers.sls:
ftp-server.ftpusers:
  user:
    - user: user
    - passhash: j2k3hk134123l1234ljh!"·$ser
    - uuid: 1001
    - guid: 1001
    - home: /srv/ftp/user
    - shel: /bin/false
And this is the for loop:
{% for users in pillar.get('ftpusers', {}).items() %}

/srv/herma-ftp/.ftpusers:
  file.managed:
    - user: root
    - group: root
    - mode: 444
    - contents: '{{ user }}:{{ args['passhash'] }}:{{ args['uuid'] }}:{{ args['guid'] }}::{{ args['home'] }}:{{ args['shel'] }}'
    - require:
      - file: /srv/herma-ftp

/srv/herma-ftp/{{ user }}:
  file.directory:
    - user: nobody
    - group: nobody
    - dir_mode: 775
    - makedirs: True
    - require:
      - file: /srv/herma-ftp
    - watch:
      - file: /srv/herma-ftp
  module.run:
    - name: file.set_selinux_context
    - path: {{ args['home'] }}
    - type: public_content_t
    - unless:
      - stat -c %C {{ args['home'] }} | grep -q public_content_t

{% endfor %}
When I run the following on the minion:
salt-call -l debug state.sls herma-ftp-server saltenv=My-enviroment test=True
the for loop is not expanded as expected, because it cannot get the pillar data.

Your loop should look like:
{% for user, args in pillar.get('ftpusers', {}).items() %}
Also, the contents argument of file.managed doesn't support templating. What you need to do is move the /srv/herma-ftp/.ftpusers state outside of the loop, and put the loop inside the file template. The final layout of your state should look like:
/srv/herma-ftp/.ftpusers:
  file.managed:
    - source: salt://ftpserver/dot.ftpusers
    - template: jinja
    ...

...

{% for user, args in pillar.get('ftpusers', {}).items() %}
/srv/herma-ftp/{{ user }}:
  file.managed:
    ...
{% endfor %}
And your ftpserver/dot.ftpusers would look like:
{% for user, args in pillar.get('ftpusers', {}).items() %}
{{ user }}:{{ args['passhash'] }}:{{ args['uuid'] }}:{{ args['guid'] }}::{{ args['home'] }}:{{ args['shel'] }}
{% endfor %}
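Note that for the user, args unpacking in the corrected loop to work, the pillar data needs to be reachable under the key ftpusers (the key the loop asks pillar.get for) and shaped as a mapping of usernames to attribute mappings rather than a list. A rough sketch of such a pillar, reusing the keys from the question with placeholder values:
ftpusers:
  user:
    passhash: somehash
    uuid: 1001
    guid: 1001
    home: /srv/ftp/user
    shel: /bin/false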

Related

Include files in Jinja, applying template then filtering

Summary
I have a Jinja2 template which I'm running with Ansible.
I would like my template to load another file, as a template (i.e. evaluating {{ var }}), then I'll filter that, and then paste the result into the top-level template.
I think I'm almost there, I just need to find a Jinja2 filter which takes in a string and parses it as a template.
MWE
In this example let's assume the filter I want to apply is just to make the file uppercase.
(Obviously this case is so simple I could do it in one template file. But my real use case is more complex.)
Top level template main.yaml.j2:
---
something:
  blah:
    x: {{ y }}
    {%- set names = [ 'John', 'Amy' ] %}
    z: >
      {{ lookup('file', './other-file.j2') | upper | indent(4*2) }}
other-file.j2:
{%- for name in names %}
Hello {{ name }}
{%- endfor %}
Running it with this Ansible playbook:
---
- hosts: localhost
  connection: local
  tasks:
    - name: generate template
      template:
        src: "main.yaml.j2"
        dest: "output.yaml.j2"
        trim_blocks: False
      register: templating
      vars:
        y: 5
Desired output
---
something:
  blah:
    x: 5
    z: >
      HELLO JOHN
      HELLO AMY
Actual Output
---
something:
  blah:
    x: 5
    z: >
      {%- FOR NAME IN NAMES %}
      HELLO {{ NAME }}
      {%- ENDFOR %}
Best Guess
I think I'm almost there.
I just need a filter which applies a Jinja2 template to text.
i.e. something like:
{{ lookup('file', './other-file.j2') | template | upper | indent(4*2) }}
(But template is not a real filter. Maybe there's another name?)
What else I've tried
{{ include './other-file.j2' | upper | indent(4*2) }}
doesn't work.
fatal: [127.0.0.1]: FAILED! => {"changed": false, "msg": "AnsibleError: template error while templating string: expected token 'end of print statement', got 'string'. String: ---\nsomething:\n blah:\n x: {{ y }}\n {%- set names = [ 'John', 'Amy' ] %}\n z: >\n {{ include './other-file.j2' | upper | indent(4*2) }}"}
{% include './other-file.j2' | upper | indent(4*2) %}
"TemplateNotFound: ./OTHER-FILE.J2"
doesn't work.
Use Case
For context, my use case is that I have a Jinja2 template generating AWS CloudFormation templates.
I'm trying to do it all in YAML, not JSON.
(Because YAML can have comments, and you don't have to worry about whether the last item in a list has a trailing comma, and it's generally easier to read and write and debug.)
Some CloudFormation resources need literal JSON pasted into the YAML file. (e.g. CloudWatch Dashboard bodies).
So I want to have another file in YAML, which Jinja2 converts to json, and pastes into my overall YAML template.
I want this dashboard to be generated with a for loop, and to pass in variables.
I would like to have a separate file for the dashboard body.
Instead of the file plugin
lookup('file', './other-file.j2')
use the template plugin:
lookup('template', './other-file.j2')
Note that the scope of the variable {% set names = ['John', 'Amy'] %} is the template main.yaml.j2. If this variable is used in the template other-file.j2 the command lookup('template', './other-file.j2') will crash with the error:
"AnsibleUndefinedVariable: 'names' is undefined"
Solution
Declare the variable in the scope of the playbook. For example
- template:
    src: "main.j2"
    dest: "output.txt"
  vars:
    names: ['John', 'Amy']
main.j2
{{ lookup('template', './other-file.j2') }}
other-file.j2
{% for name in names %}
Hello {{ name }}
{% endfor %}
gives:
shell> cat output.txt
Hello John
Hello Amy
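Going back to the original MWE, the template lookup can presumably be combined with the filters from the question once names is defined at the playbook level; a sketch along those lines (untested):
z: >
  {{ lookup('template', './other-file.j2') | upper | indent(4*2) }}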

Unable to set true/false as an environment variable's value for Cloud Function

I am writing a Deployment Manager script which creates a Cloud Function and sets some environment variables.
Everything works well apart from the fact that one of my properties/variables is not recognized by the Deployment Manager correctly. I keep on getting an error.
I have a property is-local that I supply from CMD line.
Its value needs to be false/true or I can also live with yes/no.
In the schema file, if I specify the property as boolean and supply the value as false/true, then the deployment starts and only the Cloud Function component fails with an error. I have specified the error as Error#1 below.
If I specify the property as string and supply the value as false/true, then the deployment starts but fails immediately with an error. I have specified the error as Error#2 below.
main.jinja
{% set PROJECT_NAME = env['project'] %}
{% set CODE_BUCKET = properties['code-bucket'] %}
{% set IS_LOCAL = properties['is-local'] %}

resources:
- name: create-cf
  type: create_cloud_function.jinja
  properties:
    name: test-cf
    project: {{ PROJECT_NAME }}
    region: europe-west1
    bucket: {{ CODE_BUCKET }}
    runtime: nodejs10
    entryPoint: test
    topic: test
    environmentVariables: { 'CODE_BUCKET': {{ CODE_BUCKET }}, 'IS_LOCAL': {{ IS_LOCAL }} }
main.jinja.schema
imports:
- path: create_cloud_function.jinja

required:
- code-bucket
- is-local

properties:
  code-bucket:
    type: string
    description: Name of the code bucket to host the code for Cloud Function.
  is-local:
    type: boolean
    description: Will Cloud Function run locally or in cloud.
create_cloud_function.jinja
{% set codeFolder = properties['name'] %}
{% set environmentVariables = properties['environmentVariables'] %}

resources:
#- type: cloudfunctions.v1.function
- type: gcp-types/cloudfunctions-v1:projects.locations.functions
  name: {{ properties['name'] }}
  properties:
    parent: projects/{{ properties['project'] }}/locations/{{ properties['region'] }}
    location: {{ properties['region'] }}
    function: {{ properties['name'] }}
    sourceArchiveUrl: gs://$(ref.{{ properties['bucket'] }}.name)/{{ codeFolder }}.zip
    entryPoint: {{ properties['entryPoint'] }}
    runtime: {{ properties['runtime'] }}
    eventTrigger:
      resource: $(ref.{{ properties['topic'] }}.name)
      eventType: providers/cloud.pubsub/eventTypes/topic.publish
    environmentVariables:
      {% for key, value in environmentVariables.items() %}
      {{ key }}: {{ value }}
      {% endfor %}
Deployment Manager CMD
gcloud deployment-manager deployments create setup --template main.jinja --properties code-bucket:something-random-test-code-bucket,is-local:false
Error#1: - when the property type is boolean in schema file
{"ResourceType":"gcp-types/cloudfunctions-v1:projects.locations.functions","ResourceErrorCode":"400","ResourceErrorMessage":{"code":400,"message":"Invalid value at 'function.environment_variables[1].value' (TYPE_STRING), false","status":"INVALID_ARGUMENT","details":[{"#type":"type.googleapis.com/google.rpc.BadRequest","fieldViolations":[{"field":"function.environment_variables[1].value","description":"Invalid value at 'function.environment_variables[1].value' (TYPE_STRING), false"}]}],"statusMessage":"Bad Request","requestPath":"https://cloudfunctions.googleapis.com/v1/projects/someproject/locations/europe-west1/functions","httpMethod":"POST"}}
Error#2: - when the property type is string in schema file
errors:
- code: MANIFEST_EXPANSION_USER_ERROR
  location: /deployments/setup/manifests/manifest-1571821997285
  message: |-
    Manifest expansion encountered the following errors: Invalid properties for 'main.jinja':
    True is not of type 'string' at ['is-local']
    Resource: main-jinja Resource: config
Any idea what's the issue here?
I'm unfamiliar with Jinja, but from my understanding, environment variables cannot be anything other than strings.
That said, reading Error#1 I'd conclude that, effectively, the variable type has to be string.
Then, in the second error we can clearly see that you are trying to put a boolean where a string is expected.
So yeah, you have to treat true / false as strings.
You can set the value to a string within the Jinja file itself. See this post for some details and this page that provides different methods you can use.
In your case, you can edit the create_cloud_function.jinja file and change:
environmentVariables:
  {% for key, value in environmentVariables.items() %}
  {{ key }} : {{ value }}
to:
environmentVariables:
  {% for key, value in environmentVariables.items() %}
  {{ key }} : {{ value|string }}
Once the manifest is fully expanded, the value should be considered a string for the purpose of the API call to the Cloud Functions API
Eventually what I had to do was pass IS_LOCAL: '''false''' from the command line and keep {{ key }} : {{ value }} in my jinja file.
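For reference, the resulting command line would presumably be something like the following (bucket name reused from the question; exactly how the triple quotes must be escaped may depend on your shell):
gcloud deployment-manager deployments create setup --template main.jinja \
  --properties code-bucket:something-random-test-code-bucket,is-local:'''false'''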
According to this documentation about Using environment variables in Jinja, you should use the following syntax to add an environment var to your templates:
{{ env["deployment"] }} # Jinja
And they show the following example:
- type: compute.v1.instance
  name: vm-{{ env["deployment"] }}
  properties:
    machineType: zones/us-central1-a/machineTypes/f1-micro
    serviceAccounts:
      - email: {{ env['project_number'] }}-compute@developer.gserviceaccount.com
        scopes:
        - ...
Given that you are providing the value of is-local from CMD line, and according to this documentation:
Boolean values are case insensitive, so TRUE, true, and True are treated the same.
AND
To specify multiple properties, provide comma-separated key:value pairs. It does not matter in what order you specify the pairs. For example:
gcloud deployment-manager deployments create my-igm \
  --template vm_template.jinja \
  --properties zone:us-central1-a,machineType:n1-standard-1,image:debian-9
You should use TRUE, true, or True for the is-local param.

Ansible - Print message - debug: msg="line1 \n {{ var2 }} \n line3 with var3 = {{ var3 }}"

In Ansible (1.9.4) or 2.0.0
I ran the following action:
- debug: msg="line1 \n {{ var2 }} \n line3 with var3 = {{ var3 }}"
$ cat roles/setup_jenkins_slave/tasks/main.yml
- debug: msg="Installing swarm slave = {{ slave_name }} at {{ slaves_dir }}/{{ slave_name }}"
tags:
- koba
- debug: msg="1 == Slave properties = fsroot[ {{ slave_fsroot }} ], master[ {{ slave_master }} ], connectingToMasterAs[ {{ slave_user }} ], description[ {{ slave_desc }} ], No.Of.Executors[ {{ slave_execs }} ], LABELs[ {{ slave_labels }} ], mode[ {{ slave_mode }} ]"
tags:
- koba
- debug: msg="print(2 == Slave properties = \n\nfsroot[ {{ slave_fsroot }} ],\n master[ {{ slave_master }} ],\n connectingToMasterAs[ {{ slave_user }} ],\n description[ {{ slave_desc }} ],\n No.Of.Executors[ {{ slave_execs }} ],\n LABELs[ {{ slave_labels }} ],\n mode[ {{ slave_mode }} ])"
tags:
- koba
But this is not printing the variable with new lines (for the 3rd debug action).
The debug module supports arrays, so you can do it like this:
debug:
  msg:
    - "First line"
    - "Second line"
The output:
ok: [node1] => {
    "msg": [
        "First line",
        "Second line"
    ]
}
Or you can use the method from this answer:
In YAML, how do I break a string over multiple lines?
The most convenient way I found to print multi-line text with debug is:
- name: Print several lines of text
  vars:
    msg: |
      This is the first line.
      This is the second line with a variable like {{ inventory_hostname }}.
      And here could be more...
  debug:
    msg: "{{ msg.split('\n') }}"
It splits the message up into an array and debug prints each line as a string. The output is:
ok: [example.com] => {
    "msg": [
        "This is the first line.",
        "This is the second line with a variable like example.com",
        "And here could be more...",
        ""
    ]
}
Thanks to jhutar.
Pause module:
The most convenient and simple way I found to display a message with formatting (e.g. new lines, tabs...) is to use the pause module instead of the debug module:
- pause:
    seconds: 1
    prompt: |
      ======================
      line_1
      line_2
      ======================
You can also include a variable that contains formatting (new lines, tabs...) inside the prompt and it will be displayed as expected:
- name: test
  hosts: all
  vars:
    line3: "\n line_3"
  tasks:
    - pause:
        seconds: 1
        prompt: |
          /////////////////
          line_1
          line_2 {{ line3 }}
          /////////////////
Tip:
When you want to display the output of a command, instead of running an extra task to run the command and register the output, you can use the pipe lookup directly inside the prompt and do the job in one shot:
- pause:
    seconds: 1
    prompt: |
      =========================
      line_1
      {{ lookup('pipe', 'echo "line_2 with \t tab \n line_3 "') }}
      line_4
      =========================
Extra notes regarding the pause module:
If you have multiple hosts, note that the pause task will run only once, against the first host in the list of hosts. This means that if the variable you want to display exists only on part of the hosts, and the first host does not have that variable, you will get an error. To avoid such an issue, use {{ hostvars['my_host']['my_var'] }} instead of {{ my_var }}.
Combining pause with a when conditional might skip the task. Why? Because the task will only run once, against the first host, which might not satisfy the stated when conditions. To avoid this, don't use conditions that constrain the number of hosts; you don't need them anyway, since the task runs only once. Also use hostvars as described above to make sure you get the needed variable regardless of which host is picked.
Example:
Incorrect:
- name: test
  hosts: host1,host2
  vars:
    display_my_var: true
  tasks:
    - when: inventory_hostname == 'host2'
      set_fact:
        my_var: "hi there"
    - when:
        - display_my_var|bool
        - inventory_hostname == 'host2'
      pause:
        seconds: 1
        prompt: |
          {{ my_var }}
This example will skip the pause task, because Ansible picks only the first host, host1, and then starts to evaluate the conditions; when it finds that host1 does not satisfy the second condition, it skips the task.
Correct:
- name: test
  hosts: host1,host2
  vars:
    display_my_var: true
  tasks:
    - when: inventory_hostname == 'host2'
      set_fact:
        my_var: "hi there"
    - when: display_my_var|bool
      pause:
        seconds: 1
        prompt: |
          {{ hostvars['host2']['my_var'] }}
Another example to display messages where the content depends on the host:
- set_fact:
    my_var: "hi from {{ inventory_hostname }}"

- pause:
    seconds: 1
    prompt: |
      {% for host in ansible_play_hosts %}
      {{ hostvars[host]['my_var'] }}
      {% endfor %}
You could use stdout_lines of the registered variable:
- name: Do something
  shell: "ps aux"
  register: result

- debug: var=result.stdout_lines
Suppressing the last empty string of apt with [:-1]
---
- name: 'apt: update & upgrade'
  apt:
    update_cache: yes
    cache_valid_time: 3600
    upgrade: safe
  register: apt

- debug: msg={{ apt.stdout.split('\n')[:-1] }}
The above debug: line results in nice line breaks, due to .split('\n'), and a suppressed last empty string thanks to [:-1]; all of which is Python string manipulation, of course.
"msg": [
"Reading package lists...",
"Building dependency tree...",
"Reading state information...",
"Reading extended state information...",
"Initializing package states...",
"Building tag database...",
"No packages will be installed, upgraded, or removed.",
"0 packages upgraded, 0 newly installed, 0 to remove and 0 not upgraded.",
"Need to get 0 B of archives. After unpacking 0 B will be used.",
"Reading package lists...",
"Building dependency tree...",
"Reading state information...",
"Reading extended state information...",
"Initializing package states...",
"Building tag database..."
]
I dug a bit into Bruce P's answer about piping output through sed, and this is what I came up with:
ansible-playbook [blablabla] | sed 's/\\n/\n/g'
if anyone is interested.
This is discussed here. In short you either need to pipe your output through sed to convert the \n to an actual newline, or you need to write a callback plugin to do this for you.
As a workaround, I used with_items and it kind of worked for me.
- debug: msg="Installing swarm slave = {{ slave_name }} at {{ slaves_dir }}/{{ slave_name }}"
- debug: msg="Slave properties = {{ item.prop }} [ {{ item.value }} ]"
with_items:
- { prop: 'fsroot', value: "{{ slave_fsroot }}" }
- { prop: 'master', value: "{{ slave_master }}" }
- { prop: 'connectingToMasterAs', value: "{{ slave_user }}" }
- { prop: 'description', value: "{{ slave_desc }}" }
- { prop: 'No.Of.Executors', value: "{{ slave_execs }}" }
- { prop: 'LABELs', value: "{{ slave_labels }}" }
- { prop: 'mode', value: "{{ slave_mode }}" }
tags:
- koba
I had a similar problem with a log file that I wanted to print to the console. split("\n") works fine, but it adds a visible \n to each line, so I found a nicer way:
tasks:
  - name: Read recent lines from logfile for service {{ appName }}
    shell: tail -n 1000 {{ logFile }}
    register: appNameLogFile

  - debug:
      msg: "This is a stdout lines"
    with_items: "{{ appNameLogFile.stdout_lines }}"
It iterates over each line of appNameLogFile and, as a side effect, prints each line to the console. You can update it to
msg: "This is a stdout lines: {{ item }}"
but in my case it was not needed.

How to compare a nested pillar key value in an if statement in Jinja2 for SaltStack

I am working on a SaltStack state with some Salt wrapped in Jinja2.
When I attempt to compare a value from a pillar using Jinja2, it appears the argument evaluates to nothing.
If I query the value using the salt CLI, it returns the expected value.
I expect I am referencing the value incorrectly in the if statement with Jinja2.
Here is all the needed info to understand and look at this problem:
The Salt Master id is salt-dev.
The Salt Minion runs on the same instance and its id is also salt-dev.
Here is the pillar top file:
base:
  'salt-dev':
    - docker-daemon.docker-daemon
Here is the nested pillar file located at /srv/pillar/docker-daemon/docker-daemon.sls:
docker-daemon:
  - action: start
  - runlevel: enabled
Here is the output of the salt cli command returning the content of the pillar for the minion salt-dev:
# salt 'salt-dev' pillar.items
salt-dev:
    ----------
    docker-daemon:
        |_
          ----------
          action:
              start
        |_
          ----------
          runlevel:
              enabled
Here is the output for the value I am using in the if statement; it returns nothing with Jinja2, but returns as expected here with the CLI:
# salt 'salt-dev' pillar.get docker-daemon:action
salt-dev:
start
The line of jinja2 that is incorrect is:
{% if salt['pillar.get']('docker-daemon:action') == 'start' %}
It appears salt['pillar.get']('docker-daemon:action') returns nothing in Jinja2, but from the CLI, as shown above, it does return something.
Also, if I add a default value (which is used in the event this argument returns nothing), it also works.
An example of adding a default value is:
{% if salt['pillar.get']('docker-daemon:action', 'def_value') == 'start' %}
I have shown it in context below:
Here is the state file where the if statements are having the same issue:
{% if ( (grains['osfinger'] == 'Oracle Linux Server-6') and (grains['osarch'] == 'x86_64')) %}

sync_docker-init:
  file.managed:
    - name: /etc/init.d/docker
    - source: salt://docker-daemon/templates/docker-init
    - user: root
    - group: root
    - mode: 755

action_docker-init:
{% if salt['pillar.get']('docker-daemon:action') == 'start' %}
  service.running:
{% endif %}
{% if salt['pillar.get']('docker-daemon:action') == 'stop' %}
  service.dead:
{% endif %}
    - name: docker
    - require:
      - pkg: install_docker-engine
    - watch:
      - file: sync_docker-init
{% if salt['pillar.get']('docker-daemon:runlevel') == 'enabled' %}
    - enable: True
{% endif %}
{% if salt['pillar.get']('docker-daemon:runlevel') == 'disabled' %}
    - enable: False
{% endif %}

{% else %}

event.send:
  - tag: 'salt/custom/docker-init/failure'
  - data: "Management of docker init failed, OS not permitted."

{% endif %}
I am quite new to Salt and Jinja2 at the moment, so this is 101 stuff, but I would appreciate some help; I have found nothing for some hours now.
I attempted to echo this value out, and it seems I just get a blank line.
I found the solution.
The pillar file /srv/pillar/docker-daemon/docker-daemon.sls was formed as a list instead of a map.
I changed it to this:
docker-daemon:
  action: restart
  runlevel: disabled
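With the map form, the colon-delimited lookup walks straight through nested dictionaries. A minimal sketch of the condition against the corrected pillar (note the value above is now restart, so the string being compared against has to match whatever the pillar actually contains):
{% if salt['pillar.get']('docker-daemon:action') == 'restart' %}
  service.running:
{% endif %}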

SaltStack: Use directory as source only if it exists

I'd like to know if there's a way of running a SaltStack state only if a directory source is defined in the master.
Basically, I want to allow users to put certain configuration files in their HOME directory. Most of the users won't have anything special, but some of them might (a custom .vimrc, for instance).
What I'd like to do is execute a file.recurse for the user's HOME directory only if that directory exists on the master.
As of now, I have the following:
{% for username in pillar['users'] %}

{{ username }}:
  user:
    - present
    - home: /home/{{ username }}
    [ . . . ] # (yadda, yadda, yadda)

# ToDo: Is there a way of not running this AT ALL if the user's directory is
# not present?
/home/{{ username }}:
  file.recurse:
    - user: {{ username }}
    - group: ubuntu
    - source:
      - salt://users/{{ username }}
    - require:
      - user: {{ username }}

{% endfor %}
What I want to do is what appears in the code as the ToDo: if there's a directory salt://users/{{ username }} in the salt tree (i.e. on the salt master), then execute the file.recurse with that directory as its source. Otherwise, just skip that state and keep whatever default content $HOME has when a user is created (just do nothing, I mean).
I thought adding an onlyif clause to the file.recurse configuration (something like - onlyif: test -e salt://users/{{ username }}) would help... but no. I also tried to create an empty directory in salt://users/empty_dir/ and pass it as a default (as described here for a file.managed), but that trick doesn't work with file.recurse (at least not yet).
Thank you in advance.
You could try something similar to this:
{% for username in pillar['users'] %}

{{ username }}:
  user:
    - present
    - home: /home/{{ username }}
    [ . . . ] # (yadda, yadda, yadda)

# NB!!!! POTENTIAL PERF BOTTLENECK
{% if 'users/' + username in salt['cp.list_master_dirs'](prefix='users') %}
/home/{{ username }}:
  file.recurse:
    - user: {{ username }}
    - group: ubuntu
    - source:
      - salt://users/{{ username }}
    - require:
      - user: {{ username }}
{% endif %}

{% endfor %}
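If in doubt about what the Jinja condition is checking against, cp.list_master_dirs can also be run from the command line to inspect the list of directories the master exposes, for example (the minion id here is just a placeholder):
salt 'my-minion' cp.list_master_dirs prefix=users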