I'm new to Ansible. I have many router config templates to generate, but I want to specify in the playbook which specific template should be generated. I'm showing only two template files here to keep things simple. I'm able to generate configs with my setup and everything works well. However, I don't know how to run just a single template file within the role from the site.yaml playbook. Here's my directory structure:
├── roles
│   └── router
│       ├── tasks
│       │   └── main.yaml
│       ├── templates
│       │   ├── 4331-router.j2
│       │   ├── 881-router.j2
│       │   └── base.j2
│       └── vars
│           └── main.yaml
│
└── site.yaml
Here's how the site.yaml playbook is constructed:
---
- name: Generate Router Configuration Files
  hosts: localhost
  roles:
    - router
Here's the main.yaml in the tasks folder:
---
- name: Generate 4331 configuration files
  template: src=4331-router.j2 dest=~/ansible/configs/{{item.hostname}}.txt
  with_items: "{{ routers_4331 }}"

- name: Generate 881 configuration files
  template: src=881-router.j2 dest=~/ansible/configs/{{item.hostname}}.txt
  with_items: "{{ routers_881 }}"
When I run the playbook it generates all config templates. I want to be able to specify which config template to render, for example: routers_4331 or routers_881.
How can I specify this in the playbook?
I suppose you have a link between each list of hostnames and its .j2 file (the same number, for example).
- find:
    path: "path of templates"   # give the right folder of templates
    file_type: file
    patterns: '[0-9]+-.*?\.j2'  # search files format xxxx-yyyy.j2
    use_regex: yes
  register: result

- set_fact:
    hosting: "{{ hosting | d([]) + _dico }}"
  loop: "{{ result.files | map(attribute='path') | list }}"
  vars:
    _src: "{{ item }}"
    _num: "{{ (_src | basename).split('-') | first }}"
    _grp: "{{ 'routers' ~ '_' ~ _num }}"
    _hosts: "{{ lookup('vars', _grp) }}"
    _dico: >-
      {%- set ns = namespace() -%}
      {%- set ns.l = [] -%}
      {%- for h in _hosts -%}
      {%- if h.update({'pathj2': _src}) -%}{%- endif -%}
      {%- set ns.l = ns.l + [h] -%}
      {%- endfor -%}
      {{ ns.l }}

- name: Generate configuration files
  template:
    src: "{{ item.pathj2 }}"
    dest: ~/ansible/configs/{{ item.hostname }}.txt
  loop: "{{ hosting }}"
The first task selects the .j2 files from the templates folder (those matching the regex pattern). The second task adds the path of the corresponding .j2 file to each host entry from the matching hostname list.
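For illustration, assuming vars/main.yaml defines the two lists below (the hostnames are made up), the set_fact task builds a hosting list that pairs every host with its template, so the final task can loop over it:

routers_4331:
  - hostname: rtr-4331-a
routers_881:
  - hostname: rtr-881-a

# resulting "hosting" fact (the paths depend on where the templates folder actually lives)
hosting:
  - hostname: rtr-4331-a
    pathj2: /path/to/templates/4331-router.j2
  - hostname: rtr-881-a
    pathj2: /path/to/templates/881-router.j2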
I have an Ansible PHP role with defined versions and pools:
php_versions:
  - 8.0
  - 8.1
php_pools:
  - name: wiki
    version: 8.0
    #...
  - name: mail
    version: 8.1
    #...
It produces the files /etc/php/8.0/fpm/pool.d/wiki.conf and /etc/php/8.1/fpm/pool.d/mail.conf plus /etc/php/*/fpm/pool.d/_default_pool.conf, which the role creates by default for every defined PHP version.
Everything works great except deleting and moving pools. If I need to switch the pool wiki from version 8.0 to version 8.1, it crashes, because the old file /etc/php/8.0/fpm/pool.d/wiki.conf stays on disk while the new file /etc/php/8.1/fpm/pool.d/wiki.conf is created in a different PHP directory - it crashes on an IP:port conflict (the address is still used by the old PHP version).
I need to delete all pools on the disk that are not defined in ansible except _default_pool.conf. Default pools must stay.
I tried:
# main.yml
- name: php versions
  include_tasks: php.yml
  loop: "{{ php_versions }}"
  loop_control:
    loop_var: php_version

# php.yml
- name: delete non-ansible pools
  become: True
  block:
    - name: find all existing php pool files
      become: True
      find:
        paths: "/etc/php/{{ inner_item.php_version }}/fpm/pool.d/"
        patterns: '\/etc\/php\/[0-9]\.[0-9]\/fpm\/pool\.d\/(.+(?<!_default_pool))\.conf$'
        use_regex: True
      register: existing_pool_files
    - name: delete non-ansible pool files
      become: True
      file:
        state: absent
        path: "{{ item['path'] }}"
      with_items:
        - "{{ existing_pool_files['files'] | intersect(inner_item.name) }}"
      notify: restart PHP
but it doesn't work.
I can delete all pool files on disk and recreate them. But it sounds stupid.
How can I fix it? After a few hours of debugging, I'm out of ideas :-(
For example, given the lists
php_versions: [8.0, 8.1]
php_pools:
  - {version: 8.0, name: wiki}
  - {version: 8.1, name: mail}
and the tree
shell> tree /etc/php
/etc/php
├── 7.4
│   └── fpm
│       └── pool.d
│           └── keep_this.conf
├── 8.0
│   └── fpm
│       └── pool.d
│           ├── _default_pool.conf
│           ├── mail.conf
│           ├── trash.conf
│           └── wiki.conf
└── 8.1
    └── fpm
        └── pool.d
            ├── _default_pool.conf
            ├── mail.conf
            ├── trash.conf
            └── wiki.conf
Create the list of paths you want to maintain
present_paths_str: |
  {% for ver in php_versions %}
  - /etc/php/{{ ver }}/fpm/pool.d/
  {% endfor %}
present_paths: "{{ present_paths_str|from_yaml }}"
gives
present_paths:
  - /etc/php/8.0/fpm/pool.d/
  - /etc/php/8.1/fpm/pool.d/
Create the list of present files
php_groups: "{{ dict(php_pools|groupby('version')) }}"
present_pool_files_str: |
  {% for ver in php_versions %}
  - /etc/php/{{ ver }}/fpm/pool.d/_default_pool.conf
  {% for i in php_groups[ver] %}
  - /etc/php/{{ ver }}/fpm/pool.d/{{ i.name }}.conf
  {% endfor %}
  {% endfor %}
present_pool_files: "{{ present_pool_files_str|from_yaml }}"
gives
present_pool_files:
  - /etc/php/8.0/fpm/pool.d/_default_pool.conf
  - /etc/php/8.0/fpm/pool.d/wiki.conf
  - /etc/php/8.1/fpm/pool.d/_default_pool.conf
  - /etc/php/8.1/fpm/pool.d/mail.conf
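(For reference, the intermediate php_groups dictionary used above would look roughly like this, sketched from the example data:)

php_groups:
  8.0:
    - {version: 8.0, name: wiki}
  8.1:
    - {version: 8.1, name: mail}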
Declare variables
existing_pool_files: "{{ existing_pool.files|map(attribute='path')|list }}"
delete_pool_files: "{{ existing_pool_files|difference(present_pool_files) }}"
and find files
- name: find all existing php pool files
  become: True
  find:
    paths: "{{ present_paths }}"
    patterns: '*.conf'
  register: existing_pool
gives
existing_pool_files:
  - /etc/php/8.0/fpm/pool.d/wiki.conf
  - /etc/php/8.0/fpm/pool.d/_default_pool.conf
  - /etc/php/8.0/fpm/pool.d/trash.conf
  - /etc/php/8.0/fpm/pool.d/mail.conf
  - /etc/php/8.1/fpm/pool.d/wiki.conf
  - /etc/php/8.1/fpm/pool.d/_default_pool.conf
  - /etc/php/8.1/fpm/pool.d/trash.conf
  - /etc/php/8.1/fpm/pool.d/mail.conf
delete_pool_files:
  - /etc/php/8.0/fpm/pool.d/trash.conf
  - /etc/php/8.0/fpm/pool.d/mail.conf
  - /etc/php/8.1/fpm/pool.d/wiki.conf
  - /etc/php/8.1/fpm/pool.d/trash.conf
Delete the redundant files
- name: delete non-ansible pool files
  become: True
  file:
    state: absent
    path: "{{ item }}"
  loop: "{{ delete_pool_files }}"
  notify: restart PHP
Example of a complete playbook for testing
- hosts: localhost
  vars:
    php_versions: [8.0, 8.1]
    php_pools:
      - {version: 8.0, name: wiki}
      - {version: 8.1, name: mail}
    present_paths_str: |
      {% for ver in php_versions %}
      - /etc/php/{{ ver }}/fpm/pool.d/
      {% endfor %}
    present_paths: "{{ present_paths_str|from_yaml }}"
    php_groups: "{{ dict(php_pools|groupby('version')) }}"
    present_pool_files_str: |
      {% for ver in php_versions %}
      - /etc/php/{{ ver }}/fpm/pool.d/_default_pool.conf
      {% for i in php_groups[ver] %}
      - /etc/php/{{ ver }}/fpm/pool.d/{{ i.name }}.conf
      {% endfor %}
      {% endfor %}
    present_pool_files: "{{ present_pool_files_str|from_yaml }}"
    existing_pool_files: "{{ existing_pool.files|map(attribute='path')|list }}"
    delete_pool_files: "{{ existing_pool_files|difference(present_pool_files) }}"
  tasks:
    - debug:
        var: php_groups
    - debug:
        var: present_paths
    - debug:
        var: present_pool_files
    - name: find all existing php pool files
      become: True
      find:
        paths: "{{ present_paths }}"
        patterns: '*.conf'
      register: existing_pool
    - debug:
        var: existing_pool_files
    - debug:
        var: delete_pool_files
    - name: delete non-ansible pool files
      become: True
      file:
        state: absent
        path: "{{ item }}"
      loop: "{{ delete_pool_files }}"
      notify: restart PHP
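The notify above assumes a handler named restart PHP exists in the role (or, for the test playbook, in a handlers section). A minimal sketch of such a handler, assuming Debian-style php<version>-fpm service names, could be:

handlers:
  - name: restart PHP
    become: True
    service:
      name: "php{{ item }}-fpm"
      state: restarted
    loop: "{{ php_versions }}"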
repo/
├─ cells/
│  ├─ cell1/
│  │  ├─ enabled-sites/
│  │  │  ├─ example2.conf
│  │  │  ├─ example1.conf
│  │  ├─ site.conf
├─ workgroups/
│  ├─ wg1/
│  │  ├─ enabled-sites/
│  │  │  ├─ example3.conf
│  │  │  ├─ example4.conf
│  │  │  ├─ example5.conf
│  │  ├─ site.conf
I'm trying to write a role which will template the first thing it finds.
All of the above exists on the Ansible host, not the remote.
So, in the above example, if repo/cells/cell1/enabled-sites contains files, they'll be templated to the remote machine.
If they don't exist, it would look for repo/cells/cell1/site.conf, which would be templated.
If that didn't exist, it would look for files in repo/workgroups/wg1/enabled-sites.
If that didn't exist, it would look for repo/workgroups/wg1/site.conf.
Finally, if none of those exist, it would use the role's template site.conf.
- name: Configure | Find first enabled-sites
  find:
    paths: "{{ item }}"
  with_first_found:
    - "{{ cvs_path }}/scripts/def/cells/{{ grouping.name }}/enabled-sites/"
    - "{{ cvs_path }}/scripts/def/cells/{{ grouping.name }}/site.conf"
    - "{{ cvs_path }}/scripts/def/workgroups/{{ grouping.workgroup }}/enabled-sites/"
    - "{{ cvs_path }}/scripts/def/cells/{{ grouping.name }}/site.conf"
    - "../templates/site.conf"
  register: esites
  delegate_to: localhost
  run_once: true

- name: Configure | Template out found cell specific enabled-sites
  template:
    src: "{{ item.path }}"
    dest: "{{ httpd_home }}/conf/enabled-sites/{{ item.path | basename }}"
  with_items: "{{ esites.results | map(attribute='files') | list }}"
This only works for the directories; the find doesn't work on the files.
There seems to be another option within the "with_first_found" loop.
Please have a look at the example below.
- name: Include tasks only if one of the files exist, otherwise skip the task
  ansible.builtin.include_tasks:
    file: "{{ item }}"
  with_first_found:
    - files:
        - path/tasks.yaml
        - path/other_tasks.yaml
There is another attribute, "paths", similar to "files"; I'm not sure whether they can be combined.
https://docs.ansible.com/ansible/latest/collections/ansible/builtin/first_found_lookup.html
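According to that lookup's documentation, "files" and "paths" can be combined in a single with_first_found entry. A sketch adapted to the layout above (the destination path is only illustrative; the variables are taken from the question) might look like:

- name: Configure | Template the first site.conf found
  template:
    src: "{{ item }}"
    dest: "{{ httpd_home }}/conf/enabled-sites/site.conf"
  with_first_found:
    - files:
        - site.conf
      paths:
        - "{{ cvs_path }}/scripts/def/cells/{{ grouping.name }}"
        - "{{ cvs_path }}/scripts/def/workgroups/{{ grouping.workgroup }}"
      skip: true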
The best I've come up with is:
- name: Configure | Find first enabled-sites
  find:
    paths: "{{ item }}"
  with_first_found:
    - "{{ cvs_path }}/scripts/def/cells/{{ grouping.name }}/enabled-sites/"
    - "{{ cvs_path }}/scripts/def/workgroups/{{ grouping.workgroup }}/enabled-sites/"
  register: esites
  delegate_to: localhost
  run_once: true
  ignore_errors: true

- name: Configure | Template out found cell specific enabled-sites
  template:
    src: "{{ item.path }}"
    dest: "{{ httpd_home }}/conf/enabled-sites/{{ item.path | basename }}"
  with_items: "{{ esites.results | map(attribute='files') | list }}"
  when: esites.results is defined

- name: Configure | Template out single site.conf
  template:
    src: "{{ item }}"
    dest: "{{ httpd_home }}/conf/enabled-sites/{{ grouping.environment }}{{ grouping.workgroup }}.conf"
  with_first_found:
    - "{{ cvs_path }}/scripts/def/cells/{{ grouping.name }}/site.conf"
    - "{{ cvs_path }}/scripts/def/workgroups/{{ grouping.workgroup }}/site.conf"
    - "site.conf"
  when: esites.results is not defined
It doesn't quite get the ordering correct, but it gets the job done.
Is there a way to use the output values of a module that is located in another folder? Imagine the following environment:
tm-project/
├── lambda
│   └── vpctm-manager.js
├── networking
│   ├── init.tf
│   ├── terraform.tfvars
│   ├── variables.tf
│   └── vpc-tst.tf
├── prd
│   ├── init.tf
│   ├── instances.tf
│   ├── terraform.tfvars
│   └── variables.tf
└── security
    └── init.tf
I want to create EC2 instances and place them in a subnet that is declared in the networking folder. So, I was wondering if by any chance I could access the outputs of the module I used in networking/vpc-tst.tf as inputs to my prd/instances.tf.
Thanks in advance.
You can use an outputs.tf file to define the outputs of a Terraform module. The output exposes a value under the name you give it, as in the example below.
output "vpc_id" {
value = "${aws_vpc.default.id}"
}
These outputs can then be referenced in prd/instances.tf by combining the module name with the output name you defined. For example, if you have a module named vpc that uses this module, you could consume the output like below.
module "vpc" {
......
}
resource "aws_security_group" "my_sg" {
vpc_id = module.vpc.vpc_id
}
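Applied to the layout in the question, one possible sketch (the output name, resource names, and variables below are assumptions, not taken from the actual networking code) is to expose the subnet ID as an output in the networking folder and call that folder as a module from prd:

# networking/outputs.tf (assumed output and resource names)
output "subnet_id" {
  value = aws_subnet.tst.id
}

# prd/instances.tf
module "networking" {
  source = "../networking"
}

resource "aws_instance" "app" {
  ami           = var.ami_id   # assumed variable
  instance_type = "t3.micro"
  subnet_id     = module.networking.subnet_id
}

Note that if networking is also applied on its own as a separate root configuration, reading its state with a terraform_remote_state data source is an alternative to instantiating it twice.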
This is my first time using nested Helm charts and I'm trying to access a global value from the root values.yaml file. According to the documentation I should be able to use the syntax below in my secret.yaml file; however, if I run helm template api --debug I get the following error:
Error: template: api/templates/secret.yaml:7:21: executing "api/templates/secret.yaml" at <.Values.global.sa_json>: nil pointer evaluating interface {}.sa_json
helm.go:84: [debug] template: api/templates/secret.yaml:7:21: executing "api/templates/secret.yaml" at <.Values.global.sa_json>: nil pointer evaluating interface {}.sa_json
/primaryChart/charts/api/templates/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Chart.Name }}-service-account-secret
type: Opaque
data:
  sa_json: {{ .Values.global.sa_json }}
primaryChart/values.yaml
global:
  sa_json: _b64_sa_credentials
Folder structure is as follows:
/primaryChart
|- values.yaml
|-- /charts
    |-- /api
        |-- /templates
            |- secret.yaml
With the following directory layout, .Values.global.sa_json will only be available if you run helm template api . from your main chart directory:
/mnt/c/home/primaryChart> tree
.
├── Chart.yaml            <-- your main chart
├── charts
│   └── api
│       ├── Chart.yaml    <-- your subchart
│       ├── charts
│       ├── templates
│       │   └── secrets.yaml
│       └── values.yaml
├── templates
└── values.yaml           <-- this is where your global.sa_json is defined
Your values file should be named values.yaml, not value.yaml; alternatively, pass any other file explicitly with the -f flag: helm template api . -f value.yaml
/mnt/c/home/primaryChart> helm template api .
---
# Source: primaryChart/charts/api/templates/secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: api-service-account-secret
type: Opaque
data:
  sa_json: _b64_sa_credentials
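For comparison (these invocations are illustrative, assuming the layout above):

# run from the parent chart directory: the parent values.yaml supplies .Values.global to the subchart
helm template api .

# templating the subchart on its own does not see the parent's globals unless a values file is passed explicitly
helm template api ./charts/api -f values.yaml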
I followed the steps line by line in the documentation, but I keep getting this error:
Your WSGIPath refers to a file that does not exist.
Here is my .config file (except for the app name and the keys):
container_commands:
  01_syncdb:
    command: "python manage.py syncdb --noinput"
    leader_only: true

option_settings:
  - namespace: aws:elasticbeanstalk:container:python
    option_name: WSGIPath
    value: [myapp]/wsgi.py
  - option_name: DJANGO_SETTINGS_MODULE
    value: [myapp].settings
  - option_name: AWS_SECRET_KEY
    value: XXXX
  - option_name: AWS_ACCESS_KEY_ID
    value: XXXX
I googled around and found that someone else had a similar problem which they solved by editing 'optionsettings.[myapp]'. I don't want to delete something I need, but here is what I have:
[aws:autoscaling:asg]
Custom Availability Zones=
MaxSize=1
MinSize=1
[aws:autoscaling:launchconfiguration]
EC2KeyName=
InstanceType=t1.micro
[aws:autoscaling:updatepolicy:rollingupdate]
RollingUpdateEnabled=false
[aws:ec2:vpc]
Subnets=
VPCId=
[aws:elasticbeanstalk:application]
Application Healthcheck URL=
[aws:elasticbeanstalk:application:environment]
DJANGO_SETTINGS_MODULE=
PARAM1=
PARAM2=
PARAM3=
PARAM4=
PARAM5=
[aws:elasticbeanstalk:container:python]
NumProcesses=1
NumThreads=15
StaticFiles=/static/=static/
WSGIPath=application.py
[aws:elasticbeanstalk:container:python:staticfiles]
/static/=static/
[aws:elasticbeanstalk:hostmanager]
LogPublicationControl=false
[aws:elasticbeanstalk:monitoring]
Automatically Terminate Unhealthy Instances=true
[aws:elasticbeanstalk:sns:topics]
Notification Endpoint=
Notification Protocol=email
[aws:rds:dbinstance]
DBDeletionPolicy=Snapshot
DBEngine=mysql
DBInstanceClass=db.t1.micro
DBSnapshotIdentifier=
DBUser=ebroot
The user who solved that problem deleted certain lines and then ran 'eb start'. I deleted the same lines the original user said they deleted, but when I ran 'eb start' I got the exact same problem again.
If anybody can help me out, that would be amazing!
I was having this exact problem all day yesterday, and I am using Ubuntu 13.10.
I also tried deleting the options file under .ebextensions to no avail.
What I believe finally fixed the issue was under
~/mysite/requirements.txt
After I was all set and done with eb init and eb start, I double-checked the values and noticed they were different from what http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_Python_django.html mentioned at the beginning of the tutorial.
When I checked during the WSGIPath problem, the file was missing the MySQL line, so I simply added the line:
MySQL-python==1.2.3
and then committed all the changes and it worked.
If that doesn't work for you, below are the .config file settings and the directory structure.
My .config file under ~/mysite/.ebextensions is exactly what was in the tutorial, minus the secret key and access key; you need to replace those with your own:
container_commands:
  01_syncdb:
    command: "django-admin.py syncdb --noinput"
    leader_only: true

option_settings:
  - namespace: aws:elasticbeanstalk:container:python
    option_name: WSGIPath
    value: mysite/wsgi.py
  - option_name: DJANGO_SETTINGS_MODULE
    value: mysite.settings
  - option_name: AWS_SECRET_KEY
    value: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
  - option_name: AWS_ACCESS_KEY_ID
    value: AKIAIOSFODNN7EXAMPLE
My requirements.txt:
Django==1.4.1
MySQL-python==1.2.3
argparse==1.2.1
wsgiref==0.1.2
And my tree structure. This starts out in ~/ so if I were to do
cd ~/
tree -a mysite
You should get the following output (it would also include a bunch of directories under .git, which I removed because there are a lot):
mysite
├── .ebextensions
│   └── myapp.config
├── .elasticbeanstalk
│   ├── config
│   └── optionsettings.mysite-env
├── .git
├── .gitignore
├── manage.py
├── mysite
│   ├── __init__.py
│   ├── __init__.pyc
│   ├── settings.py
│   ├── settings.pyc
│   ├── urls.py
│   ├── wsgi.py
│   └── wsgi.pyc
└── requirements.txt
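For reference, the WSGIPath value mysite/wsgi.py in the .config must point to a file that actually exists relative to the project root, which is exactly what the "refers to a file that does not exist" error is complaining about. The stock Django WSGI entry point generated by startproject (shown here only for completeness) looks like this:

# mysite/wsgi.py -- standard Django WSGI entry point; WSGIPath must resolve to this file
import os

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")

from django.core.wsgi import get_wsgi_application

application = get_wsgi_application()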