I want to use a template to create config files in /etc/network/interfaces.d/,
but I get an error when I use with_items in a template task.
I'm also not sure about the ansible_host.networks reference in with_items.
Thanks!
Inventory.yml:
proxmoxve: # group
  hosts:
    virtu:
      networks:
        internet:
          interface: enp9s0
          mode: manual
          type: interface
        openvswitch:
          interface: vmbr1
          mode: static
          type: ovs_bridge
    sauv:
      networks:
        internet:
          interface: enp38s0
          mode: manual
          type: interface
        openvswitch:
          interface: vmbr1
          mode: static
          type: ovs_bridge
Playbook.yml:
---
- hosts: proxmoxve
  tasks:
    - name: "Install openvswitch with fresh cache"
      apt:
        name: openvswitch-switch
        state: present
        update_cache: yes
    - name: "Set internet interfaces"
      template:
        src: templates/interfaces.j2
        dest: "/etc/network.interfaces.d/{{ item.interface }}"
      whith_items: "{{ ansible_host.networks }}"
Error:
ERROR! conflicting action statements: template, whith_items
The error appears to be in '/home/yanux/dev/ansible-proxmoxve/proxmoxve_config_networks.yml': line 10, column 5, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: "Set internet interface to manual"
^ here
There seems to be a typo: whith_items should be with_items.
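Beyond that typo, a minimal corrected task might look like the sketch below. It assumes the per-host networks dict from the inventory above (so the variable is networks, not ansible_host.networks), converts it to a list with dict2items, and writes to /etc/network/interfaces.d/ as described in the question; treat it as a sketch, not a drop-in fix:

    - name: "Set internet interfaces"
      template:
        src: templates/interfaces.j2
        # the question mentions /etc/network/interfaces.d/ (note the slash instead of the dot)
        dest: "/etc/network/interfaces.d/{{ item.value.interface }}"
      # networks is a dict keyed by network name, so convert it to a list of key/value pairs
      loop: "{{ networks | dict2items }}"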
Related
I have created an app cluster deployment on AWS EKS that is deployed using Helm. For proper operation of my app, I need to set environment variables, which are secrets stored in AWS Secrets Manager. Following a tutorial, I set up my values in the values.yaml file something like this:
secretsData:
  secretName: aws-secrets
  providerName: aws
  objectName: CodeBuild
Now I have created a secrets provider class as AWS recommends: secret-provider.yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: aws-secret-provider-class
spec:
  provider: {{ .Values.secretsData.providerName }}
  parameters:
    objects: |
      - objectName: "{{ .Values.secretsData.objectName }}"
        objectType: "secretsmanager"
        jmesPath:
          - path: SP1_DB_HOST
            objectAlias: SP1_DB_HOST
          - path: SP1_DB_USER
            objectAlias: SP1_DB_USER
          - path: SP1_DB_PASSWORD
            objectAlias: SP1_DB_PASSWORD
          - path: SP1_DB_PATH
            objectAlias: SP1_DB_PATH
  secretObjects:
    - secretName: {{ .Values.secretsData.secretName }}
      type: Opaque
      data:
        - objectName: SP1_DB_HOST
          key: SP1_DB_HOST
        - objectName: SP1_DB_USER
          key: SP1_DB_USER
        - objectName: SP1_DB_PASSWORD
          key: SP1_DB_PASSWORD
        - objectName: SP1_DB_PATH
          key: SP1_DB_PATH
I mount this secret object in my deployment.yaml; the relevant section of the file looks like this:
volumeMounts:
  - name: secrets-store-volume
    mountPath: "/mnt/secrets"
    readOnly: true
env:
  - name: SP1_DB_HOST
    valueFrom:
      secretKeyRef:
        name: {{ .Values.secretsData.secretName }}
        key: SP1_DB_HOST
  - name: SP1_DB_PORT
    valueFrom:
      secretKeyRef:
        name: {{ .Values.secretsData.secretName }}
        key: SP1_DB_PORT
Further down in the same deployment file, I define secrets-store-volume as:
volumes:
  - name: secrets-store-volume
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: aws-secret-provider-class
All drivers are installed into the cluster and permissions are set accordingly.
With helm install mydeployment helm-folder/ --dry-run I can see that all the files and values are populated as expected. Then with helm install mydeployment helm-folder/ I install the deployment into my cluster, but with kubectl get all I can see the pod is stuck at Pending with the warning Error: 'aws-secrets' not found, and it eventually times out. In the AWS CloudTrail log, I can see that the cluster made a request to access the secret and there was no error fetching it. How can I solve this, or how can I debug it further? Thank you for your time and efforts.
Error: 'aws-secrets' not found - it looks like the CSI driver isn't creating the Kubernetes secret that you're using to reference the values.
Since the YAML files look correct, I would say it's probably the CSI driver's "Sync as Kubernetes secret" configuration: syncSecret.enabled (which is false by default).
So make sure that secrets-store-csi-driver runs with this flag set to true, for example:
helm upgrade --install csi-secrets-store \
  --namespace kube-system secrets-store-csi-driver/secrets-store-csi-driver \
  --set grpcSupportedProviders="aws" --set syncSecret.enabled="true"
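One thing worth knowing when verifying this: even with syncSecret.enabled=true, the driver only creates the Kubernetes secret once a pod that mounts the CSI volume is actually scheduled on a node. A quick way to check after redeploying (namespace and pod name are placeholders here) could be:

kubectl get secret aws-secrets -n <your-namespace>
kubectl describe pod <your-pod> -n <your-namespace>   # the events show CSI mount / secret sync errors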
So I am trying to take values from a file, let's call it "test.yaml".
The file looks like this (sorry for the long output, but it is the shortest excerpt containing all the patterns and structure):
---
results:
  - failed: false
    item: XXX.XX.XX.XX
    invocation:
      module_args:
        validate_certs: false
        vm_type: vm
        show_tag: false
        username: DOMAIN\domain-user
        proxy_host:
        proxy_port:
        show_attribute: false
        password: VALUE_SPECIFIED_IN_NO_LOG_PARAMETER
        port: XXX
        folder:
        hostname: XXX.XX.XX.XX
    changed: false
    virtual_machines:
      - ip_address: XXX.XX.XX.XX
        mac_address:
          - XX:XX:XX:aa:XX:XX
        uuid: XXXX-XX-XX-XXXX-XXXXX
        guest_fullname: Red Hat Enterprise Linux X (XX-bit)
        moid: vm-XXX
        folder: "/DOMAIN-INTERXION/vm"
        cluster:
        attributes: {}
        power_state: poweredOn
        esxi_hostname: esx.hostname
        tags: []
        guest_name: VMnameXX
        vm_network:
          XX:XX:XX:aa:XX:XX:
            ipv6:
              - XX::XXX:XX:XXXX
            ipv4:
              - XXX.XX.XX.XX
I would like, for example, to have something like:
results.invocation.virtual_machines.ip_address
results.invocation.module_args.user_name
I tried all kinds of things but it doesn't work :)
My last attempt is this:
---
- name: demo how register works
  hosts: localhost
  tasks:
    - name: Include all .json and .jsn files in vars/all and all nested directories (2.3)
      include_vars:
        file: test.yml
        name: vm
    - name: debug
      debug:
        msg: "{{ item.0.item }}"
      with_subelements:
        - "{{ vm.results }}"
        - virtual_machines
      register: subelement
Following your structure, and after fixing some errors:
results.invocation.virtual_machines.ip_address is really results[0].virtual_machines[0].ip_address
and
results.invocation.module_args.user_name is really results[0].invocation.module_args.username
(results and virtual_machines are arrays; writing results[0] or results.0 is the same thing).
So here is a sample playbook that does the job:
- name: vartest
  hosts: localhost
  tasks:
    - name: Include all .json and .jsn files in vars/all and all nested directories (2.3)
      include_vars:
        file: test.yml
        name: vm
    - name: ip
      set_fact:
        ipadress: "{{ vm.results[0].virtual_machines[0].ip_address }}"
    - name: username
      set_fact:
        username: "{{ vm.results[0].invocation.module_args.username }}"
    - name: display
      debug:
        msg: "ip: {{ ipadress }} and username: {{ username }}"
Result:
ok: [localhost] =>
  msg: 'ip: XXX.XX.XX.XX and username: DOMAIN\domain-user'
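If you need the values for every VM rather than just the first one, a small loop sketch (assuming the same vm variable loaded with include_vars above) could look like this; the message format is my own:

    - name: list all VM IPs
      debug:
        # item.0 is the element of results, item.1 is each entry of virtual_machines
        msg: "{{ item.1.guest_name }} -> {{ item.1.ip_address }}"
      with_subelements:
        - "{{ vm.results }}"
        - virtual_machines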
In one stage of my main pipeline, I call the same (deployment) template twice with slightly different data:
# pipeline.yml
- stage: dev
  condition: and(succeeded(), eq('${{ parameters.environment }}', 'dev'))
  variables:
    getCommitDate: $[ stageDependencies.prepare_date.set_date.outputs['setCommitDate.rollbackDate'] ]
  jobs:
    - template: mssql/jobs/liquibase.yml#templates
      parameters:
        command: update
        username: $(username_dev)
        password: $(password_dev)
        environment: exampleEnv
        databaseName: exampleDB
        databaseIP: 123456789
        context: dev
        checkoutStep:
          bash: git checkout ${{parameters.commitHash}} -- ./src/main/resources/objects
    - template: mssql/jobs/liquibase.yml#templates
      parameters:
        command: rollbackToDate $(getCommitDate)
        username: $(username_dev)
        password: $(password_dev)
        environment: exampleEnv
        databaseName: exampleDB
        databaseIP: 123456789
        context: dev
# template.yml
parameters:
  - name: command
    type: string
  - name: environment
    type: string
  - name: username
    type: string
  - name: password
    type: string
  - name: databaseName
    type: string
  - name: databaseIP
    type: string
  - name: context
    type: string
  - name: checkoutStep
    type: step
    default:
      checkout: self

jobs:
  - deployment: !MY PROBLEM!
    pool:
      name: exampleName
      demands:
        - agent.name -equals example
    environment: ${{ parameters.environment }}
    container: exampleContainer
    strategy:
      runOnce:
        deploy:
          steps:
            ...
My problem is that the deployment cannot have the same name twice.
It is not possible to use ${{parameters.command}} to distinguish the deployment names, because it contains forbidden characters, yet ${{parameters.command}} is the only thing that differs between the two calls.
My question is whether it is possible to distinguish the deployment names in some way other than passing another parameter (e.g. jobName:). I have tried various conditions and predefined variables but without success.
Additionally, I should add dependsOn so that the second template is guaranteed to run after the first.
It is not possible, because getCommitDate, and thus the command parameter in your second template call, contains a runtime expression, while a job name needs a compile-time expression. So if you used command as the job name, at compile time you would have rollbackToDate $(getCommitDate) there.
To solve this issue, the job identifier should be empty in the template:
- job: # Empty identifier
More information is available HERE.
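If you do end up needing dependsOn between the two expansions, explicit names are unavoidable, and the usual workaround is still an extra compile-time template parameter for the job identifier. A rough sketch under that assumption (the jobName parameter and the values liquibase_update / liquibase_rollback are my own naming, not part of the original template; other parameters are omitted for brevity):

# template.yml (sketch): add a compile-time jobName parameter
parameters:
  - name: jobName
    type: string
    default: liquibase

jobs:
  - deployment: ${{ parameters.jobName }}   # compile-time expression, so it is a valid identifier
    environment: ${{ parameters.environment }}
    strategy:
      runOnce:
        deploy:
          steps:
            - ${{ parameters.checkoutStep }}

# pipeline.yml (sketch): give each expansion its own identifier
jobs:
  - template: mssql/jobs/liquibase.yml#templates
    parameters:
      jobName: liquibase_update
      command: update
  - template: mssql/jobs/liquibase.yml#templates
    parameters:
      jobName: liquibase_rollback
      command: rollbackToDate $(getCommitDate)

With distinct identifiers, the second job can also declare dependsOn: liquibase_update (passed through one more parameter, for example) so it always runs after the first.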
I have an Ansible task running on all my api_servers, and I would like to restrict it to run on only one IP (one of the api_servers).
I have added run_once: true but it didn't help.
Kindly advise.
EDIT:
Will the below work? I have 10 instances of app_servers running, and I want the task to run on only one app_server:
run_once: true
when:
  - inventory_hostname == groups['app_servers'][0]
where my inventory file looks like:
[app_servers]
prod_app_[1:4]
I would write my playbook like this:
---
# Run on all api_servers
- hosts: api_servers
  tasks:
    - name: do something on all api_servers
      # ...

# Run only on one api_server, e.g. api_server_01
- hosts: api_server_01
  tasks:
    - name: Gather data from api_server_01
      # ...

The other option would be to work with when: or to run the playbook with the --limit option.
---
- hosts: all
  tasks:
    - name: do something only when on api_server_01
      # ...
      when: inventory_hostname == "api_server_01"
EDIT:
Here you will see all the options in one example:
---
- hosts: all
  tasks:
    - debug: msg="run_once"
      run_once: true
    - debug: msg=all
    - debug: msg="run on the first of the group"
      when: inventory_hostname == groups['app_servers'][0]

# Option with separated hosts; this one will be faster if gather_facts is not tuned.
- hosts: app_servers[0]
  tasks:
    - debug: msg="run on the first of the group"
(Since I cannot comment, I have to answer.)
What about delegate_to, where you delegate the task to a specific host?
- hosts: all
  tasks:
    - name: Install vim on specific host
      package:
        name: vim
        state: present
      delegate_to: staging_machine
Or
As #user2599522 mentioned: --limit is also an option to use:
You can also limit the hosts you target on a particular run with the --limit flag. (Patterns and ansible-playbook flags)
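For example, limiting a run to the first host of the group could look like this (site.yml is a placeholder for your playbook file):

ansible-playbook site.yml --limit "app_servers[0]"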
With Test Kitchen, in the yaml configs... where is the best place to store globally used attributes that apply to multiple platforms and multiple suites?
To use my .kitchen.yml as an example:
---
provisioner:
  name: chef_solo

platforms:
  - name: centos-6.5
    driver:
      name: vagrant
  - name: amazon
    driver:
      name: ec2
      image_id: ami-ed8e9284
      flavor_id: t2.medium
      aws_ssh_key_id: <snip>
      ssh_key: <snip>
      availability_zone: us-east-1a
      subnet_id: subnet-<snip>
      require_chef_omnibus: true
      iam_profile_name: <snip>
      ebs_delete_on_termination: true
      security_group_ids: sg-<snip>

# area in question (does not work here)
attributes:
  teamcity:
    server: 'build.example.com'
    port: 80
    username: 'example'
    password: 'example'
# end area in question

suites:
  - name: resin4
    run_list:
      - recipe[example_server::resin4]
      - recipe[example_server::deploy_all_artifacts]
  - name: deploy
    run_list:
      - recipe[example_server::deploy_all_artifacts]
  - name: default
    run_list:
      - recipe[example_server::elasticsearch]
      - recipe[example_server::resin4]
      - recipe[example_server::deploy_all_artifacts]
I know there are other kitchen files, such as ~/kitchen/config.yml and .kitchen.local.yml, but I've been unable to find a way to get attributes to apply to all platforms and suites. Is copying and pasting the attributes into each platform the best way?
Is there a reason to specify these attributes in Kitchen's YAML and not in recipe[example_server::deploy_all_artifacts]? If necessary, you could set overrides in Kitchen.
Also, this post might be helpful: Access Attributes Across Recipes
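If you do want to keep them in .kitchen.yml without repeating yourself, one option (a sketch, assuming plain YAML anchors are acceptable in your setup; the anchor name default_attributes is my own) is to define the attributes once on the first suite and reuse them elsewhere:

suites:
  - name: resin4
    run_list:
      - recipe[example_server::resin4]
      - recipe[example_server::deploy_all_artifacts]
    attributes: &default_attributes   # define once
      teamcity:
        server: 'build.example.com'
        port: 80
        username: 'example'
        password: 'example'
  - name: deploy
    run_list:
      - recipe[example_server::deploy_all_artifacts]
    attributes: *default_attributes   # reuse

The anchor is resolved by the YAML parser before Test Kitchen reads the file, so every suite ends up with the same attributes without duplicating the block.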