I am trying to add some keys to root's authorized_keys on the instance, but it looks like the list is being overwritten and only the last key sticks.
Does anyone know how to fix this?
- name: Set authorized key
  authorized_key:
    user: root
    state: present
    key: "{{ item }}"
  loop: "{{ keys }}"
The vars file is:
keys:
  - "https://gitlab.com/user1.keys"
  - "https://github.com/user2.keys"
By default, authorized_key does not remove non-specified keys from the authorized_keys file; see the exclusive parameter. Also check what data you feed key with.
From the documentation, under the exclusive option:
Whether to remove all other non-specified keys from the
authorized_keys file. Multiple keys can be specified in a single key
string value by separating them by newlines. This option is not loop
aware, so if you use with_ , it will be exclusive per iteration of the
loop. If you want multiple keys in the file you need to pass them all
to key in a single batch as mentioned above.
This means that you can achieve what you want by using a Jinja join filter on your array:
- name: Set authorized key
  authorized_key:
    user: root
    state: present
    key: "{{ keys | join('\n') }}"
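Jinja's join filter behaves like Python's str.join, so a quick sketch of the string that authorized_key ends up receiving:

```python
keys = [
    "https://gitlab.com/user1.keys",
    "https://github.com/user2.keys",
]

# join('\n') in the template produces a single newline-separated string,
# so authorized_key receives all keys in one batch instead of one per loop
batched = "\n".join(keys)
```

Because the module gets one multi-line value, every key listed in the variable survives a single task run.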
I have an IstioOperator deployment with logs enabled in JSON format:
spec:
  meshConfig:
    accessLogFile: /dev/stdout
    accessLogEncoding: JSON
No specific accessLogFormat is defined, so the default one applies:
[%START_TIME%] \"%REQ(:METHOD)% %REQ(X-ENVOY-ORIGINAL-PATH?:PATH)% %PROTOCOL%\" %RESPONSE_CODE% %RESPONSE_FLAGS% %RESPONSE_CODE_DETAILS% %CONNECTION_TERMINATION_DETAILS%
\"%UPSTREAM_TRANSPORT_FAILURE_REASON%\" %BYTES_RECEIVED% %BYTES_SENT% %DURATION% %RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)% \"%REQ(X-FORWARDED-FOR)%\" \"%REQ(USER-AGENT)%\" \"%REQ(X-REQUEST-ID)%\"
\"%REQ(:AUTHORITY)%\" \"%UPSTREAM_HOST%\" %UPSTREAM_CLUSTER% %UPSTREAM_LOCAL_ADDRESS% %DOWNSTREAM_LOCAL_ADDRESS% %DOWNSTREAM_REMOTE_ADDRESS% %REQUESTED_SERVER_NAME% %ROUTE_NAME%\n
However, what I want is to add another field at the end of the log, named PATH_MAIN, which is derived from the original path attribute but, based on some regex patterns (already figured out), alters some values, such as redacting GUIDs.
My question is: how can I, if possible, define a new field in the log format that takes another field as its attribute and derives its value from a regex?
In one of my deployment files, I want to set an environment variable. The variable is KUBE_VERSION and its value must be fetched from a ConfigMap.
kube_1_21: 1.21.10_1550
This is part of ConfigMap where I want to set 1.21.10_1550 to KUBE_VERSION, but if the cluster is of IKS 1.20, then the key will be:
kube_1_20: 1.20.21_3456
kube_ is always static. How can I set environment variable using a regex expression?
Something of this sort:
- name: KUBE_VERSION
  valueFrom:
    configMapKeyRef:
      name: cluster-info
      key: "kube_1*"
As far as I know, it is unfortunately not possible to use a regular expression the way you would like. Additionally, the API tells you which regular expression is used to validate the entered data:
regex used for validation is '[-._a-zA-Z0-9]+'
It follows that you have to enter key as an alphanumeric string, which may additionally contain the characters -, _ and ., so it is not possible to use a regex in this place.
As a workaround, you can write a custom script, e.g. in Bash, and replace the proper line with a sed command.
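The key lookup itself is easy to script. A minimal Python sketch of the matching step, assuming the ConfigMap data has already been fetched (the dict contents here are stand-ins for the real ConfigMap):

```python
import re

# Hypothetical stand-in for the ConfigMap data section
# (in reality fetched with e.g. `kubectl get configmap cluster-info -o yaml`)
configmap_data = {
    "kube_1_21": "1.21.10_1550",
    "some_other_key": "ignored",
}

# "kube_" is static; the version suffix varies per cluster
pattern = re.compile(r"^kube_\d+_\d+$")
kube_version = next(v for k, v in configmap_data.items() if pattern.match(k))
```

The resolved value can then be substituted into the deployment file before applying it.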
I have a variable declared, and I am trying to get its value to show up in the key when using the ec2_instance_info Ansible module. I want the value to show up under tags. Please view the dummy code below.
vars:
  tag_key: Key
  tag_value: Value
tasks:
  - name:
    ec2_instance_info:
      filters:
        "tag: {{ tag_key }}": "{{ tag_value }}"
I want the above to output as:
tag:Key:Value
But instead it comes out as:
tag:{{ tag_key }}:Value
As a result, when I run the commands, it doesn't call any instances, since they're searching for the wrong thing. The code works fine when I swap the variables out for regular strings. (I'm aware the syntax is probably wrong in the dummy code, I've tried a bunch of things now.)
I attempted the following: Ansible variable in key/value key And while it works in displaying the variables, it now registers as a dict and I get the error:
Invalid type for parameter Filters[0].Values, value: {'Key': 'Value'}, type: <type 'dict'>, valid types: <type 'list'>, <type 'tuple'>"
So I guess I'm looking for either a way to use variables in key names without it turning to a dict, and if that's not available, to transform that into a list. Thanks in advance.
The filters option of the ec2_instance_info module requires a dict. So one way to supply that dict is to create one in vars:.
Something like:
vars:
  ec2_filters:
    "tag:Name": "my-instance-1"
tasks:
  - ec2_instance_info:
      filters: "{{ ec2_filters }}"
    register: ec2_out
  - debug:
      var: ec2_out
Or pass the nested variables as a dict inside filters:
vars:
  tag_key: Key
  tag_value: Value
tasks:
  - ec2_instance_info:
      filters: '{ "tag:{{ tag_key }}": "{{ tag_value }}" }'
    register: ec2_out
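Why the inline form works: Ansible first renders the Jinja template into a string and then evaluates strings that look like dicts. A rough Python sketch of that two-step process (the Jinja rendering is simulated here with %-formatting):

```python
import ast

tag_key, tag_value = "Key", "Value"

# Step 1: Jinja rendering turns the template into plain text
rendered = '{ "tag:%s": "%s" }' % (tag_key, tag_value)

# Step 2: Ansible evaluates dict-shaped strings into real dicts,
# which is what the filters option expects
filters = ast.literal_eval(rendered)
```

The variable key is interpolated during rendering, before the dict exists, so it ends up as a real dict key rather than a literal "{{ tag_key }}".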
I've written a piece of code that adds and retrieves entities from the Datastore based on one filter (and order on the same property) - that worked fine. But when I tried adding filters for more properties, I got:
PreconditionFailed: 412 no matching index found. recommended index is:
- kind: Temperature
  properties:
  - name: DeviceID
  - name: created
Eventually I figured out that I need to create index.yaml. Mine looks like this:
indexes:
- kind: Temperature
  ancestor: no
  properties:
  - name: ID
  - name: created
  - name: Value
And it seems to be recognised, as the console shows that it has been updated.
Yet when I run my code (specifically the part below, which filters on two properties), it still fails with the above-mentioned error (the code is running on Compute Engine).
query.add_filter('created', '>=', newStart)
query.add_filter('created', '<', newEnd)
query.add_filter('DeviceID', '=', devID)
query.order = ['created']
Trying to run the same query on the console produces
Your Datastore does not have the composite index (developer-supplied) required for this query.
error. Search showed one other person who had the same issue and he managed to fix it by changing the order of the properties in the index.yaml, but that is not helping in my case. Has anybody encountered a similar problem or could help me with the solution?
You'll need to create the exact index suggested in the error message:
- kind: Temperature
  ancestor: no
  properties:
  - name: DeviceID
  - name: created
Specifically, the first property in the index needs to be DeviceID instead of ID and the last property in the index needs to be the one you're using in the inequality filter (so you can't have Value as the last property in the index).
I need something like (ansible inventory file):
[example]
127.0.0.1 timezone="Europe/Amsterdam" locales="en_US","nl_NL"
However, Ansible does not recognize 'locales' as a list.
You can pass a list or object like this:
[example]
127.0.0.1 timezone="Europe/Amsterdam" locales='["en_US", "nl_NL"]'
With complex variables, it's best to define them in a host_vars file rather than in the inventory file, since host_vars files support YAML syntax.
Try creating a host_vars/127.0.0.1 file with the following content:
---
timezone: Europe/Amsterdam
locales:
  - en_US
  - nl_NL
Ryler's answer is good in this specific case but I ran into problems using other variations with the template module.
[example]
127.0.0.1 timezone="Europe/Amsterdam" locales='["en_US", "nl_NL"]'
is his original example and works fine.
The following variations work with template. Basically, if an item is a string, you must remember to use internal double quotes, or the entire structure is parsed as a single string. If the items are only numbers, or True or False (not yes), then you're fine. In this variation I couldn't make it work with template when the value had external quotes.
I haven't done an exhaustive check of which internal use cases they do and do not break other than the template module.
I am using Ansible 2.2.1.
[example:vars]
# these work
myvar1=["foo", "bar"]
myvar2=[1,2]
myvar3=[True,False]
# These fail, they get interpreted as a single string.
myvar4=[yes, no]
myvar5=[foo,bar]
myvar6='["foo", "bar"]'
You can try split:
# inventory file
[example]
127.0.0.1 timezone="Europe/Amsterdam" locales="en_US","nl_NL"

# role file
---
- debug: msg="{{ item }}"
  with_items: "{{ locales.split(',') }}"
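What the split does, sketched in plain Python: the quoted inventory value arrives in Ansible as one comma-separated string, not a list, and split(',') inside the template turns it back into a list.

```python
# The inventory line locales="en_US","nl_NL" is parsed into this single string
locales = "en_US,nl_NL"

# split(',') recovers the individual items for with_items to iterate over
items = locales.split(',')
```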
I believe it depends on where you define your variable.
If it is under a
[host:vars]
var=["a", "b"]
Otherwise:
[hosts]
host1 var='["a", "b"]'
An INI file with variables looks like this:
$ cat ./vars/vars.yml
lvol_names=['2g-1','2g-2','2g-3']
The variable represents a list:
lvol_names:
  - 2g-1
  - 2g-2
  - 2g-3
The variable can be read from a playbook via an ini lookup:
$ cat ./play.yml
- name: play1
  hosts: kub2_data_nodes
  become: yes
  vars:
    - lvol_names: "{{ lookup('ini', 'lvol_names type=properties file=./vars/vars.yml') }}"
You can write a custom filter to split a string into a list.
The Ansible examples on GitHub show how to create a custom filter.
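Such a filter plugin can be sketched in a few lines of Python. The file path and the filter name split_string below are assumptions, not a standard filter; Ansible discovers any callable returned by a FilterModule class in a filter_plugins/ directory next to the playbook:

```python
# filter_plugins/split_filter.py (hypothetical path and file name)
# A minimal custom Ansible filter that splits a string into a list.

def split_string(value, separator=','):
    """Split a string on separator and strip whitespace from each item."""
    return [part.strip() for part in str(value).split(separator)]


class FilterModule(object):
    """Ansible discovers custom filters through this class."""

    def filters(self):
        return {'split_string': split_string}
```

In a template you could then write "{{ locales | split_string }}" to get a proper list.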