I'm trying to use a regex comparison in the workflow:rules section of my gitlab-ci file, but it doesn't seem to be working. Here is a basic version:
stages:
  - prep
variables:
  VAR1: "no value"
  APPURL: "no value"
workflow: #Goal: only run pipeline for push events set some variables based on branch/commit_ref_name
  rules:
    - if: "CI_PIPELINE_SOURCE == "push"
    - if: "CI_COMMIT_REF_NAME =~ /^dev/
      variables:
        VAR1: "Dev Value"
        APPURL: "https://devurl.com"
test_job:
  stage: prep
  image: runner.image/url
  script:
    - echo "$VAR1"
    - echo "APPURL"
When I push a change from a branch named something like "dev1-jirastory", the test job output says "no value" for both variables. So it's not catching the commit_ref_name rule for some reason.
Can someone tell me whether you can use regex comparisons in workflow:rules statements? Everything I've found so far refers to job rules. Since I want these variables set for multiple jobs, I want to set them once for the entire workflow and its subsequent jobs, rather than repeating the same rules in every single job, which can grow and become unmanageable.
I did try making those value determinations in a root "before_script" section, but that gets overwritten whenever an individual job needs its own before_script actions, so that won't work for me either.
Lastly, if anyone can tell me if I can do any "command" statements for parsing the commit_ref_name, that would be great. I'd love to do something like:
"$CI_COMMIT_REF_NAME" | awk -F "-" '{print $1}')
to pull out the "dev1" portion of the ref name like my sample above for use in jobs as well.
Thanks in advance.
Using variables in workflow works, just a couple of small changes needed for your pipeline:
stages:
  - prep
variables:
  VAR1: "no value"
  APPURL: "no value"
workflow: #Goal: only run pipeline for push events set some variables based on branch/commit_ref_name
  rules:
    - if: '$CI_PIPELINE_SOURCE == "push"'
    - if: '$CI_COMMIT_REF_SLUG =~ /^dev/'
      variables:
        VAR1: "Dev Value"
        APPURL: "https://devurl.com"
test_job:
  stage: prep
  script:
    - echo "$VAR1"
    - echo "$APPURL"
You were missing the $ before the variables in your rules clauses. I would also recommend using $CI_COMMIT_REF_SLUG instead of $CI_COMMIT_REF_NAME so the comparison does not depend on the case of the branch name.
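As for pulling the "dev1" portion out of the ref name: rules expressions can't run shell commands, so the awk has to happen inside a job's script. A rough sketch, with made-up names (prep_vars, BRANCH_PREFIX, prep.env), that also hands the value to later jobs through a dotenv artifact, could look like this:

prep_vars:
  stage: prep
  script:
    # "dev1-jirastory" -> "dev1": split the ref name on "-" and keep the first field
    - BRANCH_PREFIX=$(echo "$CI_COMMIT_REF_NAME" | awk -F '-' '{print $1}')
    - echo "BRANCH_PREFIX=$BRANCH_PREFIX" >> prep.env
  artifacts:
    reports:
      dotenv: prep.env   # jobs in later stages then pick up BRANCH_PREFIX as a variable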
Here is the output that I'm trying to parse:
hostname#show bgp vrf vrfname summary | i 1.1
BGP Route Distinguisher: 1.1.1.1:0
BGP router identifier 1.1.1.1, local AS number 2222
1.1.1.3   0 64512 349608 316062 896772 0 0 2w4d 1
I have the following regex that successfully matches just the last line. Now I need to split that line and view the last index. In this case it is "1", but I will want to fail if that value is "0".
- name: debug test
  debug:
    msg: "{{show_bgp_sessions.data | regex_findall('\\d+\\.\\d+\\.\\d+\\.\\d+\\s\\s.*')}}"
I tried adding a split in a couple different formats at the end of the "msg" line so that I can grab the last index to compare it in the failed_when statement:
msg: "{{show_bgp_sessions.data | regex_findall('\\d+\\.\\d+\\.\\d+\\.\\d+\\s\\s.*') | split(' ')}}"
But I'm getting the following error msg:
"template error while templating string: no filter named 'split'. String:
I've also tried to use a few different forms of "ends_with" to verify the last index in the string as I've used that a lot in my python experience, but I can't get it to work in ansible.
I can't create a new task to parse the data and perform the split separately because I need to run this verification through a loop.
Once you have selected the line, reverse the string, split it, and take the first item. For example
msg: "{{ (my_line|reverse).split()|first }}"
Possibly the regex provided by @Thefourthbird is a better solution.
But for your issue at hand: it is caused by the fact that there is indeed no split filter in Jinja, see the list here: https://jinja.palletsprojects.com/en/2.11.x/templates/#list-of-builtin-filters.
The reason there is no such filter is simple: split() is a method of the Python string, and since Jinja is Python underneath, you can just use it as is.
Also note that, since regex_findall is meant for multiple matches, you'll have to select the first element of the resulting list, for example with the filter first.
So your message ends up being:
msg: >-
  {{
    (
      show_bgp_sessions.data
      | regex_findall('\\d+\\.\\d+\\.\\d+\\.\\d+\\s\\s.*')
      | first
    ).split()
  }}
Given the playbook:
- hosts: all
  gather_facts: no
  vars:
    show_bgp_sessions:
      data: |
        hostname#show bgp vrf vrfname summary | i 1.1
        BGP Route Distinguisher: 1.1.1.1:0
        BGP router identifier 1.1.1.1, local AS number 2222
        1.1.1.3   0 64512 349608 316062 896772 0 0 2w4d 1
  tasks:
    - debug:
        msg: >-
          {{
            (
              show_bgp_sessions.data
              | regex_findall('\\d+\\.\\d+\\.\\d+\\.\\d+\\s\\s.*')
              | first
            ).split()
          }}
Gives the recap:
TASK [debug] ***************************************************************
ok: [localhost] => {
    "msg": [
        "1.1.1.3",
        "0",
        "64512",
        "349608",
        "316062",
        "896772",
        "0",
        "0",
        "2w4d",
        "1"
    ]
}
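If the end goal is to fail when that last column is "0", one more filter (last) plus failed_when would do it; a sketch along the same lines, assuming you keep the same register inside your loop:

- name: verify prefix count    # hypothetical task name
  vars:
    last_field: >-
      {{
        (
          show_bgp_sessions.data
          | regex_findall('\\d+\\.\\d+\\.\\d+\\.\\d+\\s\\s.*')
          | first
        ).split()
        | last
      }}
  debug:
    msg: "last column is {{ last_field }}"
  failed_when: last_field == "0"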
I'm trying to use ansible to update telegraf.conf's [[inputs.ping]].
telegraf.conf looks like the following:
[[inputs.ping]]
urls = ["tac-temp1","tac-temp2", "tac-temp3","tac-temp4"] #tac
count = 30
timeout = 15.0
[inputs.ping.tags]
name = "tac"
[[inputs.ping]]
urls = ["prod-temp1","prod-temp2", "prod-temp3","prod-temp4"] #prod
count = 30
timeout = 15.0
[inputs.ping.tags]
name = "prod"
[[inputs.ping]]
urls = ["test-temp1","test-temp2", "test-temp3","test-temp4"] #test
count = 30
timeout = 15.0
[inputs.ping.tags]
name = "test"
I'm trying to add ,"tac-temp10" after ,"tac-temp4" in line 2 shown above.
- hosts: Servers
  become: yes
  become_method: sudo
  tasks:
    - name: Loading telegraf.conf content for search
      shell: cat /tmp/telegraf.conf
      register: tele_lookup
    - name: Adding Server to /tmp/telegraf.conf if does not exists
      lineinfile:
        path: /tmp/telegraf.conf
        state: present
        regexp: '^((.*)"] #tac$)'
        line: ',"tac-temp10"'
        backup: yes
      when: tele_lookup.stdout.find('tac-temp10') != '0'
regexp: '^((.*)"] #tac$)' is replacing the whole line with ,"tac-temp10". Expected output:
[[inputs.ping]]
urls = ["tac-temp1","tac-temp2", "tac-temp3","tac-temp4","tac-temp10"] #tac
count = 30
timeout = 15.0
[inputs.ping.tags]
name = "tac"
Warning: ugly regexp ahead. Beware: the next person doing maintenance (including you, after some time has passed...) may well struggle to understand it.
The following will add your server at the end of the list if it is not already present (anywhere in the list) with a single idempotent task.
- name: add our server if needed
  lineinfile:
    path: /tmp/test.conf
    backup: yes
    state: present
    regexp: '^( *urls *= *\[)(("(?!tac-temp10)([a-zA-Z0-9_-]*)",? *)*)(\] #tac)$'
    backrefs: yes
    line: '\1\2, "tac-temp10"\5'
You need backreferences to put the already matched parts of the line back in place. I used backup: yes so I could easily come back to the original during my tests; feel free to drop it.
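Applied to your sample, groups 1, 2 and 5 are put back and the new entry is spliced in between them:

# before
urls = ["tac-temp1","tac-temp2", "tac-temp3","tac-temp4"] #tac
# after
urls = ["tac-temp1","tac-temp2", "tac-temp3","tac-temp4", "tac-temp10"] #tac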
As you can see (and as advised in my warning) this is pretty much impossible to understand for anyone having to quickly read the code. If you have to do anything more fancy/complicated, consider using a template and storing your server list in a variable somewhere.
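To illustrate that last suggestion, here is a rough sketch of a template-driven version (every name, path and list below is made up, and the three snippets belong in separate files):

# group_vars/Servers.yml -- keep the url lists in one place
ping_groups:
  tac:  ["tac-temp1", "tac-temp2", "tac-temp3", "tac-temp4", "tac-temp10"]
  prod: ["prod-temp1", "prod-temp2", "prod-temp3", "prod-temp4"]
  test: ["test-temp1", "test-temp2", "test-temp3", "test-temp4"]

# templates/telegraf-ping.conf.j2 -- one [[inputs.ping]] block per group
{% for group, urls in ping_groups.items() %}
[[inputs.ping]]
  urls = {{ urls | to_json }} #{{ group }}
  count = 30
  timeout = 15.0
  [inputs.ping.tags]
    name = "{{ group }}"
{% endfor %}

# playbook task -- adding a server then means editing the variable, not patching the file
- name: render ping inputs
  template:
    src: telegraf-ping.conf.j2
    dest: /tmp/telegraf.conf

The file is regenerated as a whole instead of being patched line by line, which stays readable as the lists grow.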
I am using kitchen with the EC2 driver. I would like to add a Name tag to the EC2 instances based on the instance name kitchen creates. If I had a 'default' suite and was using centos7.2, kitchen list would name the instance 'default-centos-72'.
I could hard code something like this:
suites:
  - name: default
    driver_config:
      tags: { "Name": "kitchen-default-centos-72" }
But what I'd really like is something like this:
suites:
  - name: default
    driver_config:
      tags: { "Name": <%= figure out instance name and prepend kitchen- %> }
My example suggests using ERB which seems like the way to go to me. But I can't seem to figure out what code to use to get the name of the instance. I tried using a bit of Kitchen::Config.new... but couldn't figure out something that worked. Any suggestions would be much appreciated.
Took me a while, but I finally ran across an example that may have shown me the light. While looking through the InSpec options for kitchen, I found you can have it output a results file with the platform and suite name that were used during the test run. The syntax below, nested under the driver: option in your platforms: block, should work. I haven't tested this by examining the instance during a run, but hopefully I can find some time to do that soon. If it doesn't work, let me know and we can tweak it until it does.
platforms:
  - name: ubuntu
    driver:
      tags:
        Name: test-kitchen-%{platform}-%{suite}
How this should work is that the .kitchen.yml file gets run through an ERB pre-processor so the %{platform} resolves to an instance variable during the loop across the platforms and suites arrays.
As far as I can tell there seems to be no straightforward way to include instance properties in the kitchen YAML. I added the following snippet to my kitchen.yml to check what is available in the kitchen YAML's ERB namespace:
<%
puts "Instance vars: #{instance_variables}"
puts "Local vars: #{local_variables}"
puts "Global vars: #{global_variables}"
puts "Methods: #{methods}"
%>
The results when running kitchen create for a specific instance were disappointing, containing nothing that looks like instance specification data:
Instance vars: []
Local vars: [:_erbout, :spec, :bin_file]
Global vars: [:$-0, :$\, :$DEBUG, :$-W, :$0, :$-d, :$-p, :$PROGRAM_NAME, :$:, :$-I, :$LOAD_PATH, :$", :$LOADED_FEATURES, :$,, :$/, :$INPUT_LINE_NUMBER, :$-l, :$-a, :$INPUT_RECORD_SEPARATOR, :$ORS, :$OUTPUT_RECORD_SEPARATOR, :$PROCESS_ID, :$NR, :$#, :$!, :$DEFAULT_INPUT, :$PID, :$PREMATCH, :$CHILD_STATUS, :$LAST_MATCH_INFO, :$LAST_READ_LINE, :$DEFAULT_OUTPUT, :$MATCH, :$fileutils_rb_have_lchown, :$POSTMATCH, :$LAST_PAREN_MATCH, :$IGNORECASE, :$ARGV, :$fileutils_rb_have_lchmod, :$stdin, :$stdout, :$stderr, :$>, :$<, :$., :$FILENAME, :$-i, :$*, :$SAFE, :$thor_runner, :$_, :$~, :$;, :$-F, :$?, :$$, :$ERROR_INFO, :$&, :$`, :$', :$+, :$=, :$KCODE, :$-K, :$ERROR_POSITION, :$FS, :$FIELD_SEPARATOR, :$OFS, :$OUTPUT_FIELD_SEPARATOR, :$RS, :$VERBOSE, :$-v, :$-w]
Methods: [:inspect, :to_s, :to_yaml, :to_json, :instance_variable_defined?, :remove_instance_variable, :instance_of?, :kind_of?, :is_a?, :tap, :methods, :instance_variable_set, :protected_methods, :instance_variables, :instance_variable_get, :private_methods, :public_methods, :method, :define_singleton_method, :public_send, :singleton_method, :public_method, :extend, :to_enum, :enum_for, :<=>, :===, :=~, :!~, :eql?, :respond_to?, :freeze, :object_id, :send, :display, :class, :nil?, :hash, :dup, :singleton_class, :clone, :then, :itself, :yield_self, :untaint, :taint, :tainted?, :untrusted?, :trust, :frozen?, :untrust, :singleton_methods, :equal?, :!, :__id__, :==, :instance_exec, :!=, :instance_eval, :__send__]
The local variable spec looked hopeful at first, but turned out to be a GemSpec object.
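For example, a minimal sketch of that convention (KITCHEN_INSTANCE_NAME is just a name I picked; kitchen does not set it for you):

# kitchen.yml
suites:
  - name: default
    driver_config:
      tags: { "Name": "kitchen-<%= ENV['KITCHEN_INSTANCE_NAME'] || 'unknown' %>" }

# shell: export the name yourself before invoking kitchen
KITCHEN_INSTANCE_NAME=default-centos-72 kitchen create default-centos-72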
All things considered, you will probably have to create a convention to always specify the instance in some external way. You could use for example an environment variable of your choice, which you could then access in the template as <%= ENV['<VARNAME>'] %> (where you replace <VARNAME> with the name of your environment variable). There are probably other ways of getting the information in there, but you will still have to specify it in more places than just the Test Kitchen command.