I have the metric below and I want to drop the label exported_namespace="test" using the Prometheus relabel_config, but I'm not sure the config will work properly:
kube_pod_status_ready{condition="false", env="test", exported_namespace="test", instance="10.69.19.17:8080", job="kube-state-metrics", namespace="test", pod="test-1-deploy", uid="1asdadasaas"}
Prometheus scrape config:
- source_labels = [exported_namesapce]
  separator: ,
  action: labeldrop
  regex: (.*)
  replacement: $1
You can do:
writeRelabelConfigs:
  - regex: exported_namespace
    action: labeldrop
OR
writeRelabelConfigs:
  - action: labeldrop
    regex: exported_namespace
Please note that you must use metric_relabel_configs instead of relabel_configs if you want to apply relabeling to the collected metrics. See this article for details.
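For orientation, here is a minimal sketch of where such a rule sits within a scrape_config; the job name and target address are simply copied from the job and instance labels of the metric above, and the rest of the scrape_config is an assumption:
scrape_configs:
  - job_name: kube-state-metrics          # taken from the metric's job label
    static_configs:
      - targets: ['10.69.19.17:8080']     # taken from the metric's instance label
    metric_relabel_configs:               # applied after the scrape, to the collected metrics
      - action: labeldrop
        regex: exported_namespace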
If you want to drop a label with a particular value from the collected metric, then use the following relabeling rule in the metric_relabel_configs section of the needed scrape_config:
- source_labels: [exported_namespace]
  regex: test
  target_label: exported_namespace
  replacement: ""
This relabeling rule substitutes the exported_namespace="test" label with exported_namespace="" label, which, in turn, is automatically removed by Prometheus, since it contains an empty label value. You can play with this relabeling rule at this page.
If you need just dropping the exported_namespace label with any value, then use the following relabeling rule:
- action: labeldrop
  regex: exported_namespace
Note that this rule will drop any value for exported_namespace label. For example, both exported_namespace="test" and exported_namespace="foo" will be dropped.
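Applied to the series from the question, the effect of the labeldrop rule would look roughly like this:
# before
kube_pod_status_ready{condition="false", env="test", exported_namespace="test", instance="10.69.19.17:8080", job="kube-state-metrics", namespace="test", pod="test-1-deploy", uid="1asdadasaas"}
# after
kube_pod_status_ready{condition="false", env="test", instance="10.69.19.17:8080", job="kube-state-metrics", namespace="test", pod="test-1-deploy", uid="1asdadasaas"}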
I'm trying to create two labels (/values) to filter my logs on: warning and error, for graphing and log-line panels.
I'm thinking log_level: warning or log_level: error. However, log_error: and log_warning: would also work. But with the code below, Grafana/Loki groups and distinguishes my label values based on all the different case variations.
- match:
    selector: '{job="varlogs"}'
    stages:
      - regex:
          expression: '.*(?P<log_error>(error|Error|ERROR)).*'
      - labels:
          log_error:
- match:
    selector: '{job="varlogs"}'
    stages:
      - regex:
          expression: '.*(?P<log_warning>(warn|Warn|WARN|warning|Warning|WARNING)).*'
      - labels:
          log_warning:
This works on the Loki side: {host="$host", filename=~"$log_type"} |~"(?i)error". But I prefer them straight as labels before they come in.
Anybody got tips to force lowercase (on the promtail side)?
If I'm reading the question correctly, the reason the sample config produces label values with varying case is that it uses dynamic labels taken from a named group in the regex. Consequently, if a log line contained (e.g.) "Error", it would get a log_error label with the value "Error" (rather than "error", which is desired). Instead of a dynamic label, you should be able to use a static label. Additionally, it should be possible to add the case-insensitive flag to the patterns so that the case variants don't need to be spelled out. Perhaps something like:
- match:
    selector: '{job="varlogs"}'
    stages:
      - regex:
          expression: '(?i).*\berror\b.*'
      - labels:
          log_level: error
- match:
    selector: '{job="varlogs"}'
    stages:
      - regex:
          expression: '(?i).*\bwarn(ing)?\b.*'
      - labels:
          log_level: warning
This could then be queried with:
{host="$host", filename=~"$log_type", log_level="warning"}
Alternatively, instead of testing for a specific label value, the presence of the label itself can be tested for by matching it against a .+ regex (this is suggested by the Prometheus querying documentation, which is referenced by the Grafana Loki log stream selector documentation). With the config in the question, you'd use:
{host="$host", filename=~"$log_type", log_error=~".+"}
Caveat: untested. I'm not a Grafana Loki user, nor do I have access to a server.
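In the same untested spirit: promtail's template stage documents a ToLower function, so another option might be to keep a single dynamic log_level label and lowercase the extracted value before promoting it. A rough sketch, to be checked against the promtail documentation:
- match:
    selector: '{job="varlogs"}'
    stages:
      - regex:
          # case-insensitive match; note a bare "warn" yields log_level="warn", not "warning"
          expression: '(?i).*\b(?P<log_level>error|warn(ing)?)\b.*'
      - template:
          source: log_level
          template: '{{ ToLower .Value }}'
      - labels:
          log_level: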
I'm trying to use regex comparisons in the workflow:rules section of my gitlab-ci file, but it doesn't seem to be working. Here is a basic version:
stages:
  - prep

variables:
  VAR1: "no value"
  APPURL: "no value"

workflow: # Goal: only run pipeline for push events; set some variables based on branch/commit_ref_name
  rules:
    - if: "CI_PIPELINE_SOURCE == "push"
    - if: "CI_COMMIT_REF_NAME =~ /^dev/
      variables:
        VAR1: "Dev Value"
        APPURL: "https://devurl.com"

test_job:
  stage: prep
  image: runner.image/url
  script:
    - echo "$VAR1"
    - echo "APPURL"
When I push a change from a branch named something like "dev1-jirastory", the test job output says "no value" for both variables, so it's not catching the commit_ref_name rule for some reason.
Can someone tell me whether regex comparisons can be used in workflow:rules statements? All the material I've found so far refers to job rules. Since I want these variables set for multiple jobs, I'd rather set them for the entire workflow and subsequent jobs than repeat the same rules in every single job, which can grow and become unmanageable.
I did try accomplishing those value determinations in a root before_script section, but that gets overwritten if I need to do other actions in a before_script for any individual job, so that won't work for me either.
Lastly, if anyone can tell me whether I can use any "command" statements for parsing the commit_ref_name, that would be great. I'd love to do something like:
echo "$CI_COMMIT_REF_NAME" | awk -F "-" '{print $1}'
to pull out the "dev1" portion of the ref name from my sample above, for use in jobs as well.
Thanks in advance.
Using variables in workflow:rules works; just a couple of small changes are needed in your pipeline:
stages:
  - prep

variables:
  VAR1: "no value"
  APPURL: "no value"

workflow: # Goal: only run pipeline for push events; set some variables based on branch/commit_ref_name
  rules:
    - if: '$CI_PIPELINE_SOURCE == "push"'
    - if: '$CI_COMMIT_REF_SLUG =~ /^dev/'
      variables:
        VAR1: "Dev Value"
        APPURL: "https://devurl.com"

test_job:
  stage: prep
  script:
    - echo "$VAR1"
    - echo "$APPURL"
You were missing the $ before the variables in your rules clauses. I would also recommend using $CI_COMMIT_REF_SLUG instead of $CI_COMMIT_REF_NAME so that the comparison is independent of case.
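On the last part of the question (parsing the ref name), here is a small untested sketch of how that awk command could be used inside a job's script section; PREFIX is just an illustrative variable name, and a value computed this way is only visible within that job:
test_job:
  stage: prep
  script:
    # e.g. turns "dev1-jirastory" into "dev1"
    - PREFIX=$(echo "$CI_COMMIT_REF_NAME" | awk -F "-" '{print $1}')
    - echo "$PREFIX"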
Here is a CloudFormation template that works as expected.
https://github.com/shantanuo/cloudformation/blob/master/updated/so2.tpl.txt
But when I change the last line to something like this...
/home/ec2-user/mysecret.txt`'' --valid-ips !Ref MyIpAddress >
It silently ignores the command. Is there any other way to substitute the MyIpAddress variable?
Instead of using Fn::Join you can use Fn::Sub. This will make your template more readable, as you won't have to break your script into multiple lines, and you can reference MyIpAddress as ${MyIpAddress}.
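As an illustration only (untested, and the surrounding UserData keys are my assumption about the linked template rather than something taken from it), the command could then be written in one piece; Fn::Sub rewrites only ${MyIpAddress}, while the $( ... ) command substitutions are left for the shell to evaluate at boot:
UserData:
  Fn::Base64: !Sub |
    #!/bin/bash
    /usr/local/bin/aws-ec2-assign-elastic-ip \
      --access-key "$(cat /home/ec2-user/myaccesskey.txt)" \
      --secret-key "$(cat /home/ec2-user/mysecret.txt)" \
      --valid-ips ${MyIpAddress}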
I do not know how or why it works, but this is what I was looking for.
- >-
  /usr/local/bin/aws-ec2-assign-elastic-ip --access-key ''`cat
  /home/ec2-user/myaccesskey.txt`'' --secret-key ''`cat
  /home/ec2-user/mysecret.txt`'' --valid-ips '
- !Ref MyIpAddress
- |
  '
Thanks to Pat's comment!
I have set up Curator to delete old Elasticsearch indexes via this filter:
(...)
filters:
- filtertype: pattern
  kind: regex
  value: '^xyz-us-(prod|preprod)-(.*)-'
  exclude:
- filtertype: age
  source: name
  direction: older
  timestring: '%Y.%m.%d'
  unit: days
  unit_count: 7
  exclude:
(...)
However, I realized that Curator uses non-greedy regexes, because this filter catches the index xyz-us-prod-foo-2018.10.11 but not xyz-us-prod-foo-bar-2018.10.11.
How can I modify the filter to catch both indexes?
The answer I gave at https://discuss.elastic.co/t/use-greedy-regexes-in-curator-filter/154200 is still good, though you somehow weren't able to get the results I posted there. Anchoring the end and specifying the date regex worked for me: '^xyz-us-(prod|preprod)-.*-\d{4}\.\d{2}\.\d{2}$'
I created these indices:
PUT xyz-us-prod-foo-2018.10.11
PUT xyz-us-prod-foo-bar-2018.10.11
PUT xyz-us-preprod-foo-2018.10.12
PUT xyz-us-preprod-foo-bar-2018.10.12
And ran with this config:
---
actions:
1:
action: delete_indices
filters:
- filtertype: pattern
kind: regex
value: '^xyz-us-(prod|preprod)-.*-\d{4}\.\d{2}\.\d{2}$'
exclude:
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: 7
The dry-run results show all four indices matched:
2018-10-29 20:08:28,120 INFO curator.utils show_dry_run:928 DRY-RUN: delete_indices: xyz-us-preprod-foo-2018.10.12 with arguments: {}
2018-10-29 20:08:28,120 INFO curator.utils show_dry_run:928 DRY-RUN: delete_indices: xyz-us-preprod-foo-bar-2018.10.12 with arguments: {}
2018-10-29 20:08:28,120 INFO curator.utils show_dry_run:928 DRY-RUN: delete_indices: xyz-us-prod-foo-2018.10.11 with arguments: {}
2018-10-29 20:08:28,120 INFO curator.utils show_dry_run:928 DRY-RUN: delete_indices: xyz-us-prod-foo-bar-2018.10.11 with arguments: {}
Curator's implementation of the regex engine uses the U (Ungreedy) flag.
Ungreedy regexes make star quantifiers lazy by default; adding a "?" modifier under the Ungreedy option turns them back to greedy.
Try adding a '?' after the '.*' in your regex:
'^xyz-us-(prod|preprod)-(.*?)-'
I'm trying to write a syntax definition for Gradle in Sublime Text 3. Many pieces of a Gradle build file are really just Groovy, so I'm trying to take advantage of the existing Groovy highlighting support by using include. Thus far this is working fairly well, but I'm stuck on how to apply it to a particular piece.
Here is the Gradle snippet I am trying to highlight:
task copyTask (group: 'Install NGA - deploy', type: Copy, dependsOn: 'whoCares') {
    from 'resources'
    into 'target'
    include('**/*.txt')
}
And this is the syntax I'm using to match that snippet:
- name: copy.task.source.gradle
  begin: '\s*(task)\s+(\w+)\s*\((.*type: Copy.*)\)\s*{'
  comment: 'Copy task definition'
  beginCaptures:
    '1': {name: keyword.task.source.gradle}
    '2': {name: entity.name.function}
    '3': {name: source.groovy}
  end: '}'
  contentName: copy.body.source.gradle
  patterns:
    - include: source.groovy
Most of this appears to work as intended. (It's always hard to know with regex.) My problem is with the third capture: I want to apply all the rules contained in source.groovy to the text between the parentheses, and what I have above is not getting the job done.
To clarify: the text is "captured" and tagged as source.groovy, but that's not quite what I want. I don't want it simply tagged as source.groovy; I want the rules from source.groovy to be used when evaluating the text. The last line of my example successfully does this for the "content" section (the text in between the braces), but simply putting include does not work:
'3': {include: source.groovy} # This gets an error.
If there is a syntax to apply include directly to a capture I can't find it, and I can't figure out another technique. Maybe something that has nested begin and end tags?
If I am understanding this correctly, you would like the third capture group (source.groovy) to match the group: 'Install NGA - deploy', type: Copy, dependsOn: 'whoCares' part of your example.
In that case, you would just need to alter your expression to capture more of the string, like so:
begin: '\s*(task)\s+(\w+)\s*\((.*type: Copy.*?)\)\s*{'
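As for the nested begin/end idea mentioned in the question, here is a heavily hedged sketch (untested, scope names purely illustrative) of how the parenthesized arguments could be routed through source.groovy by a nested rule rather than by a capture:
- name: copy.task.source.gradle
  begin: '\s*(task)\s+(\w+)\s*(?=\(.*type: Copy.*\))'
  comment: 'Copy task definition, nested-arguments variant'
  beginCaptures:
    '1': {name: keyword.task.source.gradle}
    '2': {name: entity.name.function}
  end: '}'
  contentName: copy.body.source.gradle
  patterns:
    - begin: '\('
      end: '\)'
      patterns:
        - include: source.groovy
    - include: source.groovy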