I have been trying to achieve federation in my Prometheus setup. While doing this, I want to exclude some metrics from being scraped by my scraper Prometheus.
Here is my federation config:
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'xxxxxxxx'
    scrape_interval: 15s
    honor_labels: true
    metrics_path: '/federate'
    params:
      'match[]':
        - '{job!="kubernetes-nodes"}'
    static_configs:
      - targets:
          - 'my-metrics-source'
As can be seen from the config, I want to exclude any metric that carries the kubernetes-nodes job label and retrieve all the rest. However, when I deploy my config, no metrics are scraped.
Is this a bug in Prometheus, or have I simply misunderstood how the match params work?
If you really need to do this, you need a primary vector selector which includes results.
Otherwise you'll get the error "vector selector must contain at least one non-empty matcher".
So for example with these matchers you'll get what you are trying to achieve:
curl -G --data-urlencode 'match[]={job=~".+", job!="kubernetes-nodes"}' http://your-url.example.com/federate
As a safety measure, to avoid you accidentally writing an instant vector that returns all the time series in your Prometheus, selectors must contain at least one matcher that does not match the empty string. Your selector has no such matcher (job!="kubernetes-nodes" also matches an empty job label), so this is giving you an error.
You could add a selector such as __name__=~".+"; however, at a higher level this is an abuse of federation, as it is not meant for pulling entire Prometheus servers. See https://www.robustperception.io/federation-what-is-it-good-for/
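Applied to the scrape config from the question, the match[] parameter would then look like this (a minimal sketch, subject to the caveat above about pulling so much via federation):
params:
  'match[]':
    - '{job=~".+", job!="kubernetes-nodes"}'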
I was trying to set up a load balancer to cache requests only when a query param is present in the request or a custom method on the API is called.
Either something like:
BASE_URL/api/v1/asset/UUID?static_only=true
or something like
BASE_URL/api/v1/asset/UUID:static
I've set up 2 NEGs linked to the same Cloud Run service. One of them has the CDN enabled so that it can cache responses.
Since this API has some content that can require calls to other services that are subject to change, I want to give the option to cache the response only when the query param is present.
I tried to set up a url-map with the following configuration:
defaultService: DEFAULT_BACKEND
name: path-matcher-1
routeRules:
  - matchRules:
      - queryParameterMatches:
          - exactMatch: 'true'
            name: static_only
        prefixMatch: /api/v1/asset/*
    priority: 1
    service: BACKEND-2
I couldn't find examples of the custom method, and my attempts with the query param didn't work. Is my approach wrong?
I guess there are two parts to this question:
How do I configure the URL map to accept a path that ends with a dynamic value (in this case the UUID of the resource)? The * doesn't seem to work.
Is it possible to configure the URL map to use custom methods (AIP-136)?
We need to send a very large amount of logs to a Splunk server from only one k8s pod (a pod with a huge traffic load). I looked at the docs and found this:
https://kubernetes.io/docs/concepts/cluster-administration/logging/#sidecar-container-with-a-logging-agent
However, there is a Note in the docs warning about significant resource consumption. Is there another, more efficient option? These pods handle traffic, and we cannot add extra load that could risk their stability...
There's an official solution for getting Kubernetes logs: Splunk Connect for Kubernetes. Under the hood it also uses fluentd for the logging part.
https://github.com/splunk/splunk-connect-for-kubernetes
You will find a sample config and a methodology to test it on microK8s first to get acquainted with the config and deployment: https://mattymo.io/deploying-splunk-connect-for-kubernetes-on-microk8s-with-helm/
And if you only want logs from a specific container you can use this section of the values file to select only logs from the container you're interested in:
fluentd:
  # path of logfiles, default /var/log/containers/*.log
  path: /var/log/containers/*.log
  # paths of logfiles to exclude. object type is array as per fluentd specification:
  #  https://docs.fluentd.org/input/tail#exclude_path
  exclude_path:
  #  - /var/log/containers/kube-svc-redirect*.log
  #  - /var/log/containers/tiller*.log
  #  - /var/log/containers/*_kube-system_*.log (to exclude `kube-system` namespace)
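For example, assuming the high-traffic pod's name starts with my-app (a made-up name), you could narrow the path glob so fluentd only tails that pod's log files; a minimal sketch:
fluentd:
  # only tail log files from pods whose names start with "my-app"
  path: /var/log/containers/my-app*.log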
I need to configure a lambda via serverless.yml to use different provision concurrency for different environments. Below is my lambda configuration:
myLambda:
  handler: src/lambdas
  name: myLambda
  provisionedConcurrency: ${self:custom.pc}
  ...

custom:
  pc: ${env:PC}
The value PC is loaded from an environment variable. It works for values greater than 0, but I can't set a value of 0 in one environment. What I want to do is disable provisioned concurrency in the dev environment.
I have read through this doc https://forum.serverless.com/t/conditional-serverless-yml-based-on-stage/1763/3 but it doesn't seem to help in my case.
How can I set provisionedConcurrency conditional based on environment?
Method 1: Stage-based variables via default values
This is a fairly simple trick using a cascading value variable: the first value is the one you want, the second is a default, or fallback, value. These are also called cascading variables.
# serverless.yml
provider:
  stage: "dev"

custom:
  provisionedConcurrency:
    live: 100
    staging: 50
    other: 10

myLambda:
  handler: src/lambdas
  name: myLambda
  provisionedConcurrency: ${self:custom.provisionedConcurrency.${self:provider.stage}, self:custom.provisionedConcurrency.other}
The above, with stage set to dev, will fall back to the "other" value of 10; but if you set the stage via serverless deploy --stage live, it will use the live value of 100.
See here for more details: https://www.serverless.com/framework/docs/providers/aws/guide/variables#syntax
Method 2: Asynchronous value via JavaScript
You can use a JS include and put your conditional logic there. It's called "asynchronous value support". Basically, this allows you to put logic in a JavaScript file which you include, and it can return different values depending on various things (like what AWS account you're on, or whether certain variables are set). It allows you to do this...
provisionedConcurrency: ${file(./detect_env.js):get_provisioned_concurrency}
Which works if you create a javascript file in this folder called detect_env.js, and it has the contents similar to...
// detect_env.js
module.exports.get_provisioned_concurrency = () => {
  // Put logic here to detect which env you are deploying to.
  // As an illustrative assumption, this checks a STAGE environment variable:
  if (process.env.STAGE === 'live') {
    return Promise.resolve('100');
  } else {
    // Otherwise fall back to 10
    return Promise.resolve('10');
  }
};
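With the hypothetical STAGE check above, you would deploy with something like STAGE=live serverless deploy --stage live so the included file can resolve the right value.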
For more info see: https://www.serverless.com/framework/docs/providers/aws/guide/variables#with-a-new-variables-resolver
I felt I had to reply here, even though this was asked months ago, because none of the answers were even remotely close to the right answer, and I really felt sorry for the author or anyone who lands here.
For really sticky problems, I find it's useful to go to the CloudFormation template instead and use the CloudFormation intrinsic functions.
For this case, if you know all the environments you could use Fn::FindInMap
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-findinmap.html
Or, if it's JUST production which needs 0, you could use the conditional Fn::If and a boolean Condition test in the CloudFormation template to check whether the environment equals production: use 0 if so, otherwise use the templated value from SLS.
Potential SLS:
resources:
  Conditions:
    UseZero: !Equals ["production", ${self:provider.stage}]
  Resources:
    myLambda:
      ProvisionedConcurrency: !If [UseZero, 0, ${self:custom.pc}]
You can explicitly remove the ProvisionedConcurrency property as well if you want:
resources:
  Conditions:
    UseZero: !Equals ["production", ${self:provider.stage}]
  Resources:
    myLambda:
      ProvisionedConcurrency: !If [UseZero, !Ref AWS::NoValue, ${self:custom.pc}]
Edit: You can still use SLS to deploy; it simply compiles into a CloudFormation JSON template, which you can explicitly modify with the SLS resources field.
The Serverless Framework provides a really useful dashboard tool with a feature called Parameters. Essentially, it lets you connect your service to the dashboard, set different values for different stages, and then use those values in your serverless.yml with syntax like ${param:VARIABLE_NAME_HERE}; the value gets replaced at deploy time with the right one for whatever stage you are currently deploying. Super handy. There are also a bunch of other features in the dashboard, such as monitoring and troubleshooting.
You can find out more about Parameters at the official documentation here: https://www.serverless.com/framework/docs/guides/parameters/
And how to get started with the dashboard here: https://www.serverless.com/framework/docs/guides/dashboard#enabling-the-dashboard-on-existing-serverless-framework-services
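For example, assuming you have created a parameter named PROVISIONED_CONCURRENCY in the dashboard (the name is a placeholder) with different values per stage, the function would simply reference it:
myLambda:
  handler: src/lambdas
  name: myLambda
  provisionedConcurrency: ${param:PROVISIONED_CONCURRENCY}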
Just use a variable with a null value for dev environments during deploy/package, and SLS will skip this property:
provisionedConcurrency: ${self:custom.variables.provisionedConcurrency}
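A minimal sketch of how that variable could be defined, assuming a per-stage map (stage names and values are placeholders) and a Framework version that accepts null as a variable fallback, so that stages without an entry (e.g. dev) resolve to null and the property is dropped:
custom:
  pcByStage:
    live: 100
    staging: 50
  variables:
    # stages not listed in pcByStage fall back to null, so SLS omits provisionedConcurrency
    provisionedConcurrency: ${self:custom.pcByStage.${self:provider.stage}, null}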
We are using AWS Elasticsearch for logs. The logs are streamed via Logstash continuously. What is the best way to periodically remove the old indexes?
I have searched and various approaches recommended are:
Use lambda to delete old indexes - https://medium.com/#egonbraun/periodically-cleaning-elasticsearch-indexes-using-aws-lambda-f8df0ebf4d9f
Use scheduled docker containers - http://www.tothenew.com/blog/running-curator-in-docker-container-to-remove-old-elasticsearch-indexes/
These approaches seem like overkill for such a basic requirement as "delete indexes older than 15 days".
What is the best way to achieve that? Does AWS provide any setting that I can tweak?
Elasticsearch 6.6 brings a new technology called Index Lifecycle Management (see here). Each index is assigned a lifecycle policy, which governs how the index transitions through specific stages until it is deleted.
For example, if you are indexing metrics data from a fleet of ATMs into Elasticsearch, you might define a policy that says:
When the index reaches 50GB, roll over to a new index.
Move the old index into the warm stage, mark it read only, and shrink it down to a single shard.
After 7 days, move the index into the cold stage and move it to less expensive hardware.
Delete the index once the required 30 day retention period is reached.
The technology is still in beta, but it is probably the way to go from now on.
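For the "delete indices older than 15 days" requirement from the question, a minimal sketch of such a policy (the policy name is made up, and this assumes your cluster exposes the ILM API, which the managed AWS Elasticsearch service may not) could look like this, e.g. from the Kibana Dev Tools console:
PUT _ilm/policy/delete_after_15_days
{
  "policy": {
    "phases": {
      "delete": {
        "min_age": "15d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}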
Running curator is pretty light and easy.
Here you can find a Dockerfile, config and action-file.
https://github.com/zakkg3/curator
Also, Curator can help you if you need to (among other things):
Add or remove indices (or both!) from an alias
Change shard routing allocation
Delete snapshots
Open closed indices
forceMerge indices
reindex indices, including from remote clusters
Change the number of replicas per shard for indices
rollover indices
Take a snapshot (backup) of indices
Restore snapshots
https://www.elastic.co/guide/en/elasticsearch/client/curator/current/index.html
Here is a typical action file for deleting indices older than 15 days:
actions:
  1:
    action: delete_indices
    description: >-
      Delete indices older than 15 days (based on index name), for logstash-
      prefixed indices. Ignore the error if the filter does not result in an
      actionable list of indices (ignore_empty_list) and exit cleanly.
    options:
      ignore_empty_list: True
      disable_action: True
    filters:
    - filtertype: pattern
      kind: prefix
      value: logstash-
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 15
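Note that with disable_action: True the action above is skipped entirely; set it to False and schedule the run (e.g. via cron) with something like curator --config /path/to/config.yml /path/to/action_file.yml (paths are placeholders) to actually delete the indices.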
I followed the elasticsearch-curator documentation to install the package:
https://www.elastic.co/guide/en/elasticsearch/client/curator/current/pip.html
Then I used the AWS base example of how to automate index cleanup, using the signature-based authentication provided by the requests_aws4auth package:
https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/curator.html
It worked like a charm.
You can decide to run this inside a Lambda or a Docker container, or include it in your own DevOps CLI.
Deployment task '[5.1.4] **Configure Redis, service bus and Update Databases and Samples**'
with id '04d8e453-7f22-420d' and with scenario_id '9349bff9-9e41-4c26-9a90'
Given the above text, I need a regex which should give this output:
Configure Redis, service bus and Update Databases and Samples
Assuming that the text ends with a single quote, this is the regex you are looking for:
Deployment task '\[.*?\]\s*([^']+)
And here is an example of how you can grab the value in PowerShell:
[regex]::Match($yourString, "Deployment task '\[.*?\]\s*([^']+)").Groups[1].Value