How to apply lifecycle policies in AWS Elasticsearch to many indices

I am trying to do this in AWS Elasticsearch: I create a template for the pattern application-logs-*, and then I want to apply an index lifecycle policy log-rotation-policy to all indices which match that pattern. I have created my policy successfully, but when I try to create a template like so:
PUT _template/application-logs
{
  "index_patterns" : [
    "application-logs-*"
  ],
  "settings" : {
    "index.lifecycle.name": "log-rotation-policy"
  }
}
I get an error:
"type": "illegal_argument_exception",
"reason": "unknown setting [index.policy_id] please check that any required plugins are installed, or check the breaking changes documentation for removed settings"
The AWS documentation is extremely vague on this.

OK, sorry, I thought I would post this answer anyway because I figured out the problem as I was writing it. AWS Elasticsearch doesn't ship X-Pack's ILM (which is why index.lifecycle.name is an unknown setting); it uses the Open Distro Index State Management (ISM) plugin instead, and the correct key to use is opendistro.index_state_management.policy_id, so it should be:
PUT _template/application-logs
{
  "index_patterns" : [
    "application-logs-*"
  ],
  "settings" : {
    "opendistro.index_state_management.policy_id": "log-rotation-policy"
  }
}
I found the answer in the Open Distro Index State Management documentation.
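To confirm the policy is actually attached once a matching index exists, the Open Distro ISM explain API can be queried. A minimal check (the index name here is illustrative, and this assumes the domain exposes the standard ISM endpoints):
GET _opendistro/_ism/explain/application-logs-2021.01.01
The response should list log-rotation-policy as the policy attached to that index.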

Related

CloudFormation complains "array items are not unique"

The CloudFormation template is valid, but it's giving an error which seems to be undocumented:
Properties validation failed for resource MoodleRDS with message: #/VPCSecurityGroups: array items are not unique
The code is uploaded to GitHub because it was too long for the post:
https://github.com/rkhyd/MoodleQuickStart/blob/main/MoodleQuickStartv2.json
Any ideas on how to fix this? I checked the documentation but could not find any reference to this error. Thanks in advance.
The issue is actually pretty clear: it says that you have values in your security groups list that are not unique. Here's what you put:
"VPCSecurityGroups": [
{
"Fn::GetAtt": [
"DBSecGroup",
"GroupId"
]
},
{
"Ref": "DBSecGroup"
}
],
What you did here is put two values in the list that are actually the same value: Fn::GetAtt returns the DBSecGroup group ID, and "Ref": "DBSecGroup" resolves to the same ID, so effectively the list contains the same value twice. Remove one of them and it will be OK.
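For reference, a minimal sketch of the corrected property, keeping only the Fn::GetAtt form:
"VPCSecurityGroups": [
  {
    "Fn::GetAtt": [
      "DBSecGroup",
      "GroupId"
    ]
  }
],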

How to automate the creation of Elasticsearch index patterns for all days?

I am using a CloudWatch subscription filter which automatically sends logs to AWS Elasticsearch, and then I use Kibana from there. The issue is that every day CloudWatch creates a new index, so I have to manually create a new index pattern in Kibana each day, and accordingly I have to create new monitors and alerts in Kibana each day as well. I have to automate this somehow. If there is a better option to go with, that would also be great; I know Datadog is one good option.
A typical workflow will look like this (there are other methods):
Choose a pattern when creating an index, like staff-202001, staff-202002, etc.
Add each index to an alias, like staff.
This can be achieved in multiple ways; the easiest is to create a template with an index pattern, alias, and mapping, as sketched below.
Example: any new index created matching the pattern staff-* will be assigned the given mapping and attached to the alias staff, so we can query staff instead of individual indices and set up alerts on it.
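A minimal sketch of such a template for the staff example (mapping omitted for brevity):
POST _template/staff
{
  "index_patterns": [
    "staff-*"
  ],
  "aliases": {
    "staff": {}
  }
}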
With the fuller template below in place, we can use the alias cwl--aws-containerinsights-eks-cluster-for-test-host to run queries.
POST _template/cwl--aws-containerinsights-eks-cluster-for-test-host
{
  "index_patterns": [
    "cwl--aws-containerinsights-eks-cluster-for-test-host-*"
  ],
  "mappings": {
    "properties": {
      "id": {
        "type": "keyword"
      },
      "firstName": {
        "type": "text"
      },
      "lastName": {
        "type": "text"
      }
    }
  },
  "aliases": {
    "cwl--aws-containerinsights-eks-cluster-for-test-host": {}
  }
}
Note: if unsure of the mapping, we can remove the mappings section.
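Once the template is in place, each new daily index picks up the alias automatically, so searches can target the alias instead of individual indices. A minimal illustration (the field value is made up):
GET cwl--aws-containerinsights-eks-cluster-for-test-host/_search
{
  "query": {
    "match": {
      "firstName": "john"
    }
  }
}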

Get AWS Reservation Utilization by custom tag

I am currently assigning AWS MediaLive channels to a specific group by a custom tag, and I want to get the (Cost Explorer) GetReservationUtilization for a group's channels by filtering on that tag. The AWS documentation for GetReservationUtilization lists the filtering options as:
"Filter": {
.
.
"Tags": {
"Key": "string",
"MatchOptions": [ "string" ],
"Values": [ "string" ]
}
.
.
}
I interpret this as meaning it should be possible to filter by a custom tag via:
"Key": "Group",
"Value": [customId]
But I get an error that says "An error occurred (ValidationException) when calling the GetReservationUtilization operation: Tags expression is not allowed, allowed expression(s): And, Not, Dimensions".
I feel like I have tried everything possible, but I can't seem to get it to work.
Have you looked at the examples in the boto3 documentation? It seems you may need to wrap the tag element inside an And or Not expression, or supply Dimensions instead.
For anyone coming here in the future: filtering reservation utilization by the Tags dimension is currently not supported. Only the following dimensions are supported:
AZ
CACHE_ENGINE
DEPLOYMENT_OPTION
INSTANCE_TYPE
LINKED_ACCOUNT
OPERATING_SYSTEM
PLATFORM
REGION
SERVICE
SCOPE
TENANCY
As specified in the AWS API docs.
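For reference, a sketch of a filter shape GetReservationUtilization does accept, using the SERVICE dimension (the value shown is illustrative):
"Filter": {
  "Dimensions": {
    "Key": "SERVICE",
    "Values": [ "Amazon Elastic Compute Cloud - Compute" ]
  }
}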

Issues with regex in Kibana

I am having a hard time using a regex pattern inside Kibana/Elasticsearch version 6.5.4. The field I am searching for has the following mapping:
"field": {
"type": "text",
"analyzer": "custom_analyzer"
},
Regex searches on this field return several hits when issued directly against Elasticsearch:
GET /my_index/_search
{
  "query": {
    "regexp": {
      "field": "abc[0-9]{4}"
    }
  }
}
On the other hand, in Kibana's discover/dashboard pages all of the queries below return empty:
original query - field:/abc[0-9]{4}/
escaped query - field:/abc\[0\-9\]\{4\}/
desperate query - field:/.*/
Inspecting the request done by kibana to elasticsearch reveals the following query:
"query": {
"bool": {
"must": [
{
"query_string": {
"query": "field:/abc[0-9]{4}/",
"analyze_wildcard": true,
"default_field": "*"
}
}
I expected Kibana to understand the forward-slash syntax /my_query/ and issue a 'regexp' query instead of a 'query_string'. I have tried this with both query languages, "lucene" and "kuery", and with the optional "experimental query features" enabled and disabled.
Digging further, I found an old issue which says that Elasticsearch only runs regexes against the now-deprecated _all field. If this still holds true, I am not sure how regexes work in Kibana/Elasticsearch 6.x.
What am I missing? Any help in clarifying the conditions for using regexes in Kibana would be much appreciated. All other Stack Overflow questions on this subject are either old or related to syntax issues and/or misunderstanding of how the analyzer deals with whitespace, and did not help me.
I don't exactly have the answer on how to make Lucene regexp search work in the Kibana search bar, but I figured out another way to do this in Kibana: use a filter with custom DSL. Here is an example of what to put in the filter's Query JSON:
{
  "regexp": {
    "req.url.keyword": "/questions/[0-9]+/answer"
  }
}
An example URL I have in my data: /questions/432142/answer
In addition to this, you can still write more filters using the Kibana search bar (Lucene syntax), as in the example below. The filter does the appropriate search, with no escaping issues or anything of the sort. Hope it helps.
Also make sure Kibana doesn't have the new query language feature turned on in the top right of the search bar.

Defined template from Logstash not being used by Elasticsearch for mapping

I have the following Logstash output config to go into Elasticsearch from a Postgres database:
https://pastebin.com/BFCH3tuZ
I have defined the template location and the template itself as follows:
https://pastebin.com/mK5qshKM
When I run Logstash I see the following output:
[2017-05-24T20:54:10,828][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2017-05-24T20:54:10,982][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<URI::HTTP:0xff97ab URL:http://localhost:9200/>}
[2017-05-24T20:54:10,985][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>"/etc/logstash/universe_template.json"}
[2017-05-24T20:54:11,045][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"universe_elastic", "settings"=>{"analysis"=>{"filter"=>{"gr$
[2017-05-24T20:54:11,052][INFO ][logstash.outputs.elasticsearch] Installing elasticsearch template to _template/universe_elastic
[2017-05-24T20:54:11,145][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<URI::Generic:0xe60519 URL://localhost:9200$
[2017-05-24T20:54:11,154][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inf$
[2017-05-24T20:54:11,988][INFO ][logstash.pipeline ] Pipeline main started
[2017-05-24T20:54:12,079][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2017-05-24T20:54:12,108][INFO ][logstash.inputs.jdbc ] (0.101000s) select planet.id, planet.x || ':' || planet.y || ':' || planet.z coords, planet.x, planet.y, planet.z ,planetname,ru$
[2017-05-24T20:54:15,006][WARN ][logstash.agent ] stopping pipeline {:id=>"main"}
When I query the Elasticsearch templates I can see my template listed at http://xxxx:9200/_template/:
{
  "universe_elastic": {
    "order": 0,
    "template": "universe_elastic",
    "settings": {
      "index": {
        "analysis": {
          "filter": {
            "gramFilter": {
              "token_chars": [
                "letter",
                "digit",
                "punctuation",
                "symbol"
              ],
              ...
However, when I run a check on my "universe" index, the mappings haven't come through:
https://pastebin.com/hw9hYfLn
I would expect to see the _all field and the include_in_all references set to true/false, but there's nothing. The queries also don't use the analyzers I have specified.
Any ideas what might be going wrong here? I have deleted all the other possible templates and re-created the indexes, etc.
You've done almost everything correctly; you just need to change a single thing. In your template, this line
"template": "universe_elastic",
should read
"template": "universe",
ES only applies a template if the index name matches the template's index pattern (the template field), and your index is named universe, not universe_elastic.
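Note that templates are only applied at index creation time, so the existing index has to be re-created to pick up the mapping. A quick way to verify, assuming the universe index can safely be rebuilt:
DELETE universe
PUT universe
GET universe/_mapping
The returned mapping should now include the analyzers and _all settings from the template.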