Elasticsearch index template for NGINX custom log

I have the following log from NGINX:
111.111.111.111, 11.11.11.11 - 11.11.11.11 [06/May/2016:08:26:10 +0000] "POST /some-service/GetSomething HTTP/1.1" 499 0 "-" "Jakarta Commons-HttpClient/3.1" "7979798797979799" 59.370 - "{\x0A\x22correlationId\x22 : \x22TestCorr1\x22\x0A}"
My Logstash config will be like this:
input { stdin {} }
output { stdout { codec => "rubydebug" } }
filter {
grok {
match => { "message" => "%{COMBINEDAPACHELOG} %{QS:partner_id} %{NUMBER:req_time} %{GREEDYDATA:extra_fields}" }
add_field => [ "received_at", "%{@timestamp}" ]
add_field => [ "received_from", "%{host}" ]
}
mutate {
gsub => ["extra_fields", "\"","",
"extra_fields", "\\x0A","",
"extra_fields", "\\x22",'\"',
"extra_fields", "(\\)",""
]
}
json {
source => "extra_fields"
target => "extra_fields_json"
}
mutate {
add_field => {
"correlationId" => "%{[extra_fields_json][correlationId]}"
}
}
}
The problem is that req_time is a string, so I need to convert it to a float using the following template:
{
"template" : "filebeat*",
"settings" : {
"index.refresh_interval" : "5s"
},
"mappings" : {
"properties" : {
"@timestamp": { "type": "date" },
"partner_id": { "type": "string", "index": "not_analyzed" },
"@version": { "type": "string", "index": "not_analyzed" },
"req_time" : { "type" : "float", "index" : "not_analyzed" },
"res_time" : { "type" : "string", "index" : "not_analyzed" },
"purchaseTime" : { "type" : "date", "index" : "not_analyzed" },
"received_at" : { "type" : "date", "index" : "not_analyzed" },
"itemPrice" : { "type" : "double", "index" : "not_analyzed" },
"total" : { "type" : "integer", "index" : "not_analyzed" },
"bytes" : { "type" : "double", "index" : "not_analyzed" }
}
}
}
Verified using:
curl -XGET 'http://localhost:9200/filebeat-2016.06.30/_mapping/field/req_time'
I am getting:
{"filebeat-2016.06.30":{"mappings":{"nginxlog":{"req_time": {"full_name":"req_time","mapping":{"req_time":{"type":"string"}}}}}}}
so my template definitely does not work. Can anyone help?

In the end, I just removed the template and let ES guess the field type. It did work.
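For reference, a likely reason the template was ignored: on Elasticsearch 2.x the mappings in an index template have to be keyed by the document type (or _default_) — here nginxlog, as the mapping output above shows — and the template only applies to indices created after it is installed. A minimal sketch of such a template, assuming the type name nginxlog and that any existing filebeat-* index is re-created afterwards:

curl -XPUT 'http://localhost:9200/_template/filebeat' -d '
{
  "template" : "filebeat*",
  "settings" : { "index.refresh_interval" : "5s" },
  "mappings" : {
    "nginxlog" : {
      "properties" : {
        "req_time" : { "type" : "float" }
      }
    }
  }
}'

Alternatively, the conversion can be done in Logstash itself so no template change is needed for that field; a sketch:

filter {
  mutate {
    convert => { "req_time" => "float" }
  }
}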


x-amazon-apigateway-integration get lambda arn for uri property

The x-amazon-apigateway-integration extension has a uri property that takes the ARN of the backend. When I try to deploy this with SAM, I get the following error:
Unable to parse API definition because of a malformed integration at path /auth/signIn
How can I get the ARN value of my lambda function that is specified in my template?
Template:
"api" : {
"Type" : "AWS::Serverless::Api",
"Properties" : {
"StageName" : "prod",
"DefinitionUri" : "src/api/swagger.json"
}
},
"signIn": {
"Type": "AWS::Serverless::Function",
"Properties": {
"Handler": "index.signIn",
"Runtime": "nodejs8.10",
"CodeUri": "./src",
"FunctionName": "signIn",
"ReservedConcurrentExecutions": 15,
"Timeout": 5,
"Events": {
"SignIn": {
"Type": "Api",
"Properties": {
"Path": "/signIn",
"Method": "post",
"RestApiId" : {
"Ref": "api"
}
}
}
}
}
},
Swagger Definition:
"paths" : {
"/auth/signIn" : {
"post" : {
"x-amazon-apigateway-integration" : {
"httpMethod" : "POST",
"type" : "aws_proxy",
"uri" : {
"Fn::Sub" : {
"Fn::GetAtt": [
"signIn",
"Arn"
]}
}
},
"parameters" : [
{
"name" : "email",
"in" : "body",
"required" : "true",
"schema" : {
"type" : "string"
}
},
{
"name" : "password",
"in" : "body",
"required" : "true",
"schema" : {
"type" : "string"
}
}
],
"responses" : {
"200" : {
"description" : "Authenticated"
},
"default" : {
"description" : "Unexpected Error"
}
}
}
},
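For what it's worth, the uri for an aws_proxy integration is usually built with Fn::Sub and the API Gateway Lambda invocation ARN format, rather than nesting Fn::GetAtt inside Fn::Sub. A hedged sketch, assuming the Swagger document is embedded through DefinitionBody (or pulled in with AWS::Include), since intrinsic functions inside an external DefinitionUri file are generally not resolved:

"x-amazon-apigateway-integration" : {
  "httpMethod" : "POST",
  "type" : "aws_proxy",
  "uri" : {
    "Fn::Sub" : "arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${signIn.Arn}/invocations"
  }
}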

elasticsearch v5 template to v6

I am currently running an Elasticsearch cluster, version 6.3.1, on AWS. Here is the template file which I need to upload but can't:
{
"template" : "logstash-*",
"settings" : {
"index.refresh_interval" : "5s"
},
"mappings" : {
"_default_" : {
"_all" : {"enabled" : true, "omit_norms" : true},
"dynamic_templates" : [ {
"message_field" : {
"match" : "message",
"match_mapping_type" : "string",
"mapping" : {
"type" : "string", "index" : "analyzed", "omit_norms" : true,
"fielddata" : { "format" : "enabled" }
}
}
}, {
"string_fields" : {
"match" : "*",
"match_mapping_type" : "string",
"mapping" : {
"type" : "string", "index" : "analyzed", "omit_norms" : true,
"fielddata" : { "format" : "enabled" },
"fields" : {
"raw" : {"type": "string", "index" : "not_analyzed", "doc_values" : true, "ignore_above" : 256}
}
}
}
}, {
"float_fields" : {
"match" : "*",
"match_mapping_type" : "float",
"mapping" : { "type" : "float", "doc_values" : true }
}
}, {
"double_fields" : {
"match" : "*",
"match_mapping_type" : "double",
"mapping" : { "type" : "double", "doc_values" : true }
}
}, {
"byte_fields" : {
"match" : "*",
"match_mapping_type" : "byte",
"mapping" : { "type" : "byte", "doc_values" : true }
}
}, {
"short_fields" : {
"match" : "*",
"match_mapping_type" : "short",
"mapping" : { "type" : "short", "doc_values" : true }
}
}, {
"integer_fields" : {
"match" : "*",
"match_mapping_type" : "integer",
"mapping" : { "type" : "integer", "doc_values" : true }
}
}, {
"long_fields" : {
"match" : "*",
"match_mapping_type" : "long",
"mapping" : { "type" : "long", "doc_values" : true }
}
}, {
"date_fields" : {
"match" : "*",
"match_mapping_type" : "date",
"mapping" : { "type" : "date", "doc_values" : true }
}
}, {
"geo_point_fields" : {
"match" : "*",
"match_mapping_type" : "geo_point",
"mapping" : { "type" : "geo_point", "doc_values" : true }
}
} ],
"properties" : {
"@timestamp": { "type": "date", "doc_values" : true },
"@version": { "type": "string", "index": "not_analyzed", "doc_values" : true },
"geoip" : {
"type" : "object",
"dynamic": true,
"properties" : {
"ip": { "type": "ip", "doc_values" : true },
"location" : { "type" : "geo_point", "doc_values" : true },
"latitude" : { "type" : "float", "doc_values" : true },
"longitude" : { "type" : "float", "doc_values" : true }
}
}
}
}
}
}
I tried loading the template via Dev Tools in Kibana and got the following error
{
"error": {
"root_cause": [
{
"type": "mapper_parsing_exception",
"reason": "Failed to parse mapping [_default_]: No field type matched on [float], possible values are [object, string, long, double, boolean, date, binary]"
}
],
"type": "mapper_parsing_exception",
"reason": "Failed to parse mapping [_default_]: No field type matched on [float], possible values are [object, string, long, double, boolean, date, binary]",
"caused_by": {
"type": "illegal_argument_exception",
"reason": "No field type matched on [float], possible values are [object, string, long, double, boolean, date, binary]"
}
},
"status": 400
}
Can somebody please help with what I need to do to get this working on Elasticsearch version 6? I am completely new to Elasticsearch and am just looking to set up logging from CloudTrail -> S3 -> AWS Elasticsearch -> Kibana.
In order to work on 6.3, the correct mapping for the logstash index would need to be (taken from here):
{
"template" : "logstash-*",
"version" : 60001,
"settings" : {
"index.refresh_interval" : "5s"
},
"mappings" : {
"_default_" : {
"dynamic_templates" : [ {
"message_field" : {
"path_match" : "message",
"match_mapping_type" : "string",
"mapping" : {
"type" : "text",
"norms" : false
}
}
}, {
"string_fields" : {
"match" : "*",
"match_mapping_type" : "string",
"mapping" : {
"type" : "text", "norms" : false,
"fields" : {
"keyword" : { "type": "keyword", "ignore_above": 256 }
}
}
}
} ],
"properties" : {
"@timestamp": { "type": "date"},
"@version": { "type": "keyword"},
"geoip" : {
"dynamic": true,
"properties" : {
"ip": { "type": "ip" },
"location" : { "type" : "geo_point" },
"latitude" : { "type" : "half_float" },
"longitude" : { "type" : "half_float" }
}
}
}
}
}
}
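A sketch of installing the corrected template with curl, assuming it is saved as logstash-template.json and <your-es-endpoint> stands in for the AWS Elasticsearch domain endpoint; note that Elasticsearch 6.x requires an explicit Content-Type header:

curl -XPUT "https://<your-es-endpoint>/_template/logstash" \
  -H 'Content-Type: application/json' \
  -d @logstash-template.json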

elasticsearch showing only 1 docs.count on data migration using logstash

I am trying to move data from S3 (a .csv file's data) to an Elasticsearch cluster using Logstash with a custom template.
But it only shows docs.count=1, with the rest of the records as docs.deleted, when I check using the following query in Kibana:
GET /_cat/indices?v
My first question is:
why is only one record [the last one] indexed, while the others end up as deleted?
Now, when I query this index using the query below in Kibana:
GET /my_file_index/_search
{
"query": {
"match_all": {}
}
}
I get only one record, with the comma-separated data in the "message" field. So the second question is:
how can I get the data with column names, just like in the CSV, given that I have specified all the column mappings in my template file which is fed into Logstash?
I also tried specifying the columns field in the Logstash csv filter, but no luck.
columns => ["col1", "col2",...]
Any help would be appreciated.
EDIT-1: Below is my logstash.conf file:
input {
s3{
access_key_id => "xxx"
secret_access_key => "xxxx"
region => "eu-xxx-1"
bucket => "xxxx"
prefix => "abc/stocks_03-jul-2018.csv"
}
}
filter {
csv {
separator => ","
columns => ["AAA","BBB","CCC"]
}
}
output {
amazon_es {
index => "my_r_index"
document_type => "my_r_index"
hosts => "vpc-totemdev-xxxx.eu-xxx-1.es.amazonaws.com"
region => "eu-xxxx-1"
aws_access_key_id => 'xxxxx'
aws_secret_access_key => 'xxxxxx+xxxxx'
document_id => "%{id}"
template => "templates/template_2.json"
template_name => "my_r_index"
}
}
Note:
Version of logstash : 6.3.1
Version of elasticsearch : 6.2
EDIT-2: Adding the template_2.json file along with a sample CSV header:
1. Mapping file:
{
"template" : "my_r_index",
"settings" : {
"index" : {
"number_of_shards" : 50,
"number_of_replicas" : 1
},
"index.codec" : "best_compression",
"index.refresh_interval" : "60s"
},
"mappings" : {
"_default_" : {
"_all" : { "enabled" : false },
"properties" : {
"SECURITY" : {
"type" : "keyword"
},
"SERVICEID" : {
"type" : "integer"
},
"MEMBERID" : {
"type" : "integer"
},
"VALUEDATE" : {
"type" : "date"
},
"COUNTRY" : {
"type" : "keyword"
},
"CURRENCY" : {
"type" : "keyword"
},
"ABC" : {
"type" : "integer"
},
"PQR" : {
"type" : "keyword"
},
"KKK" : {
"type" : "keyword"
},
"EXPIRYDATE" : {
"type" : "text",
"index" : "false"
},
"SOMEID" : {
"type" : "double",
"index" : "false"
},
"DDD" : {
"type" : "double",
"index" : "false"
},
"EEE" : {
"type" : "double",
"index" : "false"
},
"FFF" : {
"type" : "double",
"index" : "false"
},
"GGG" : {
"type" : "text",
"index" : "false"
},
"LLL" : {
"type" : "double",
"index" : "false"
},
"MMM" : {
"type" : "double",
"index" : "false"
},
"NNN" : {
"type" : "double",
"index" : "false"
},
"OOO" : {
"type" : "double",
"index" : "false"
},
"PPP" : {
"type" : "text",
"index" : "false"
},
"QQQ" : {
"type" : "integer",
"index" : "false"
},
"RRR" : {
"type" : "double",
"index" : "false"
},
"SSS" : {
"type" : "double",
"index" : "false"
},
"TTT" : {
"type" : "double",
"index" : "false"
},
"UUU" : {
"type" : "double",
"index" : "false"
},
"VVV" : {
"type" : "text",
"index" : "false"
},
"WWW" : {
"type" : "double",
"index" : "false"
},
"XXX" : {
"type" : "double",
"index" : "false"
},
"YYY" : {
"type" : "double",
"index" : "false"
},
"ZZZ" : {
"type" : "double",
"index" : "false"
},
"KNOCKORWARD" : {
"type" : "text",
"index" : "false"
},
"RANGEATSSPUT" : {
"type" : "double",
"index" : "false"
},
"STDATMESSPUT" : {
"type" : "double",
"index" : "false"
},
"CONSENSUPUT" : {
"type" : "double",
"index" : "false"
},
"CLIENTLESSPUT" : {
"type" : "double",
"index" : "false"
},
"KNOCKOUESSPUT" : {
"type" : "text",
"index" : "false"
},
"RANGACTOR" : {
"type" : "double",
"index" : "false"
},
"STDDACTOR" : {
"type" : "double",
"index" : "false"
},
"CONSCTOR" : {
"type" : "double",
"index" : "false"
},
"CLIENTOR" : {
"type" : "double",
"index" : "false"
},
"KNOCKOACTOR" : {
"type" : "text",
"index" : "false"
},
"RANGEPRICE" : {
"type" : "double",
"index" : "false"
},
"STANDARCE" : {
"type" : "double",
"index" : "false"
},
"NUMBERICE" : {
"type" : "integer",
"index" : "false"
},
"CONSECE" : {
"type" : "double",
"index" : "false"
},
"CLIECE" : {
"type" : "double",
"index" : "false"
},
"KNOCICE" : {
"type" : "text",
"index" : "false"
},
"SKEWICE" : {
"type" : "text",
"index" : "false"
},
"WILDISED" : {
"type" : "text",
"index" : "false"
},
"WILDATUS" : {
"type" : "text",
"index" : "false"
},
"RRF" : {
"type" : "double",
"index" : "false"
},
"SRF" : {
"type" : "double",
"index" : "false"
},
"CNRF" : {
"type" : "double",
"index" : "false"
},
"CTRF" : {
"type" : "double",
"index" : "false"
},
"RANADDLE" : {
"type" : "double",
"index" : "false"
},
"STANDANSTRADDLE" : {
"type" : "double",
"index" : "false"
},
"CONSLE" : {
"type" : "double",
"index" : "false"
},
"CLIDLE" : {
"type" : "double",
"index" : "false"
},
"KNOCKOADDLE" : {
"type" : "text",
"index" : "false"
},
"RANGEFM" : {
"type" : "double",
"index" : "false"
},
"SMIUM" : {
"type" : "double",
"index" : "false"
},
"CONIUM" : {
"type" : "double",
"index" : "false"
},
"CLIEEMIUM" : {
"type" : "double",
"index" : "false"
},
"KNOREMIUM" : {
"type" : "text",
"index" : "false"
},
"COT" : {
"type" : "double",
"index" : "false"
},
"CLIEEDSPOT" : {
"type" : "double",
"index" : "false"
},
"IME" : {
"type" : "keyword"
},
"KKE" : {
"type" : "keyword"
}
}
}
}
}
2. My CSV content:
Header: the actual header is quite lengthy, as there are a lot of columns; please assume the remaining column names continue in the same way as below.
SECURITY | SERVICEID | MEMBERID | VALUEDATE...
First rows: column values as below (some columns have blank values); the real template file above (in the mapping file) covers all the columns.
KKK-LMN 2 1815 6/25/2018
PPL-ORL 2 1815 6/25/2018
SLB-ORD 2 1815 6/25/2018
3. Kibana query output
Query :
GET /my_r_index/_search
{
"query": {
"match_all": {}
}
}
Output:
{
"_index": "my_r_index",
"_type": "my_r_index",
"_id": "IjjIZWUBduulDsi0vYot",
"_score": 1,
"_source": {
"@version": "1",
"message": "XXX-XXX-XXX-USD,2,3190,2018-07-03,UNITED STATES,USD,300,60,Put,2042-12-19,,,,.009108041,q,,,,.269171754,q,,,,,.024127966,q,,,,68.414017367,q,,,,.298398645,q,,,,.502677959,q,,,,,0.040880692400344164,q,,,,,,,159.361792143,,,,.631296636,q,,,,.154877384,q,,42.93,N,Y,\n",
"@timestamp": "2018-08-23T07:56:06.515Z"
}
},
...Other similar records as above.
EDIT-3:
Sample output after using autodetect_column_names => true :-
{
"took": 4,
"timed_out": false,
"_shards": {
"total": 10,
"successful": 10,
"skipped": 0,
"failed": 0
},
"hits": {
"total": 3,
"max_score": 1,
"hits": [
{
"_index": "indr",
"_type": "logs",
"_id": "hAF1aWUBS_wbCH7ZG4tW",
"_score": 1,
"_source": {
"2": "2",
"1815": "1815",
"message": """
PPL-ORD-XNYS-USD,2,1815,6/25/2018,UNITED STATES
""",
"SLB-ORD-XNYS-USD": "PPL-ORD-XNYS-USD",
"6/25/2018": "6/25/2018",
"@timestamp": "2018-08-24T01:03:26.436Z",
"UNITED STATES": "UNITED STATES",
"@version": "1"
}
},
{
"_index": "indr",
"_type": "logs",
"_id": "kP11aWUBctDorPcGHICS",
"_score": 1,
"_source": {
"2": "2",
"1815": "1815",
"message": """
SLBUSD,2,1815,4/22/2018,UNITEDSTATES
""",
"SLB-ORD-XNYS-USD": "SLBUSD",
"6/25/2018": "4/22/2018",
"@timestamp": "2018-08-24T01:03:26.436Z",
"UNITED STATES": "UNITEDSTATES",
"@version": "1"
}
},
{
"_index": "indr",
"_type": "logs",
"_id": "j_11aWUBctDorPcGHICS",
"_score": 1,
"_source": {
"2": "SERVICE",
"1815": "CLIENT",
"message": """
UNDERLYING,SERVICE,CLIENT,VALUATIONDATE,COUNTRY
""",
"SLB-ORD-XNYS-USD": "UNDERLYING",
"6/25/2018": "VALUATIONDATE",
"@timestamp": "2018-08-24T01:03:26.411Z",
"UNITED STATES": "COUNTRY",
"@version": "1"
}
}
]
}
}
I'm pretty certain your single document has an id of %{id}. The first problem comes from the fact that in your CSV file you are not extracting a column named id, yet that's what you're referencing in document_id => "%{id}". Hence all rows get indexed with the literal id %{id}, and each indexation deletes the previous one. At the end, you have a single document which has been indexed as many times as there are rows in your CSV.
Regarding the second issue, you need to fix the filter section like below:
filter {
csv {
separator => ","
autodetect_column_names => true
}
date {
match => [ "VALUATIONDATE", "M/dd/yyyy" ]
}
}
Also, you need to fix your index template like this (I've only added the format setting on the VALUATIONDATE field):
{
"order": 0,
"template": "helloindex",
"settings": {
"index": {
"codec": "best_compression",
"refresh_interval": "60s",
"number_of_shards": "10",
"number_of_replicas": "1"
}
},
"mappings": {
"_default_": {
"_all": {
"enabled": false
},
"properties": {
"UNDERLYING": {
"type": "keyword"
},
"SERVICE": {
"type": "integer"
},
"CLIENT": {
"type": "integer"
},
"VALUATIONDATE": {
"type": "date",
"format": "MM/dd/yyyy"
},
"COUNTRY": {
"type": "keyword"
}
}
}
},
"aliases": {}
}
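Finally, to address the first issue, either drop document_id from the amazon_es output so that Elasticsearch generates a unique id per row, or point it at a column that actually exists in the CSV. A minimal sketch of the output section without document_id, everything else kept as in the original:

output {
  amazon_es {
    index => "my_r_index"
    document_type => "my_r_index"
    hosts => "vpc-totemdev-xxxx.eu-xxx-1.es.amazonaws.com"
    region => "eu-xxxx-1"
    aws_access_key_id => 'xxxxx'
    aws_secret_access_key => 'xxxxxx+xxxxx'
    # document_id removed: with no real "id" column, "%{id}" was taken literally,
    # so every row overwrote the same document
    template => "templates/template_2.json"
    template_name => "my_r_index"
  }
}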

AWS SES Configset - Can't create an event destination to SNS using AWS cloud formation stack

I can't seem to create a new AWS SES config set using an AWS CloudFormation stack. The error says 'YAML not well-formed'.
Below is my JSON template for the CF stack:
"Resources" : {
"ConfigSet": {
"Type": "AWS::SES::ConfigurationSet",
"Properties": {
"Name": "CS_EMAIL_TRACKING"
}
},
"CWEventDestination": {
"Type": "AWS::SES::ConfigurationSetEventDestination",
"Properties": {
"ConfigurationSetName": "CS_EMAIL_TRACKING",
"EventDestination": {
"Name": "CS_EMAIL_TRACKING_CW_DESTINATION",
"Enabled": true,
"MatchingEventTypes": ["bounce", "complaint", "delivery", "open", "reject", "renderingFailure", "send"],
"CloudWatchDestination": {
"DimensionConfigurations": [{
"DimensionName": "AGS",
"DimensionValueSource": "messageTag",
"DefaultDimensionValue": "MY_AGS"
}, {
"DimensionName": "Component",
"DimensionValueSource": "messageTag",
"DefaultDimensionValue": "Mail"
}, {
"DimensionName": "ses:caller-identity",
"DimensionValueSource": "messageTag",
"DefaultDimensionValue": "shouldbeautoset"
}]
}
}
}
},
"SNSEventDestination": {
"Type": "AWS::SES::ConfigurationSetEventDestination",
"Properties": {
"ConfigurationSetName": "CS_EMAIL_TRACKING",
"EventDestination": {
"Name": "CS_EMAIL_TRACKING_SNS_DESTINATION",
"Enabled": true,
"MatchingEventTypes": ["bounce", "complaint", "delivery", "reject", "send"],
"SNSDestination": {
"TopicARN": "arn:aws:sns:us-east-1:99999999:SES-STATUS_TRACKING_TOPIC"
}
}
}
}
}
The above JSON looks good to me, but...
Can anybody help? Am I missing something?
Thanks!
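One possible cause of a 'YAML not well-formed' error on a JSON template is that the file is a fragment rather than a complete template document; the snippet above starts directly at "Resources" without the enclosing braces or a template format version, so CloudFormation cannot parse it as JSON and falls back to (failing) YAML parsing. For comparison, a minimal complete skeleton, abbreviated here to just the config set resource, might look like:

{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Resources" : {
    "ConfigSet" : {
      "Type" : "AWS::SES::ConfigurationSet",
      "Properties" : {
        "Name" : "CS_EMAIL_TRACKING"
      }
    }
  }
}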
Edit: I got the stack working with parameters. Now, though, I face another problem, with SNSDestination as the EventDestination.
It says Unsupported property for EventDestination, even though the AWS documentation says that it's a valid property.
Below is my final code:
{
"AWSTemplateFormatVersion": "2010-09-09",
"Description": "AWS SES ConfigurationSet ${CRED_AGS} Template",
"Parameters": {
"ConfigSetName": {
"Type" : "String",
"Default" : "${CONFIGSET_NAME}"
},
"EventCWDestinationName" : {
"Type" : "String",
"Default" : "${CW_DESTINATION_NAME}"
},
"EventSNSDestinationName" : {
"Type" : "String",
"Default" : "${SNS_DESTINATION_NAME}"
},
"EventTypeBounce" : {
"Type" : "String",
"Default" : "bounce"
},
"EventTypeComplaint" : {
"Type" : "String",
"Default" : "complaint"
},
"EventTypeDelivery" : {
"Type" : "String",
"Default" : "delivery"
},
"EventTypeOpen" : {
"Type" : "String",
"Default" : "open"
},
"EventTypeReject" : {
"Type" : "String",
"Default" : "reject"
},
"EventTypeRenderingFailure" : {
"Type" : "String",
"Default" : "renderingFailure"
},
"EventTypeSend" : {
"Type" : "String",
"Default" : "send"
},
"DimensionValueSourceMsgTag" : {
"Type" : "String",
"Default" : "messageTag"
},
"DimensionNameAGS" : {
"Type" : "String",
"Default" : "AGS"
},
"DefaultDimensionValueAGS" : {
"Type" : "String",
"Default" : "${CRED_AGS}"
},
"DimensionNameComponent" : {
"Type" : "String",
"Default" : "Component"
},
"DefaultDimensionValueComponent" : {
"Type" : "String",
"Default" : "Mail"
},
"DimensionNameIdentity" : {
"Type" : "String",
"Default" : "ses:caller-identity"
},
"DefaultDimensionValueIdentity" : {
"Type" : "String",
"Default" : "shouldbeautoset"
}
},
"Resources": {
"ConfigSet" : {
"Type" : "AWS::SES::ConfigurationSet",
"Properties" : {
"Name" : {
"Ref" : "ConfigSetName"
}
}
},
"CWEventDestination" : {
"Type" : "AWS::SES::ConfigurationSetEventDestination",
"Properties" : {
"ConfigurationSetName" : {
"Ref": "ConfigSetName"
},
"EventDestination" : {
"Name" : {
"Ref" : "EventCWDestinationName"
},
"Enabled" : true,
"MatchingEventTypes" : [
{
"Ref" : "EventTypeBounce"
},
{
"Ref" : "EventTypeComplaint"
},
{
"Ref" : "EventTypeDelivery"
},
{
"Ref" : "EventTypeOpen"
},
{
"Ref" : "EventTypeReject"
},
{
"Ref" : "EventTypeRenderingFailure"
},
{
"Ref" : "EventTypeSend"
}
],
"CloudWatchDestination" : {
"DimensionConfigurations" : [
{
"DimensionName" : {
"Ref" : "DimensionNameAGS"
},
"DimensionValueSource" : {
"Ref" : "DimensionValueSourceMsgTag"
},
"DefaultDimensionValue" : {
"Ref": "DefaultDimensionValueAGS"
}
},
{
"DimensionName" : {
"Ref" : "DimensionNameComponent"
},
"DimensionValueSource" : {
"Ref" : "DimensionValueSourceMsgTag"
},
"DefaultDimensionValue" : {
"Ref" : "DefaultDimensionValueComponent"
}
},
{
"DimensionName" : {
"Ref" : "DimensionNameIdentity"
},
"DimensionValueSource" : {
"Ref" : "DimensionValueSourceMsgTag"
},
"DefaultDimensionValue" : {
"Ref" : "DefaultDimensionValueIdentity"
}
}
]
}
}
}
},
"SNSEventDestination" : {
"Type" : "AWS::SES::ConfigurationSetEventDestination",
"Properties" : {
"ConfigurationSetName" : {
"Ref": "ConfigSetName"
},
"EventDestination" : {
"Name" : {
"Ref" : "EventSNSDestinationName"
},
"Enabled" : true,
"MatchingEventTypes" : [
{
"Ref" : "EventTypeBounce"
},
{
"Ref" : "EventTypeComplaint"
},
{
"Ref" : "EventTypeDelivery"
},
{
"Ref" : "EventTypeReject"
},
{
"Ref" : "EventTypeSend"
}
],
"SNSDestination" : {
"TopicARN" : "${SNS_DELIVERY_TOPIC}"
}
}
}
}
}
}
Can someone please help? BTW, I have the latest AWS CLI.
I think SNSDestination is not supported right now; it is probably there by default when you set up the config set.
CloudFormation is case-sensitive, so it should be SnsDestination, not SNSDestination.
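Applied to the template above, the destination block would then look roughly like this (only the property casing changes; the topic ARN placeholder is kept as-is):

"SnsDestination" : {
  "TopicARN" : "${SNS_DELIVERY_TOPIC}"
}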

Search any part of word in any column

I'm trying to search by full_name, email or phone.
For example, if I start typing "+16", it should display all users whose phone numbers start with or contain "+16". The same goes for full name and email.
My ES config is:
{
"users" : {
"mappings" : {
"user" : {
"properties" : {
"full_name" : {
"analyzer" : "trigrams",
"include_in_all" : true,
"type" : "string"
},
"phone" : {
"type" : "string",
"analyzer" : "trigrams",
"include_in_all" : true
},
"email" : {
"analyzer" : "trigrams",
"include_in_all" : true,
"type" : "string"
}
},
"dynamic" : "false"
}
},
"settings" : {
"index" : {
"creation_date" : "1472720529392",
"number_of_shards" : "5",
"version" : {
"created" : "2030599"
},
"uuid" : "p9nOhiJ3TLafe6WzwXC5Tg",
"analysis" : {
"analyzer" : {
"trigrams" : {
"filter" : [
"lowercase"
],
"type" : "custom",
"tokenizer" : "my_ngram_tokenizer"
}
},
"tokenizer" : {
"my_ngram_tokenizer" : {
"type" : "nGram",
"max_gram" : "12",
"min_gram" : "2"
}
}
},
"number_of_replicas" : "1"
}
},
"aliases" : {},
"warmers" : {}
}
}
Searching for name 'Robert' by part of name
curl -XGET 'localhost:9200/users/_search?pretty' -d'
{
"query": {
"match": {
"_all": "rob"
}
}
}'
doesn't give the expected result; it only matches when using the full name.
Since your analyzer is set on the fields full_name, phone and email, you should not use the _all field but instead enumerate those fields in a multi_match query, like this:
curl -XGET 'localhost:9200/users/_search?pretty' -d'{
"query": {
"multi_match": {
"query": "this is a test",
"fields": [
"full_name",
"phone",
"email"
]
}
}
}'
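For the example from the question, the same query with the partial name would be:

curl -XGET 'localhost:9200/users/_search?pretty' -d'{
  "query": {
    "multi_match": {
      "query": "rob",
      "fields": [
        "full_name",
        "phone",
        "email"
      ]
    }
  }
}'

The nGram analyzer configured on those fields then takes care of the partial matching at index and search time.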