How to use String replace in a Velocity template (AWS AppSync + Elasticsearch)?

I'm writing an AppSync query to search for records by phone number in Elasticsearch (using a Velocity template).
The data stored in Elasticsearch has the form "0123456789", but the request may take the form "012-123-1234". So I intended to use the String replace function to remove the "-" characters. However, my code returns the following error:
"message": "Lexical error, Encountered: \"_\" (95), after: \".\" at *unset* [line 11, column 51]"
I am not sure if my syntax is correct or not, please help.
This is my code:
{
  "version": "2017-02-28",
  "operation": "GET",
  "path": "/res/res/_search",
  "params": {
    "headers": {},
    "queryString": {},
    "body": {
      "from": $util.defaultIfNull($ctx.args.nextToken, 0),
      "size": $util.defaultIfNull($ctx.args.limit, 20),
      "query": {
        "match": { "phoneNumber": "$context.args.phoneNumber".replace('-', '') }
      }
    }
  }
}

Well, I found the error: the " character was in the wrong position.
"match": { "phoneNumber": "$context.args.phoneNumber".replace('-', '') }
=>
"match": { "phoneNumber": "$context.args.phoneNumber.replace('-', '')" }

Related

Kibana Query Language - find numbers in a field

I am struggling with a simple query that is supposed to work based on many tutorials, but I cannot make it work. Having the log field
Request sent, method=GET, headers={}, queryParams={forceArray=[true]}, entity=null, payload length=null} playerId=102
I am trying to get playerId with a 3-digit value. The following query fails
log: /playerId=[0-9]{1,3}/
with KQLSyntaxError: Expected AND, OR, end of input, whitespace but "{" found.
but it is supposed to work according to https://dzone.com/articles/getting-started-with-kibana-advanced-searches
This log: /playerId=[0-9][0-9][0-9]/ returns basically everything with a single '0' character.
This log: /playerId=*/ for some mysterious reason returns nothing.
Edit
regular elastic search lucene based query does not work either
{
  "query": {
    "regexp": {
      "log": {
        "value": "*playerId*"
      }
    }
  }
}
mapping:
{
  "my-index" : {
    "mappings" : {
      "log" : {
        "full_name" : "log",
        "mapping" : {
          "log" : {
            "type" : "text",
            "fields" : {
              "keyword" : {
                "type" : "keyword",
                "ignore_above" : 256
              }
            }
          }
        }
      }
    }
  }
}
any help appreciated
Edit
I validated my regex queries in https://regex101.com/
and they all work.
Edit 2
this works
"query": {
  "match": {
    "log": "playerId"
  }
}
and this returns empty hits
"query": {
  "regexp": {
    "log": "playerId"
  }
}
regards
This is because Kibana uses KQL (Kibana Query Language) by default and that doesn't support regular expressions.
You need to switch to the Lucene Query Language with the query string syntax which supports the regular expression you're trying.
Just click on KQL at the right end of the search bar to change the search syntax.
Also worth noting that regular expression queries are real performance hogs. You should really parse your logs before ingesting them, so that you can query the playerId field independently.
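For instance, a grok ingest pipeline could extract the id at index time (a rough sketch, not part of the original setup; the pipeline name and the target field playerId are chosen here):
PUT _ingest/pipeline/extract-player-id
{
  "description": "Pull playerId out of the raw log line at ingest time",
  "processors": [
    {
      "grok": {
        "field": "log",
        "patterns": ["playerId=%{INT:playerId}"],
        "ignore_failure": true
      }
    }
  ]
}
Once the id lives in its own field, KQL can query it directly (e.g. playerId >= 100 and playerId <= 999) without any regex.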
In any case, if you really want to do it that way, your query is not that far off from the real thing. Here is the correct version that will work for your case:
{
  "query": {
    "query_string": {
      "query": "/.*playerId=[0-9]{3}/",
      "default_field": "log.keyword"
    }
  }
}
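With the Lucene syntax selected, the equivalent pattern can also be typed straight into the Kibana search bar (a sketch derived from the query above):
log.keyword:/.*playerId=[0-9]{3}/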

ElasticSearch wildcard not returning when value has special characters

I have an Elasticsearch service that fetches when you type into a text input, to then populate a table. The search is working (returning filtered data) correctly for all alphanumeric values, but not for special characters (hyphens in particular). For example, for the country Timor-Leste, if I pass in Timor as the term I get the result, but as soon as I add the hyphen (Timor-) I get an empty array response.
const queryService = {
  search(tableName, field, term) {
    // If there is no search term, run the wildcard search with 20 values
    // for the smaller lists to be pre-populated, like "Gender"
    return `
      {
        "size": ${term ? 200 : 20},
        "query": {
          "bool": {
            "must": [
              {
                "match": {
                  "tablename": "${tableName}"
                }
              },
              {
                "wildcard": {
                  "${field}": {
                    "value": "${term ? `*${term.trim()}*` : '*'}",
                    "boost": 1.0,
                    "rewrite": "constant_score"
                  }
                }
              }
            ]
          }
        }
      }
    `;
  },
};
Is there a way I can modify my wildcard request to allow hyphens? Another response I've seen on here suggested using "analyze_wildcard": true, which hasn't worked. I've also tried to manually escape by putting a \ before each hyphen with .replace.
It all boils down to Elasticsearch analyzers.
By default, all text fields will be run through the standard analyzer, e.g.:
GET _analyze
{
  "text": ["Timor-Leste"],
  "analyzer": "standard"
}
This will lowercase your input, strip any special chars, and produce the tokens:
["timor", "leste"]
If you'd like to forgo this default process, add a .keyword mapping:
PUT your-index
{
  "mappings": {
    "properties": {
      "country": {
        "type": "text",
        "fields": {            <---
          "keyword": {
            "type": "keyword"
          }
        }
      }
    }
  }
}
Then reindex your docs, and when dynamically constructing the wildcard query with the newly created .keyword field, make sure the hyphen (and all other special chars) is properly escaped:
POST your-index/_search
{
  "query": {
    "wildcard": {
      "country.keyword": {
        "value": "*Timor\\-*"    <---
      }
    }
  }
}
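To double-check that the keyword sub-field preserves the hyphen, you can point the _analyze call from above at it (a sketch; it should come back with the single unbroken token "Timor-Leste" instead of two):
GET your-index/_analyze
{
  "field": "country.keyword",
  "text": ["Timor-Leste"]
}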

ElasticSearch regexp query of a path

So far I've used a query that would match paths and get aggregations of those paths:
{
  "query": {
    "terms": {
      "path.keyword": [
        "/api/v1.0/cc-dashboard/aggregated",
        "/api/v1.1/cc-dashboard/aggregated",
        "/api/v1.2/cc-dashboard/aggregated",
        "/api/v1.3/cc-dashboard/aggregated"
      ]
    }
  },
  "size": 0,
  "aggs": { ...
Since the only difference between the paths is the version number (which keeps changing) I thought about using Regexp query.
In a normal regex I would search for \/api\/v1\.\d\/cc-dashboard\/aggregated.
I know ElasticSearch uses different reserved characters for this and I've tried everything I know, but the search comes back without hits.
Any thoughts?
I think there are a couple of things to watch out for here. First, make sure that path.keyword is actually of type "keyword", or else you will have problems matching, because you are actually trying to match against tokens and Elasticsearch will split on /. Second, it doesn't look like Elasticsearch supports \d as an escape for a digit, but it does allow [0-9]. Third, to escape the . I had to use two backslashes: \\.
So all together now:
PUT /stackoverflow
{
  "mappings": {
    "properties": {
      "path.keyword": {
        "type": "keyword"
      }
    }
  }
}

POST /stackoverflow/_doc/1
{
  "path.keyword": "/api/v1.0/cc-dashboard/aggregated"
}

POST /stackoverflow/_doc/2
{
  "path.keyword": "/api/v1.1/cc-dashboard/aggregated"
}

POST /stackoverflow/_doc/3
{
  "path.keyword": "/api/not/cc-dashboard/aggregated"
}

GET /stackoverflow/_search
{
  "query": {
    "regexp": {
      "path.keyword": {
        "value": "/api/v1\\.[0-9]/cc-dashboard/aggregated"
      }
    }
  }
}

DELETE /stackoverflow
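That search should return documents 1 and 2 but not document 3. Plugged back into the original aggregation query, it would look something like this (a sketch: the terms aggregation named per_path is a stand-in, since the original aggs body was elided in the question):
GET /stackoverflow/_search
{
  "size": 0,
  "query": {
    "regexp": {
      "path.keyword": {
        "value": "/api/v1\\.[0-9]/cc-dashboard/aggregated"
      }
    }
  },
  "aggs": {
    "per_path": {
      "terms": { "field": "path.keyword" }
    }
  }
}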

negative lookahead Regexp doesnt work in ES dsl query

The mapping of my Elastic search looks like below:
{
  "settings": {
    "index": {
      "number_of_shards": "5",
      "number_of_replicas": "1"
    }
  },
  "mappings": {
    "node": {
      "properties": {
        "field1": {
          "type": "keyword"
        },
        "field2": {
          "type": "keyword"
        },
        "query": {
          "properties": {
            "regexp": {
              "properties": {
                "field1": {
                  "type": "keyword"
                },
                "field2": {
                  "type": "keyword"
                }
              }
            }
          }
        }
      }
    }
  }
}
The problem is:
I am forming ES queries using elasticsearch_dsl Q(). It works perfectly fine in most cases, even when my query contains a complex regexp. But it totally fails if the regexp contains the character '!': it gives no results at all when the search term contains '!'.
For example:
1.) Q('regexp', field1 = "^[a-z]{3}.b.*") (works perfectly)
2.) Q('regexp', field1 = "^f04.*") (works perfectly)
3.) Q('regexp', field1 = "f00.*") (works perfectly)
4.) Q('regexp', field1 = "f04baz?") (works perfectly)
It fails in the below case:
5.) Q('regexp', field1 = "f04((?!z).)*") (fails with no results at all)
I tried adding "analyzer": "keyword" along with "type": "keyword" as above in the fields, but in that case nothing works.
In the browser I tried to check how analyzer:keyword treats the input in the failing case:
http://localhost:9210/search/_analyze?analyzer=keyword&text=f04((?!z).)*
It seems to look fine there, with the result:
{
  "tokens": [
    {
      "token": "f04((?!z).)*",
      "start_offset": 0,
      "end_offset": 12,
      "type": "word",
      "position": 0
    }
  ]
}
I'm running my queries like below:
search_obj = Search(using=_conn, index=_index, doc_type=_type).query(Q('regexp', field1="f04baz?"))
count = search_obj.count()
response = search_obj[0:count].execute()
logger.debug("total nodes(hits):" + " " + str(response.hits.total))
Please help, it's a really annoying problem, as all the regex characters work fine in all the queries except '!'.
Also, how do I check which analyzer is currently applied with the above settings in my mappings?
The Elasticsearch Lucene regex engine does not support any type of lookaround. The ES regex documentation is rather ambiguous, saying that matching everything like .* is very slow, as well as using lookaround regular expressions (which is not only ambiguous but also wrong, since lookarounds, when used wisely, may greatly speed up regex matching).
Since you want to match any string that contains f04 and does not contain z, you may actually use
[^z]*f04[^z]*
Details:
[^z]* - any 0+ chars other than z
f04 - the f04 substring
[^z]* - any 0+ chars other than z
In case you have a multicharacter string to "exclude" (say, z4 rather than z), you may use your approach with the complement operator:
.*f04.*&~(.*z4.*)
This means almost the same but does not support line breaks:
.* - any chars other than newline, as many as possible
f04 - f04
.* - any chars other than newline, as many as possible
& - AND
~(.*z4.*) - any string other than one containing z4
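Translated back into the question's own elasticsearch_dsl style, both lookaround-free patterns would look something like this (a sketch; flags='ALL' enables Lucene's optional operators such as & and ~, which is also their default):
from elasticsearch_dsl import Q

# Any term containing f04 and no z at all -- no lookahead needed
q1 = Q('regexp', field1='[^z]*f04[^z]*')

# Any term containing f04 but not the substring z4, via the
# intersection (&) and complement (~) optional operators
q2 = Q('regexp', field1={'value': '.*f04.*&~(.*z4.*)', 'flags': 'ALL'})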

ElasticSearch and Regex queries

I am trying to query for documents that have dates within the body of the "content" field.
curl -XGET 'http://localhost:9200/index/_search' -d '{
  "query": {
    "regexp": {
      "content": "^(0[1-9]|[12][0-9]|3[01])[- /.](0[1-9]|1[012])[- /.]((19|20)\\d\\d)$"
    }
  }
}'
Getting closer maybe?
curl -XGET 'http://localhost:9200/index/_search' -d '{
  "filtered": {
    "query": {
      "match_all": {}
    },
    "filter": {
      "regexp": {
        "content": "^(0[1-9]|[12][0-9]|3[01])[- /.](0[1-9]|1[012])[- /.]((19|20)\\d\\d)$"
      }
    }
  }
}'
My regex seems to have been off. This regex has been validated on regex101.com. The following query still returns nothing from the 175k documents I have.
curl -XPOST 'http://localhost:9200/index/_search?pretty=true' -d '{
  "query": {
    "regexp": {
      "content": "/[0-9]{4}-[0-9]{2}-[0-9]{2}|[0-9]{2}-[0-9]{2}-[0-9]{4}|[0-9]{2}/[0-9]{2}/[0-9]{4}|[0-9]{4}/[0-9]{2}/[0-9]{2}/g"
    }
  }
}'
I am starting to think that my index might not be set up for such a query. What type of field do you have to use to be able to use regular expressions?
mappings: {
  doc: {
    properties: {
      content: {
        type: string
      },
      title: {
        type: string
      },
      host: {
        type: string
      },
      cache: {
        type: string
      },
      segment: {
        type: string
      },
      query: {
        properties: {
          match_all: {
            type: object
          }
        }
      },
      digest: {
        type: string
      },
      boost: {
        type: string
      },
      tstamp: {
        format: dateOptionalTime,
        type: date
      },
      url: {
        type: string
      },
      fields: {
        type: string
      },
      anchor: {
        type: string
      }
    }
  }
}
I want to find any record that has a date and graph the volume of documents by that date. Step 1 is to get this query working. Step 2 will be to pull the dates out and group the documents by them. Can someone suggest a way to get the first part working, as I know the second part will be really tricky?
Thanks!
You should read Elasticsearch's Regexp Query documentation carefully; you are making some incorrect assumptions about how the regexp query works.
Probably the most important thing to understand here is what the string you are trying to match actually is. You are trying to match terms, not the entire string. If this is being indexed with StandardAnalyzer, as I would suspect, your dates will be separated into multiple terms:
"01/01/1901" becomes tokens "01", "01" and "1901"
"01 01 1901" becomes tokens "01", "01" and "1901"
"01-01-1901" becomes tokens "01", "01" and "1901"
"01.01.1901" actually will be a single token: "01.01.1901" (Due to decimal handling, see UAX #29)
You can only match a single, whole token with a regexp query.
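You can check how a given string is tokenized with the _analyze API (a sketch, assuming the standard analyzer):
GET _analyze
{
  "analyzer": "standard",
  "text": ["01-01-1901"]
}
This should come back with the three separate tokens 01, 01 and 1901, confirming that no single token contains the whole date.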
Elasticsearch (and Lucene) don't support full Perl-compatible regex syntax.
In your first couple of examples, you are using anchors, ^ and $. These are not supported. Your regex must match the entire token to get a match anyway, so anchors are not needed.
Shorthand character classes like \d (or \\d) are also not supported. Instead of \\d\\d, use [0-9]{2}.
In your last attempt, you are using /{regex}/g, which is also not supported. Since your regex needs to match the whole string, the global flag wouldn't even make sense in context. Unless you are using a query parser which uses them to denote a regex, your regex should not be wrapped in slashes.
(By the way: How did this one validate on regex101? You have a bunch of unescaped /s. It complains at me when I try it.)
To support this sort of query on such an analyzed field, you'll probably want to look to span queries, and particularly Span Multiterm and Span Near. Perhaps something like:
{
  "span_near": {
    "clauses": [
      { "span_multi": {
        "match": {
          "regexp": { "content": "0[1-9]|[12][0-9]|3[01]" }
        }
      }},
      { "span_multi": {
        "match": {
          "regexp": { "content": "0[1-9]|1[012]" }
        }
      }},
      { "span_multi": {
        "match": {
          "regexp": { "content": "(19|20)[0-9]{2}" }
        }
      }}
    ],
    "slop": 0,
    "in_order": true
  }
}
For newer Elasticsearch versions (tested on 8.5), we can use .keyword on the field. It will match against the whole sentence.
{
  "size": 10,
  "_source": [
    "load",
    "unload"
  ],
  "query": {
    "bool": {
      "should": [
        {
          "regexp": {
            "load.keyword": {
              "value": ".*Search Term.*",
              "flags": "ALL"
            }
          }
        }
      ]
    }
  }
}
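One caveat: a regexp against a keyword field is case-sensitive by default. On recent versions (7.10+), the regexp query also accepts a case_insensitive parameter (a sketch based on the query above):
{
  "query": {
    "regexp": {
      "load.keyword": {
        "value": ".*search term.*",
        "case_insensitive": true
      }
    }
  }
}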