Elasticsearch query on indexes whose names match a certain pattern - regex

I have several indexes in my Elasticsearch DB, as follows:
Index_2019_01
Index_2019_02
Index_2019_03
Index_2019_04
.
.
Index_2019_12
Suppose I want to search only the first 3 indexes.
I mean something like this regular expression:
select count(*) from Index_2019_0[1-3] where LanguageId="English"
What is the correct way to do that in Elasticsearch?

How can I query several indexes with certain names?
This can be achieved via multi-index search, which is a built-in capability of Elasticsearch. To achieve the described behavior, try a query like this:
POST /index_2019_01,index_2019_02/_search
{
  "query": {
    "match": {
      "LanguageID": "English"
    }
  }
}
Or, using URI search:
curl 'http://<host>:<port>/index_2019_01,index_2019_02/_search?q=LanguageID:English'
More details are available in the multi-index search documentation. Note that Elasticsearch requires index names to be lowercase.
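If the index list is built programmatically, the multi-index path is just the names joined with commas. A minimal Python sketch of the request construction (the index names and field come from the question; sending the request is left out, so no client library is assumed):

```python
# Build a comma-separated multi-index search path, as used above.
indexes = ["index_2019_%02d" % month for month in range(1, 4)]  # first 3 months
path = "/%s/_search" % ",".join(indexes)

# Same body as the POST example above.
body = {"query": {"match": {"LanguageID": "English"}}}

print(path)  # -> /index_2019_01,index_2019_02,index_2019_03/_search
```

The resulting path can then be sent to the cluster with any HTTP client.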
Can I use a regex to specify an index name pattern?
In short, no. It is possible to use the index name in queries via a special "virtual" field, _index, but its use is limited. For instance, one cannot run a regexp against the index name:
The _index is exposed as a virtual field — it is not added to the
Lucene index as a real field. This means that you can use the _index
field in a term or terms query (or any query that is rewritten to a
term query, such as the match, query_string or simple_query_string
query), but it does not support prefix, wildcard, regexp, or fuzzy
queries.
For instance, the query above can be rewritten as:
POST /_search
{
  "query": {
    "bool": {
      "must": [
        {
          "terms": {
            "_index": [
              "index_2019_01",
              "index_2019_02"
            ]
          }
        },
        {
          "match": {
            "LanguageID": "English"
          }
        }
      ]
    }
  }
}
This combines a bool query with a terms query.
Hope that helps!

Why use POST when you are not sending any additional data? I'd advise using GET in your case. Secondly, if the indexes have similar names, as in your case, you should use an index pattern, as in the query below:
GET /index_2019_*/_search
{
  "query": {
    "match": {
      "LanguageID": "English"
    }
  }
}
Or, as a URL:
curl -XGET "http://<host>:<port>/index_2019_*/_search" -H 'Content-Type: application/json' -d'{"query": {"match":{"LanguageID": "English"}}}'
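One caveat with the wildcard approach: index_2019_* matches every index with that prefix, so it hits all twelve monthly indexes, not just the first three the question asked about. Python's fnmatch uses the same * semantics and illustrates this locally:

```python
from fnmatch import fnmatch

# All monthly indexes from the question.
all_indexes = ["index_2019_%02d" % m for m in range(1, 13)]

# index_2019_* matches every month, so it cannot restrict the search to 01-03.
matched = [name for name in all_indexes if fnmatch(name, "index_2019_*")]
print(len(matched))  # -> 12
```

To hit only a subset, enumerate the names explicitly (comma-separated), as in the first answer.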

While searching across indices with a regex is not possible, you might be able to use date math to take you a bit further.
You can look at the docs here.
As an example, let's say you want the last 3 months from those indices.
That means that if we have
index_2019_01
index_2019_02
index_2019_03
index_2019_04
and today is 2019/04/20, we could use the following query to get 04, 03, and 02:
GET /<index_{now/M-0M{yyyy_MM}}>,<index_{now/M-1M{yyyy_MM}}>,<index_{now/M-2M{yyyy_MM}}>
I used M-0M for the first one so the query construction loop doesn't need a special case for the first index.
Look at the docs regarding URL-encoding this query and including literal braces in the index name; if a client is used, the URL encoding is done for you (at least in the Python client).
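The same month arithmetic can also be done client-side. A hedged sketch of what the three date-math expressions above resolve to, with "today" pinned to 2019/04/20 (the index_ prefix is taken from the question's naming scheme):

```python
from datetime import date

def monthly_index(today, months_back, prefix="index_"):
    """Client-side equivalent of a <index_{now/M-NM{yyyy_MM}}> date-math name."""
    # now/M rounds down to the start of the month, so the day is irrelevant;
    # only walk the month counter back N steps, borrowing years as needed.
    year, month = today.year, today.month
    month -= months_back
    while month < 1:
        month += 12
        year -= 1
    return "%s%04d_%02d" % (prefix, year, month)

today = date(2019, 4, 20)
names = [monthly_index(today, n) for n in range(3)]
print(names)  # -> ['index_2019_04', 'index_2019_03', 'index_2019_02']
```

The borrow logic also handles year boundaries, e.g. two months before January 2019 is index_2018_11.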

Related

Find string in between in Kibana/Elasticsearch with regex, like in Splunk

In Splunk, we can extract a dynamic string that sits between two fixed strings.
Say, for example:
<TextileType>Shirt</TextileType>
<TextileType>Trousers</TextileType>
<TextileType>Shirt</TextileType>
<TextileType>Trousers</TextileType>
<TextileType>Shirt</TextileType>
The output I am expecting:
Shirt - 3
Trousers - 2
I am able to do this easily in Splunk.
How can I achieve this in Kibana?
I have tried many approaches, but could not get any regex to fit my need.
Note: here's the example JSON query in which I need to add the regex. In this example I am searching for "Shirt" manually, but I expect to match it dynamically.
{
  "query": {
    "match": {
      "text": {
        "query": "Shirt",
        "type": "phrase"
      }
    }
  }
}
Assuming the data is in the sample index, you can use a wildcard search:
GET /sample/_search
{
  "query": {
    "wildcard": {
      "column2": "*Shirt*"
    }
  }
}
Notice how it only returns results containing the keyword Shirt.
If you are looking to clean the data, you'd need to run it through a Logstash pipeline to strip the XML tags and leave you with just the text.
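If the goal is only the Shirt/Trousers tallies, the extraction step such a pipeline would perform can be sketched client-side in Python: pull the value out from between the fixed tags, then count occurrences (an illustration of the stripping logic, not an Elasticsearch feature):

```python
import re
from collections import Counter

# Sample lines from the question.
log_lines = """\
<TextileType>Shirt</TextileType>
<TextileType>Trousers</TextileType>
<TextileType>Shirt</TextileType>
<TextileType>Trousers</TextileType>
<TextileType>Shirt</TextileType>
"""

# Capture whatever sits between the two fixed tags, then tally the values.
values = re.findall(r"<TextileType>(.*?)</TextileType>", log_lines)
counts = Counter(values)
print(counts["Shirt"], counts["Trousers"])  # -> 3 2
```

This reproduces the expected "Shirt - 3 / Trousers - 2" output from the question.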

Regex breaks when I use a colon (:)

I just started working with Elasticsearch. By "started working" I mean I have to query an already-running Elasticsearch database. Is there good documentation of the regex syntax it follows? I know about the page on the official site, but it's not very helpful.
The more specific problem is that I want to query for lines of the sort:
10:02:37:623421|0098-TSOT {TRANSITION} {ID} {1619245525} {securityID} {} {fromStatus} {NOT_PRESENT} {toStatus} {WAITING}
or
01:01:36:832516|0058-CT {ADD} {0} {3137TTDR7} {23} {COM} {New} {0} {0} {52} {1}
and more of a similar structure. I don't want a generalized regex. If possible, could someone give me a regex for each of these that would work with Elasticsearch?
I noticed that it matches even when the regexp only matches a substring of the line; this ran fine:
query = {
    "query": {
        "regexp": {
            "message": "[0-9]{2}"
        }
    },
    "sort": [
        {"#timestamp": "asc"}
    ]
}
But it won't match anything if I use:
query = {
    "query": {
        "regexp": {
            "message": "[0-9]{2}:.*"
        }
    },
    "sort": [
        {"#timestamp": "asc"}
    ]
}
I want to write regexes that are more specific, and that are different for the two examples given near the top.
It turns out my message field is indexed in tokenized form rather than raw form, and : is one of the default delimiters of Elasticsearch's tokenizer. For that reason, I can't run a regexp query against the whole message: the regexp is matched against each token individually.
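That effect can be reproduced outside Elasticsearch. A regexp query on an analyzed field must match an entire token, so splitting the line on delimiters and full-matching each token shows why the second pattern finds nothing (the split below is a simplified stand-in for the real analyzer, not its exact behavior):

```python
import re

message = "10:02:37:623421|0098-TSOT {TRANSITION} {ID} {1619245525}"

# Simplified stand-in for the analyzer: split on non-alphanumeric characters,
# so colons disappear and only bare tokens remain.
tokens = [t for t in re.split(r"[^A-Za-z0-9]+", message) if t]

# A regexp query is anchored: it must match a whole token, not the raw line.
def any_token_matches(pattern):
    return any(re.fullmatch(pattern, t) for t in tokens)

print(any_token_matches(r"[0-9]{2}"))     # -> True  ("10" is a token)
print(any_token_matches(r"[0-9]{2}:.*"))  # -> False (no token contains ':')
```

This matches the asker's observation: the colon never survives analysis, so no pattern containing a literal : can match any single token.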

How to write a regexp in Elasticsearch so that it gives me URLs with numbers?

I am trying to write a query in Kibana using the Elasticsearch Query DSL. The basic filter is as follows:
{
  "query": {
    "match": {
      "path": {
        "query": "/abc/",
        "type": "phrase"
      }
    }
  }
}
Now I need to write a query that returns documents whose "path" is of the form /abc/[0-9]+/.
I tried the reference provided here, but it does not make sense to me (I am not well versed in Elasticsearch):
https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-regexp-query.html
I would like to filter results down to those of the form path = /abc/12345/.
This regex might help you do so:
\x22query\x22:\s\x22(\/.*)\x22
It creates a capturing group around your desired output, which you may be able to reference as $1.
You may add additional boundaries to your pattern if you wish, such as this regex:
\x22query\x22:\s\x22([\/a-z0-9]+)\x22
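Within Elasticsearch itself, a regexp query is the usual tool for this. Assuming the path field has an unanalyzed keyword sub-field (a common default mapping, but an assumption here), the request body could look like the sketch below; since regexp queries match the entire term, no ^/$ anchors are needed. Elasticsearch's regexp dialect is not identical to Python's, but this simple character-class pattern is valid in both, so we can check it locally against the sample paths:

```python
import re

# Hypothetical ES query body. "path.keyword" assumes an unanalyzed sub-field.
query = {"query": {"regexp": {"path.keyword": "/abc/[0-9]+/"}}}

# Local check of the same pattern against sample paths from the question;
# fullmatch mirrors ES's whole-term matching.
pattern = query["query"]["regexp"]["path.keyword"]
paths = ["/abc/12345/", "/abc/", "/abc/xyz/"]
matches = [p for p in paths if re.fullmatch(pattern, p)]
print(matches)  # -> ['/abc/12345/']
```

Only the numeric form survives, which is the filtering the question asks for.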

How can I use scan/scroll with pagination and sort in ElasticSearch?

I have an ES DB storing history records from a process I run every day. Because I want to show only 20 records per page in the history (ordered by date), I was using pagination (size + from_) combined with scroll, which worked just fine. But when I wanted to use sort in the query, it didn't work; I found that scroll with sort doesn't work together. Looking for an alternative, I tried the ES helper scan, which works fine for scrolling and sorting the results, but with this solution pagination doesn't seem to work, which I don't understand, since the API says that scan passes all its parameters to the underlying search function. So my question is whether there is any method to combine the three options.
Thanks,
Ruben
When using the elasticsearch.helpers.scan function, you need to pass preserve_order=True to enable sorting.
(Tested using elasticsearch==7.5.1)
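One way to combine all three: let scan (with preserve_order=True) produce the sorted stream, and apply pagination client-side by slicing the iterator. The page arithmetic is the part sketched below; a plain range stands in for the scan generator so the sketch runs without a cluster:

```python
from itertools import islice

def get_page(hits, page, page_size=20):
    """Take one page from an (already sorted) iterator of hits."""
    start = page * page_size
    return list(islice(hits, start, start + page_size))

# Stand-in for: scan(es, query={...}, preserve_order=True)
fake_hits = iter(range(100))

print(get_page(fake_hits, page=2, page_size=20))  # -> [40, 41, ..., 59]
```

Note the trade-off: a scroll cannot jump to an offset, so islice still consumes every hit before the requested page; there is no from_-style random access.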
Yes, you can combine scroll with sort, but when you sort on a string field you will need to change the mapping for it to work properly. From the documentation:
In order to sort on a string field, that field should contain one term
only: the whole not_analyzed string. But of course we still need the
field to be analyzed in order to be able to query it as full text.
The naive approach to indexing the same string in two ways would be to
include two separate fields in the document: one that is analyzed for
searching, and one that is not_analyzed for sorting.
"tweet": {
  "type": "string",
  "analyzer": "english",
  "fields": {
    "raw": {
      "type": "string",
      "index": "not_analyzed"
    }
  }
}
The main tweet field is just the same as before: an analyzed full-text field.
The new tweet.raw subfield is not_analyzed.
Now, or at least as soon as we have reindexed our data, we can use the
tweet field for search and the tweet.raw field for sorting:
GET /_search
{
  "query": {
    "match": {
      "tweet": "elasticsearch"
    }
  },
  "sort": "tweet.raw"
}

Django Haystack + Elasticsearch - how to return results with a small edit distance from query?

So we are using Django-haystack with the Elasticsearch backend to index a bunch of data for searching. It is very fast and works great for the most part, but I've noticed that something I want seems to be absent. For example, consider the search query "cellar door". I would want a query that is slightly off, like a misspelling, e.g. "cellar dor" or "celar door", to match results for "cellar door". If I try queries like this with our current setup, it returns 0 results. I tried using an EdgeNgramField in the search index on the field we wanted to index, but this seems to have no effect at all.
Thanks.
Use the suggest API to perform spell checking:
curl -XPOST 'localhost:9200/index/_search?search_type=count' -d '{
  "suggest": {
    "body": {
      "text": "celar door",
      "term": {
        "field": "summary",
        "analyzer": "simple"
      }
    }
  }
}'
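An alternative to the suggester, if the goal is simply that slightly misspelled queries still return documents, is the fuzziness option of a match query (a standard Query DSL option; the field name summary is carried over from the curl example above). A sketch of the request body it would send:

```python
import json

# Match query with edit-distance tolerance: "AUTO" allows 1-2 edits
# depending on term length, so "celar door" can still match "cellar door".
body = {
    "query": {
        "match": {
            "summary": {
                "query": "celar door",
                "fuzziness": "AUTO",
            }
        }
    }
}

print(json.dumps(body))
```

Unlike the suggester, this returns the matching documents directly rather than spelling corrections.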