Writing an if statement for the value in a dictionary

I'm taking an intro game design class and am writing a text adventure. I've set up a dictionary for the game map. For example:
"HALLWAY": {
"name": "HALLWAY",
"desc": "This is a hallway in a maze."
"exits": [{
"exit": "NORTH",
"target": "DEADEND"
}],
"items": []
Early on in the game you have the option to "TAKE" an item. IF the player has taken the item, I want the target to be a different location (not a dead end); ELSE the target is a dead end. How can I write the if/else logic into the value of the target? Or is there a simpler way of doing this?

<!DOCTYPE html>
<script>
'use strict';
let input_data = {
    "HALLWAY": {
        "name": "HALLWAY",
        "desc": "This is a hallway in a maze.",
        "exits": [{ "exit": "NORTH", "target": "DEADEND" }],
        "items": []
    }
};
// input_data["HALLWAY"]["exits"][0]["exit"] = "SOUTH";
// The above statement can be used to update the value.
alert("Value of exit: " + input_data["HALLWAY"]["exits"][0]["exit"]);
alert("Value of target: " + input_data["HALLWAY"]["exits"][0]["target"]);
// Now assume that somehow the value of items has changed.
// I am doing it manually here.
input_data["HALLWAY"]["exits"][0]["items"] = { "item1": "value1" };
alert("Value of items: " + input_data["HALLWAY"]["exits"][0]["items"]["item1"]);
// One way is to check the condition that the items value has changed:
if (input_data["HALLWAY"]["exits"][0]["items"] != null) {
    input_data["HALLWAY"]["exits"][0]["target"] = "target changed";
}
// The other way would be to check whether the length of the items list has changed.
alert("Value of target: " + input_data["HALLWAY"]["exits"][0]["target"]);
</script>

Related

customized sorting using search term in django

I am searching for the term "john" in a list of dicts.
I have a list of dicts like this:
"response": [
{
"name": "Alex T John"
},
{
"name": "Ajo John"
},
{
"name": "John",
}]
I am using:
response_query = sorted(response, key=lambda i: i['name'])
response_query only returns the results in ascending order, but I need the results where the search term is the first name to come first.
Expected result:
{
"name": "John"
},
{
"name": "Ajo John"
},
{
"name": "Alex T John",
}
Names where the search term is the first name should appear first.
If you need to sort with priorities you can try a key function that returns a tuple. In your particular case, as far as I understood the question, this function will work fine:
response_query = sorted(
response,
key=lambda i: (len(i['name'].split()) > 1, i['name'])
)
In other words, I added the condition len(i['name'].split()) > 1, which returns False (so the entry sorts first) if the name consists of one word only, and True otherwise.
If instead the priority condition is that the name starts with the term you used in the search, the query would be:
term = 'john'
...
response_query = sorted(
response,
key=lambda i: (not i['name'].lower().startswith(term), i['name'])
)

jsonPath expression expected value but return list of values

I want to check that every "class_type" value in the response is "REGION".
I am testing a Spring Boot API using MockMvc.
The MockHttpServletResponse looks like this:
Status = 200
Error message = null
Headers = {Content-Type=[application/json;charset=UTF-8]}
Content type = application/json;charset=UTF-8
Body =
{"result":true,
"code":200,
"desc":"OK",
"data":{"total_count":15567,
"items": ...
}}
This is the whole response object. Let's take a closer look, especially at items.
"items": [
{
"id": ...,
"class_type": "REGION",
"region_type": "MULTI_CITY",
"class": "com.model.Region",
"code": "AE-65GQ6",
...
},
{
"id": "...",
"class_type": "REGION",
"region_type": "CITY",
"class": "com.model.Region",
"code": "AE-AAN",
...
},
I tried using jsonPath.
#When("User wants to get list of regions, query is {string} page is {int} pageSize is {int}")
public void userWantsToGetListOfRegionsQueryIsPageIsPageSizeIs(String query, int page, int pageSize) throws Exception {
mockMvc().perform(get("/api/v1/regions" ))
.andExpect(status().is2xxSuccessful())
.andDo(print())
.andExpect(jsonPath("$.data", is(notNullValue())))
.andExpect(jsonPath("$.data.total_count").isNumber())
.andExpect(jsonPath("$.data.items").isArray())
.andExpect(jsonPath("$.data.items[*].class_type").value("REGION"));
log.info("지역 목록");
}
but
jsonPath("$.data.items[*].class_type").value("REGION")
returns
java.lang.AssertionError: Got a list of values ["REGION","REGION","REGION","REGION","REGION","REGION","REGION","REGION","REGION","REGION","REGION","REGION","REGION","REGION","REGION","REGION","REGION","REGION","REGION","REGION"] instead of the expected single value REGION
I just want to check that "$.data.items[*].class_type" contains "REGION".
How can I change this?
One option would be to check whether you have elements in your array which have the class_type equal to 'REGION':
public static final String REGION = "REGION";
mockMvc().perform(get("/api/v1/regions"))
.andExpect(jsonPath("$.data.items[?(#.class_type == '" + REGION + "')]").exists());

Repeatedly send API request based on parameters in Postman

I have a database table with approx 4,000 records (currently). A response to an API call (POST, JSON) gives me data of this table for a maximum of 1,000 records per API call. A parameter ‘PageNo’ defines which of the 4,000 records are selected (e.g. PageNo = 1 gives me record 1-1000). The header data of the response includes a ‘PageCount’, in my example 4. I am able to retrieve that ‘PageCount’ and the test below loops through the PageNo (result in Postman Console = 1 2 3 4).
How can I call the same request repeatedly in a loop and use the value of PageNo (i) as a parameter for that request, like so:
{{baseUrl}}/v1/Units/Search?PageNo={{i}}
In my example I would expect the request to run 4 times with PageNo = 1, 2, 3, 4.
I am aware that I can use a CSV file and loop through the request in Collection Runner but PageCount changes (i.e. the number of records in the table change) and I need to run this loop frequently so creating a new CSV file for each loop is not really an option.
Postman Test:
pm.environment.set('Headers2', JSON.stringify(pm.response.headers));
var Headers2 = JSON.stringify(pm.response.headers);
pm.environment.set('PageCount2', JSON.parse(Headers2)[10].value);
var i;
for (i = 1; i <= pm.environment.get('PageCount2'); i++) {
    console.log(i);
    postman.setNextRequest('custom fields | json Copy');
}
Postman Request:
{
"Location":"{{TestingLocation}}",
"Fields":[
"StockNo",
"BrandDesc"
],
"Filters": {
"StatusCode":"{{TestingUnitSearchStatusCode}}"
},
"PageSize":1000,
"PageNo" : "{{i}}"
}
With postman.setNextRequest() you can set the current request as the next request to run. But you need an exit strategy, otherwise that request would be called an infinite number of times. Note that this only works in the collection runner.
On your first(!) request, store the amount of pages and create a counter.
Increment the counter. If it is smaller than the amount of pages, set the current request as the next request.
Else, do not set a next request. The collection runner will continue with its normal flow. Optionally remove the pages and counter variables.
You can let the request detect that it is on its first iteration by checking whether the page-count and counter variables have been initialized yet, as in the sketch below.
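Here is a minimal sketch of that strategy for the request's Tests tab; the 'PageCount' header name and the environment variable names are illustrative, not taken from your API, and it assumes the environment defines page = 1 before the first run:

// Sketch only: loop this request until all pages are fetched.
let total = pm.environment.get('pageCount');
if (!total) {
    // First iteration: remember the total page count from the response header
    total = parseInt(pm.response.headers.get('PageCount'), 10);
    pm.environment.set('pageCount', total);
}
const page = parseInt(pm.environment.get('page'), 10);
if (page < parseInt(total, 10)) {
    pm.environment.set('page', page + 1);        // next page number
    postman.setNextRequest(pm.info.requestName); // run this same request again
} else {
    pm.environment.unset('pageCount');           // clean up for the next run
    pm.environment.set('page', 1);
}

The request URL would then reference the counter, e.g. {{baseUrl}}/v1/Units/Search?PageNo={{page}}.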
I performed a POC for your use case using the postman-echo API.
Pre-req script --> Takes care of the initial request to the endpoint to retrieve the PageSize and sets it as an env var. Also initializes the iterationCount to 1 (as an env var).
Test script --> Checks the current iteration number and performs the tests.
Here's a working postman collection.
If you're familiar with newman, you can just run:
newman run <collection-name>
You can also import this collection in Postman app and use collection runner.
{
"info": {
"_postman_id": "1d7669a6-a1a1-4d01-a162-f8675e01d1c7",
"name": "Loop Req Collection",
"schema": "https://schema.getpostman.com/json/collection/v2.1.0/collection.json"
},
"item": [
{
"name": "Run_Req",
"event": [
{
"listen": "test",
"script": {
"id": "dac5e805-548b-4bdc-a26e-f56500414208",
"exec": [
"var iterationCount = pm.environment.get(\"iterationCount\");",
"if (pm.environment.get(\"iterationCount\") <= pm.environment.get(\"pageSize\")) {",
" console.log(\"Iteration Number: \" + iterationCount);",
"",
" pm.test(\"Status code is 200\", function() {",
" pm.response.to.have.status(200);",
" });",
"",
" pm.test(\"Check value of pageNo in body is equal to iterationCount\", function() {",
" var jsonData = pm.response.json();",
" return jsonData.json.PageNo == iterationCount;",
"",
" });",
" iterationCount++;",
" pm.environment.set(\"iterationCount\", iterationCount);",
" postman.setNextRequest('Run_Req');",
"}",
"if (pm.environment.get(\"iterationCount\") > pm.environment.get(\"pageSize\")) {",
" console.log(\"Cleanup\");",
" pm.environment.unset(\"pageSize\");",
" pm.environment.set(\"iterationCount\", 1);",
" postman.setNextRequest('');",
" pm.test(\"Cleanup Success\", function() {",
" if (pm.environment.get(\"pageSize\") == null && pm.environment.get(\"iterationCount\") == null) {",
" return true;",
" }",
" });",
"}"
],
"type": "text/javascript"
}
},
{
"listen": "prerequest",
"script": {
"id": "3d43c6c7-4a9b-46cf-bd86-c1823af5a68e",
"exec": [
"if (pm.environment.get(\"pageSize\") == null) {",
" pm.sendRequest(\"https://postman-echo.com/response-headers?pageSize=4\", function(err, response) {",
" var pageSize = response.headers.get('pageSize');",
" var iterationCount = 1;",
" console.log(\"pre-req:pageSize= \" + pageSize);",
" console.log(\"pre-req:iterationCount= \" + iterationCount);",
" pm.environment.set(\"pageSize\", pageSize);",
" pm.environment.set(\"iterationCount\", iterationCount);",
" });",
"",
"}"
],
"type": "text/javascript"
}
}
],
"request": {
"method": "POST",
"header": [
{
"key": "Content-Type",
"name": "Content-Type",
"value": "application/json",
"type": "text"
}
],
"body": {
"mode": "raw",
"raw": "{\n \"Location\":\"{{TestingLocation}}\",\n \"Fields\":[\n \"StockNo\",\n \"BrandDesc\"\n ],\n \"Filters\": {\n \"StatusCode\":\"{{TestingUnitSearchStatusCode}}\"\n },\n \"PageSize\":1000,\n \"PageNo\" : \"{{iterationCount}}\"\n}"
},
"url": {
"raw": "https://postman-echo.com/post",
"protocol": "https",
"host": [
"postman-echo",
"com"
],
"path": [
"post"
]
}
},
"response": []
}
]
}

MongoDB Search and Sort, with Number of Matches and Exact Match

I want to create a small MongoDB search query where the result set is sorted by exact match first, followed by the number of matches.
For example, if I have the following labels:
Physics
11th-Physics
JEE-IIT-Physics
Physics-Physics
Then, if I search for "Physics" it should sort as
Physics
Physics-Physics
11th-Physics
JEE-IIT-Physics
Looking for the sort of "scoring" you are talking about here is an excercise in "imperfect solutions". In this case, the "best fit" here starts with "text search", and "imperfect" is the term to consider first when working with the text search capabilties of MongoDB.
MongoDB is "not" a dedicated "text search" product, nor is it ( like most databases ) trying to be one. Full capabilites of "text search" is reserved for dedicated products that do that as there area of expertise. So maybe not the best fit, but "text search" is given as an option for those who can live with the limitations and don't want to implement another engine. Or Yet! At least.
With that said, let's look at what you can do with the data sample as given. First set up some data in a collection:
db.junk.insert([
{ "data": "Physics" },
{ "data": "11th-Physics" },
{ "data": "JEE-IIT-Physics" },
{ "data": "Physics-Physics" },
{ "data": "Something Unrelated" }
])
Then of course to "enable" the text search capabilties, then you need to index at least one of the fields in the document with the "text" index type:
db.junk.createIndex({ "data": "text" })
Now that is "ready to go", let's have a look at a first basic query:
db.junk.find(
{ "$text": { "$search": "\"Physics\"" } },
{ "score": { "$meta": "textScore" } }
).sort({ "score": { "$meta": "textScore" } })
That is going to give results like this:
{
"_id" : ObjectId("55af83b964876554be823f33"),
"data" : "Physics-Physics",
"score" : 1.5
}
{
"_id" : ObjectId("55af83b964876554be823f30"),
"data" : "Physics",
"score" : 1
}
{
"_id" : ObjectId("55af83b964876554be823f31"),
"data" : "11th-Physics",
"score" : 0.75
}
{
"_id" : ObjectId("55af83b964876554be823f32"),
"data" : "JEE-IIT-Physics",
"score" : 0.6666666666666666
}
So that is "close" to your desired result, but of course there is no "exact match" component. In addition, the logic here used by the text search capabilities with the $text operator means that "Physics-Physics" is the preferred match here.
This is because then engine does not recognize "non words" such as the "hyphen" in between. To it, the word "Physics" appears several times in the indexed content for the document, therefore it has a higher score.
Now the rest of your logic here depends on the application of "exact match" and what you mean by that. If you are looking for "Physics" in the string and "not" surrounded by "hyphens" or other characters then the following does not suit. But you can just match a field "value" that is "exactly" just "Physics":
db.junk.aggregate([
{ "$match": {
"$text": { "$search": "Physics" }
}},
{ "$project": {
"data": 1,
"score": {
"$add": [
{ "$meta": "textScore" },
{ "$cond": [
{ "$eq": [ "$data", "Physics" ] },
10,
0
]}
]
}
}},
{ "$sort": { "score": -1 } }
])
And that will give you a result that both looks at the "textScore" produced by the engine and then applies some math with a logical test. In this case, where the "data" is exactly equal to "Physics", we "weight" the score by an additional factor using $add:
{
"_id": ObjectId("55af83b964876554be823f30"),
"data" : "Physics",
"score" : 11
}
{
"_id" : ObjectId("55af83b964876554be823f33"),
"data" : "Physics-Physics",
"score" : 1.5
}
{
"_id" : ObjectId("55af83b964876554be823f31"),
"data" : "11th-Physics",
"score" : 0.75
}
{
"_id" : ObjectId("55af83b964876554be823f32"),
"data" : "JEE-IIT-Physics",
"score" : 0.6666666666666666
}
That is what the aggregation framework can do for you, by allowing manipulation of the returned data with additional conditions. The end result is passed to the $sort stage ( notice it is reversed in descending order ) to allow that new value to be the sorting key.
But the aggregation framework can really only deal with "exact matches" like this on strings. There is no facility at present to deal with regular expression matches or index positions in strings in a way that returns a meaningful value for projection. Not even a logical match. And the $regex operator is only used to "filter" in queries, so it is of no use here.
So if you were looking for something in a "phrase" that was a bit more involved than a "string equals" exact match, then the other option is using mapReduce.
This is another "imperfect" approach, as the limitations of the mapReduce command mean that the "textScore" from such a query by the engine is "completely gone". While the actual documents will be selected correctly, the inherent "ranking data" is not available to the engine. This is a by-product of how MongoDB "projects" the "score" into the document in the first place, and "projection" is not a feature available to mapReduce.
But you can "play with" the strings using JavaScript, as in my "imperfect" sample:
db.junk.mapReduce(
function() {
var _id = this._id,
score = 0;
delete this._id;
score += this.data.indexOf(search);
score += this.data.lastIndexOf(search);
emit({ "score": score, "id": _id }, this);
},
function() {},
{
"out": { "inline": 1 },
"query": { "$text": { "$search": "Physics" } },
"scope": { "search": "Physics" }
}
)
Which gives results like this:
{
"_id" : {
"score" : 0,
"id" : ObjectId("55af83b964876554be823f30")
},
"value" : {
"data" : "Physics"
}
},
{
"_id" : {
"score" : 8,
"id" : ObjectId("55af83b964876554be823f33")
},
"value" : {
"data" : "Physics-Physics"
}
},
{
"_id" : {
"score" : 10,
"id" : ObjectId("55af83b964876554be823f31")
},
"value" : {
"data" : "11th-Physics"
}
},
{
"_id" : {
"score" : 16,
"id" : ObjectId("55af83b964876554be823f32")
},
"value" : {
"data" : "JEE-IIT-Physics"
}
}
My own "silly little algorithm" here is basically taking both the "first" and "last" index position of the matched string here and adding them together to produce a score. It's likely not what you really want, but the point is that if you can code your logic in JavaScript, then you can throw it at the engine to produce the desired "ranking".
The only real "trick" here to remember is that the "score" must be the "preceeding" part of the grouping "key" here, and that if including the orginal document _id value then that composite key part must be renamed, otherwise the _id will take precedence of order.
This is just part of mapReduce where as an "optimization" all output "key" values are sorted in "ascending order" before being processed by the reducer. Which of course does nothing here since we are not "aggregating", but just using the JavaScript runner and document reshaping of mapReduce in general.
So the overall note is, those are the available options. None of them perfect, but you might be able to live with them or even just "accept" the default engine result.
If you want more then look at external "dedicated" text search products, which would be better suited.
Side Note: The $text searches here are preferred over $regex because they can use an index. A "non-anchored" regular expression ( without the caret ^ ) cannot use an index optimally with MongoDB. Therefore the $text searches are generally going to be a better base for finding "words" within a phrase.
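For illustration only, here is what that difference looks like with an ordinary ascending index (a sketch; this index is an assumption, not part of the answer above):

// With a regular ascending index on "data"...
db.junk.createIndex({ "data": 1 })
// ...an anchored prefix expression can be resolved from the index:
db.junk.find({ "data": /^Physics/ })
// ...while a non-anchored one must examine every indexed value:
db.junk.find({ "data": /Physics/ })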
One more way is to use the $indexOfCP aggregation operator to get the index of the matched string, and then sort on that computed index.
Data insertion
db.junk.insert([
{ "data": "Physics" },
{ "data": "11th-Physics" },
{ "data": "JEE-IIT-Physics" },
{ "data": "Physics-Physics" },
{ "data": "Something Unrelated" }
])
Query
const data = "Physics";
db.junk.aggregate([
{ "$match": { "data": { "$regex": data, "$options": "i" }}},
{ "$addFields": { "score": { "$indexOfCP": [{ "$toLower": "$data" }, { "$toLower": data }]}}},
{ "$sort": { "score": 1 }}
])
The output:
[
{
"_id": ObjectId("5a934e000102030405000000"),
"data": "Physics",
"score": 0
},
{
"_id": ObjectId("5a934e000102030405000003"),
"data": "Physics-Physics",
"score": 0
},
{
"_id": ObjectId("5a934e000102030405000001"),
"data": "11th-Physics",
"score": 5
},
{
"_id": ObjectId("5a934e000102030405000002"),
"data": "JEE-IIT-Physics",
"score": 8
}
]
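Note that "Physics" and "Physics-Physics" both get a score of 0 above, so their relative order is not guaranteed. A possible refinement (a sketch, using the $strLenCP operator) is to break ties on string length so that the exact match sorts first:

db.junk.aggregate([
    { "$match": { "data": { "$regex": data, "$options": "i" }}},
    { "$addFields": {
        "score": { "$indexOfCP": [{ "$toLower": "$data" }, { "$toLower": data }]},
        // Tie-breaker: shorter strings (the exact match) sort first
        "len": { "$strLenCP": "$data" }
    }},
    { "$sort": { "score": 1, "len": 1 }}
])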

Implement auto-complete feature using MongoDB search

I have a MongoDB collection of documents of the form
{
"id": 42,
"title": "candy can",
"description": "canada candy canteen",
"brand": "cannister candid",
"manufacturer": "candle canvas"
}
I need to implement an auto-complete feature based on the input search term by matching in all the fields except id. For example, if the input term is can, then I should return all matching words in the document, such as:
{ "hints": ["candy", "can", "canada", "canteen", ...] }
I looked at this question but it didn't help. I also tried searching for how to do a regex search in multiple fields and extract the matching tokens, or how to extract matching tokens in a MongoDB text search, but couldn't find any help.
tl;dr
There is no easy solution for what you want, since normal queries can't modify the fields they return. There is a solution (using the mapReduce below inline instead of outputting to a collection), but except for very small databases, it is not possible to do this in real time.
The problem
As written, a normal query can't really modify the fields it returns. But there are other problems. If you want to do a regex search in halfway decent time, you would have to index all fields, which would need a disproportionate amount of RAM for that feature. If you didn't index all fields, a regex search would cause a collection scan, which means that every document would have to be loaded from disk, which would take too much time for autocompletion to be convenient. Furthermore, multiple simultaneous users requesting autocompletion would create considerable load on the backend.
The solution
The problem is quite similar to one I have already answered: we need to extract every word from multiple fields, remove the stop words, and save the remaining words, together with a link to the respective document(s) each word was found in, to a collection. Then, for getting an autocompletion list, we simply query the indexed word list.
Step 1: Use a map/reduce job to extract the words
db.yourCollection.mapReduce(
// Map function
function() {
// We need to save this in a local var as per scoping problems
var document = this;
// You need to expand this according to your needs
var stopwords = ["the","this","and","or"];
for(var prop in document) {
// We are only interested in strings and explicitly not in _id
if(prop === "_id" || typeof document[prop] !== 'string') {
continue
}
(document[prop]).split(" ").forEach(
function(word){
// You might want to adjust this to your needs
var cleaned = word.replace(/[;,.]/g,"")
if(
// We neither want stopwords...
stopwords.indexOf(cleaned) > -1 ||
// ...nor string which would evaluate to numbers
!(isNaN(parseInt(cleaned))) ||
!(isNaN(parseFloat(cleaned)))
) {
return
}
emit(cleaned,document._id)
}
)
}
},
// Reduce function
function(k,v){
// Kind of ugly, but works.
// Improvements more than welcome!
var values = { 'documents': []};
v.forEach(
function(vs){
if(values.documents.indexOf(vs)>-1){
return
}
values.documents.push(vs)
}
)
return values
},
{
// We need this for two reasons...
finalize:
function(key,reducedValue){
// First, we ensure that each resulting document
// has the documents field in order to unify access
var finalValue = {documents:[]}
// Second, we ensure that each document is unique in said field
if(reducedValue.documents) {
// We filter the existing documents array
finalValue.documents = reducedValue.documents.filter(
function(item,pos,self){
// The default return value
var loc = -1;
for(var i=0;i<self.length;i++){
// We have to do it this way since indexOf only works with primitives
if(self[i].valueOf() === item.valueOf()){
// We have found the value of the current item...
loc = i;
//... so we are done for now
break
}
}
// If the location we found equals the position of item, they are equal
// If it isn't equal, we have a duplicate
return loc === pos;
}
);
} else {
finalValue.documents.push(reducedValue)
}
// We have sanitized our data, now we can return it
return finalValue
},
// Our result are written to a collection called "words"
out: "words"
}
)
Running this mapReduce against your example would result in db.words looking like this:
{ "_id" : "can", "value" : { "documents" : [ ObjectId("553e435f20e6afc4b8aa0efb") ] } }
{ "_id" : "canada", "value" : { "documents" : [ ObjectId("553e435f20e6afc4b8aa0efb") ] } }
{ "_id" : "candid", "value" : { "documents" : [ ObjectId("553e435f20e6afc4b8aa0efb") ] } }
{ "_id" : "candle", "value" : { "documents" : [ ObjectId("553e435f20e6afc4b8aa0efb") ] } }
{ "_id" : "candy", "value" : { "documents" : [ ObjectId("553e435f20e6afc4b8aa0efb") ] } }
{ "_id" : "cannister", "value" : { "documents" : [ ObjectId("553e435f20e6afc4b8aa0efb") ] } }
{ "_id" : "canteen", "value" : { "documents" : [ ObjectId("553e435f20e6afc4b8aa0efb") ] } }
{ "_id" : "canvas", "value" : { "documents" : [ ObjectId("553e435f20e6afc4b8aa0efb") ] } }
Note that the individual words are the _id of the documents. The _id field is indexed automatically by MongoDB. Since MongoDB tries to keep indices in RAM, we can use a few tricks to both speed up autocompletion and reduce the load put on the server.
Step 2: Query for autocompletion
For autocompletion, we only need the words, without the links to the documents.
Since the words are indexed, we use a covered query – a query answered only from the index, which usually resides in RAM.
To stick with your example, we would use the following query to get the candidates for autocompletion:
db.words.find({_id:/^can/},{_id:1})
which gives us the result
{ "_id" : "can" }
{ "_id" : "canada" }
{ "_id" : "candid" }
{ "_id" : "candle" }
{ "_id" : "candy" }
{ "_id" : "cannister" }
{ "_id" : "canteen" }
{ "_id" : "canvas" }
Using the .explain() method, we can verify that this query uses only the index.
{
"cursor" : "BtreeCursor _id_",
"isMultiKey" : false,
"n" : 8,
"nscannedObjects" : 0,
"nscanned" : 8,
"nscannedObjectsAllPlans" : 0,
"nscannedAllPlans" : 8,
"scanAndOrder" : false,
"indexOnly" : true,
"nYields" : 0,
"nChunkSkips" : 0,
"millis" : 0,
"indexBounds" : {
"_id" : [
[
"can",
"cao"
],
[
/^can/,
/^can/
]
]
},
"server" : "32a63f87666f:27017",
"filterSet" : false
}
Note the indexOnly:true field.
Step 3: Query the actual document
Although we will have to do two queries to get the actual document, since we speed up the overall process, the user experience should be good enough.
Step 3.1: Get the document of the words collection
When the user selects a choice from the autocompletion, we have to query the complete document from words in order to find the documents the word chosen for autocompletion originated from.
db.words.find({_id:"canteen"})
which would result in a document like this:
{ "_id" : "canteen", "value" : { "documents" : [ ObjectId("553e435f20e6afc4b8aa0efb") ] } }
Step 3.2: Get the actual document
With that document, we can now either show a page with search results or, as in this case, redirect to the actual document, which you can get by:
db.yourCollection.find({_id:ObjectId("553e435f20e6afc4b8aa0efb")})
Notes
While this approach may seem complicated at first (well, the mapReduce part is a bit), it is actually pretty easy conceptually. Basically, you are trading real-time results (which you won't have anyway unless you spend a lot of RAM) for speed. Imho, that's a good deal. To make the rather costly mapReduce phase more efficient, implementing incremental mapReduce could be one approach – improving my admittedly hacked mapReduce might well be another.
Last but not least, this whole approach is a rather ugly hack. You might want to dig into elasticsearch or lucene. Those products imho are much, much better suited for what you want.
Thanks to @Markus's solution, I came up with something similar using aggregations instead, knowing that map-reduce is flagged as deprecated in later versions.
const { MongoDBNamespace, Collection } = require('mongodb')
//.replace(/(\b(\w{1,3})\b(\W|$))/g,'').split(/\s+/).join(' ')
const routine = `function (text) {
const stopwords = ['the', 'this', 'and', 'or', 'id']
text = text.replace(new RegExp('\\b(' + stopwords.join('|') + ')\\b', 'g'), '')
text = text.replace(/[;,.]/g, ' ').trim()
return text.toLowerCase()
}`
// If the pipeline includes the $out operator, aggregate() returns an empty cursor.
const agg = [
{
$match: {
a: true,
d: false,
},
},
{
$project: {
title: 1,
desc: 1,
},
},
{
$replaceWith: {
_id: '$_id',
text: {
$concat: ['$title', ' ', '$desc'],
},
},
},
{
$addFields: {
cleaned: {
$function: {
body: routine,
args: ['$text'],
lang: 'js',
},
},
},
},
{
$replaceWith: {
_id: '$_id',
text: {
$trim: {
input: '$cleaned',
},
},
},
},
{
$project: {
words: {
$split: ['$text', ' '],
},
qt: {
$const: 1,
},
},
},
{
$unwind: {
path: '$words',
includeArrayIndex: 'id',
preserveNullAndEmptyArrays: true,
},
},
{
$group: {
_id: '$words',
docs: {
$addToSet: '$_id',
},
weight: {
$sum: '$qt',
},
},
},
{
$sort: {
weight: -1,
},
},
{
$limit: 100,
},
{
$out: {
db: 'listings_db',
coll: 'words',
},
},
]
// Closure for db instance only
/**
*
* @param { MongoDBNamespace } db
*/
module.exports = function (db) {
/** @type { Collection } */
let collection
/**
* Runs the aggregation pipeline
* @return {Promise}
*/
this.refreshKeywords = async function () {
collection = db.collection('listing')
// .toArray() to trigger the aggregation
// it returns an empty cursor, so it's fine
return await collection.aggregate(agg).toArray()
}
}
Only minimal changes should be needed to adapt this for your own collection.
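For completeness, here is a sketch of how the resulting words collection (written to listings_db by the $out stage above) could then serve autocomplete lookups; the prefix and limit here are illustrative:

// Prefix-match the aggregated words, most frequent first.
// "weight" and "docs" are the fields produced by the $group stage above.
db.words.find({ _id: /^can/ }, { _id: 1, weight: 1 })
    .sort({ weight: -1 })
    .limit(10)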