Unit Test to find a keyword in a response array - unit-testing

Good day everyone, I am new here.
I have response data that looks like this:
[
    {
        "Outlet": "Outlet1",
        "Inventory": 12
    },
    {
        "Outlet": "Outlet2",
        "Inventory": 0
    },
    {
        "Outlet": "Outlet3",
        "Inventory": 3
    },
    {
        "Outlet": "Outlet4",
        "Inventory": 0
    }
]
I need to verify that the Outlet1 inventory is exactly 12, and that the inventory of every outlet EXCEPT Outlet1 is 0. Do I need to loop in the test?
I’ve already tried:
pm.test("Inventory.OnHand Outlet1 == 12", () => {
let Outlet1Result = jsonData.find(a => a.Outlet === "Outlet1")
pm.expect(Outlet1Result.Outlet).to.eql("Outlet1")
pm.expect(Outlet1Result.Inventory).to.eql(12)
});
pm.test("Inventory.OnHand not Outlet1 == 0", () => {
if (jsonData.Outlet !== "Outlet1") {
jsonData.forEach(function() {
let result2 = jsonData.find(a => a.Outlet !== "Outlet1")
pm.expect(result2.Inventory).to.eql(0)
}) ;
}
});
I have tried using these 2 tests. The first test worked just fine, but I think the second test is wrong, because it passed when it should not have: Outlet3's inventory is 3, while the test expects it to be 0. (It passes because jsonData.Outlet is undefined on an array, so the if-branch always runs, and find(a => a.Outlet !== "Outlet1") always returns the first non-Outlet1 element, Outlet2, whose inventory really is 0.)
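One direct fix that keeps the two-test layout is to filter out Outlet1 and assert on every remaining element (a sketch reusing the jsonData variable from above):
pm.test("Inventory.OnHand not Outlet1 == 0", () => {
    jsonData
        .filter(a => a.Outlet !== "Outlet1")
        .forEach(item => {
            // The message names the failing outlet, e.g. Outlet3
            pm.expect(item.Inventory, `Expected inventory of ${item.Outlet} to be 0`).to.eql(0);
        });
});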

Another way would be:
pm.test("Validate inventory values", () => {
jsonData.forEach(function (item) {
if (item.Outlet !== "Outlet1") {
pm.expect(item.Inventory,
`Expected inventory value of ${item.Outlet} to be 0`).
to.
eql(0)
} else {
pm.expect(item.Inventory,
`Expected inventory value of ${item.Outlet} to be 12`).
to.
eql(12)
}
})
});

Why can't you just compare the object as a whole rather than individual fields, since you are comparing exact values anyway?
let expected = [
    {
        "Outlet": "Outlet1",
        "Inventory": 12
    },
    {
        "Outlet": "Outlet2",
        "Inventory": 0
    },
    {
        "Outlet": "Outlet3",
        "Inventory": 3
    },
    {
        "Outlet": "Outlet4",
        "Inventory": 0
    }
]
pm.expect(jsonData).to.deep.eql(expected, `Expected ${JSON.stringify(jsonData, null, 2)} to be equal to ${JSON.stringify(expected, null, 2)}`)
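Note that deep equality on arrays is order-sensitive, so this only works if the API guarantees the order of the outlets. If it doesn't, one option is to sort both sides first (a sketch, assuming Outlet names are unique):
let byOutlet = (a, b) => a.Outlet.localeCompare(b.Outlet);
pm.expect([...jsonData].sort(byOutlet)).to.deep.eql([...expected].sort(byOutlet));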

Related

In Postman tests, how can I find if a value is set where another value is equal to something?

Example:
[
    {
        "id": 1,
        "value": 1000
    },
    {
        "id": 2,
        "value": 500
    }
]
I want to basically say check that value is 1000 where id = 1.
The code:
pm.test("Check value is correct", function () {
const responseJson = pm.response.json();
pm.expect(responseJson.value = 1000);
pm.expect(responseJson.id = 1);
});
Is that the correct way to do that test? Or is that going to check both is valid?
responseJson is an array, so this is not going to work, because you are not accessing any array element. Always try your code first. There are other problems, too; e.g. pm.expect(responseJson.value = 1000); is not going to work: = is an assignment, not a comparison. You have to chain the checks; this syntax is incorrect.
You can filter based on id and check the value then:
pm.test("Check value is correct", function () {
const responseJson = pm.response.json();
const [filteredObject] = responseJson.filter(el => el.id === 1);
pm.expect(filteredObject.value).to.eql(1000);
});
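Alternatively, Array.prototype.find is a more direct fit when you expect a single match (a sketch; the extra assertion guards against the id being absent):
pm.test("Check value is correct", function () {
    const responseJson = pm.response.json();
    const match = responseJson.find(el => el.id === 1); // first element with id === 1
    pm.expect(match, "no element with id 1").to.not.be.undefined;
    pm.expect(match.value).to.eql(1000);
});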
I recommend reading test examples in Postman docs.

Dart - find duplicate values on a List

How can I find duplicate values in a list?
Let's say I've got a List like this:
List<Map<String, dynamic>> users = [
  { "name": 'John', 'age': 18 },
  { "name": 'Jane', 'age': 21 },
  { "name": 'Mary', 'age': 23 },
  { "name": 'Mary', 'age': 27 },
];
How can I iterate the list to know whether there are users with the same name?
A simple way would be this:
void main() {
  List<Map<String, dynamic>> users = [
    { "name": 'John', 'age': 18 },
    { "name": 'Jane', 'age': 21 },
    { "name": 'Mary', 'age': 23 },
    { "name": 'Mary', 'age': 27 },
  ];

  List names = []; // List();
  users.forEach((u) {
    if (names.contains(u["name"])) print("duplicate ${u["name"]}");
    else names.add(u["name"]);
  });
}
Result:
duplicate Mary
Probably a cleaner solution with extensions.
By declaring:
extension ListExtensions<E> on List<E> {
  List<E> removeAll(Iterable<E> allToRemove) {
    if (allToRemove == null) {
      return this;
    } else {
      allToRemove.forEach((element) {
        this.remove(element);
      });
      return this;
    }
  }

  List<E> getDupes() {
    List<E> dupes = List.from(this);
    dupes.removeAll(this.toSet().toList());
    return dupes;
  }
}
then you can find your duplicates by calling List.getDupes()
Note that the function removeAll doesn't exist in my current Dart library, in case you're reading this when they implement it somehow.
Also keep in mind the equals() function. In a List<String>, ["Rafa", "rafa"] doesn't contain duplicates.
If you indeed want to achieve this level of refinement, you'd have to apply a distinctBy function:
import 'dart:collection'; // for HashSet

extension ListExtensions<E> on List<E> {
  List<E> removeAll(Iterable<E> allToRemove) {
    if (allToRemove == null) {
      return this;
    } else {
      allToRemove.forEach((element) {
        this.remove(element);
      });
      return this;
    }
  }

  List<E> distinctBy(predicate(E selector)) {
    HashSet set = HashSet();
    List<E> list = [];
    toList().forEach((e) {
      dynamic key = predicate(e);
      if (set.add(key)) {
        list.add(e);
      }
    });
    return list;
  }

  List<E> getDupes({E Function(E) distinctBy}) {
    List<E> dupes = List.from(this);
    if (distinctBy == null) {
      dupes.removeAll(this.toSet().toList());
    } else {
      dupes.removeAll(this.distinctBy(distinctBy).toSet().toList());
    }
    return dupes;
  }
}
I had a feeling Rafael's answer had code similar to Kotlin so I dug around and saw that these functions are part of the kt_dart library which basically gets the Kotlin standard library and ports it to Dart.
I come from a Kotlin background so I use this package often. If you use it, you can simply make the extension this much shorter:
extension KtListExtensions<T> on KtList<T> {
  KtList<T> get duplicates => toMutableList()..removeAll(toSet().toList());
}
just make sure to add kt_dart to your pubspec: kt_dart: ^0.8.0
Example
final list = ['apples', 'oranges', 'bananas', 'apples'].toImmutableList();
final duplicates = list.duplicates; // should be ['apples'] in the form of an ImmutableList<String>
void main() {
  List<String> country = [
    "Nepal",
    "Nepal",
    "USA",
    "Canada",
    "Canada",
    "China",
    "Russia",
  ];

  List DupCountry = [];
  country.forEach((dup) {
    if (DupCountry.contains(dup)) {
      print("Duplicate in List= ${dup}");
    } else {
      DupCountry.add(dup);
    }
  });
}
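For completeness, a compact null-safe variant of the same idea, relying on the fact that Set.add returns false when the element is already present:
void main() {
  final names = ['John', 'Jane', 'Mary', 'Mary'];
  final seen = <String>{};
  // add() returns false for repeats, so `where` keeps only the duplicates.
  final dupes = names.where((name) => !seen.add(name)).toSet();
  print(dupes); // {Mary}
}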

Swift - storing/appending multiple dictionaries into an array

I want to store multiple dictionaries into an array so that the final result looks like this:
(
    {
        id: 12,
        task: completed
    },
    {
        id: 15,
        task: error
    },
    {
        id: 17,
        task: pending
    }
)
I tried the code below but it does not give me what I want. Please can someone help me out? Thanks.
var FinalTaskData = [[String: AnyObject]]()
for i in 0..<taskObj.count {
    let dict = ["id": taskObj[i].id!, "task": taskObj[i].task!] as [String: AnyObject]
    FinalTaskData.append(dict)
}
And this gives me the output of
(
    {
        id = 190;
    },
    {
        task = "Task To Be Edited";
    },
    {
        id = 191;
    },
    {
        task = "Also To Be Edited";
    }
)
Which is not what I want. Thanks
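For what it's worth, the loop as posted does append one dictionary containing both keys, so output like the above usually means the id and task values were appended as separate dictionaries somewhere. A minimal sketch (assuming taskObj elements expose optional id and task properties) that builds each dictionary as a single unit:
var finalTaskData = [[String: Any]]()
for task in taskObj {
    // Build ONE dictionary holding both keys, then append it as a unit,
    // so each array element is { id: ..., task: ... }
    let entry: [String: Any] = ["id": task.id ?? 0, "task": task.task ?? ""]
    finalTaskData.append(entry)
}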

Using Elastic Search Geo Functionality To Find Most Common Locations?

I have a geojson file containing a list of locations each with a longitude, latitude and timestamp. Note the longitudes and latitudes are multiplied by 10000000.
{
  "locations" : [ {
    "timestampMs" : "1461820561530",
    "latitudeE7" : -378107308,
    "longitudeE7" : 1449654070,
    "accuracy" : 35,
    "junk_i_want_to_save_but_ignore" : [ { .. } ]
  }, {
    "timestampMs" : "1461820455813",
    "latitudeE7" : -378107279,
    "longitudeE7" : 1449673809,
    "accuracy" : 33
  }, {
    "timestampMs" : "1461820281089",
    "latitudeE7" : -378105184,
    "longitudeE7" : 1449254023,
    "accuracy" : 35
  }, {
    "timestampMs" : "1461820155814",
    "latitudeE7" : -378177434,
    "longitudeE7" : 1429653949,
    "accuracy" : 34
  }
  ..
Many of these locations will be the same physical location (e.g. the user's home), but obviously the longitudes and latitudes may not be exactly the same.
I would like to use Elasticsearch and its geo functionality to produce a ranked list of the most common locations, where locations are deemed to be the same if they are within, say, 100m of each other.
For each common location I'd also like the list of all timestamps the user was at that location, if possible!
I'd very much appreciate a sample query to get me started!
Many thanks in advance.
In order to make it work you need to modify your mapping like this:
PUT /locations
{
  "mappings": {
    "location": {
      "properties": {
        "location": {
          "type": "geo_point"
        },
        "timestampMs": {
          "type": "long"
        },
        "accuracy": {
          "type": "long"
        }
      }
    }
  }
}
Then, when you index your documents, you need to divide the latitude and longitude by 10000000, and index like this:
PUT /locations/location/1
{
  "timestampMs": "1461820561530",
  "location": {
    "lat": -37.8107308,
    "lon": 144.9654070
  },
  "accuracy": 35
}
Finally, your search query below (note: geohash_grid precision 6 cells are roughly 1.2 km × 0.6 km; precision 7, at roughly 150 m, is closer to your 100 m threshold)...
POST /locations/location/_search
{
  "aggregations": {
    "zoomedInView": {
      "filter": {
        "geo_bounding_box": {
          "location": {
            "top_left": "-37, 144",
            "bottom_right": "-38, 145"
          }
        }
      },
      "aggregations": {
        "zoom1": {
          "geohash_grid": {
            "field": "location",
            "precision": 6
          },
          "aggs": {
            "ts": {
              "date_histogram": {
                "field": "timestampMs",
                "interval": "15m",
                "format": "EEE yyyy-MM-dd HH:mm"
              }
            }
          }
        }
      }
    }
  }
}
...will yield a result along the following lines (the bucket key and timestamp buckets will reflect your actual data):
{
  "aggregations": {
    "zoomedInView": {
      "doc_count": 1,
      "zoom1": {
        "buckets": [
          {
            "key": "k362cu",
            "doc_count": 1,
            "ts": {
              "buckets": [
                {
                  "key_as_string": "Thu 2016-04-28 05:15",
                  "key": 1461820500000,
                  "doc_count": 1
                }
              ]
            }
          }
        ]
      }
    }
  }
}
UPDATE
According to our discussion, here is a solution that could work for you. Using Logstash, you can call your API and retrieve the big JSON document (using the http_poller input), extract/transform all locations and sink them to Elasticsearch (with the elasticsearch output) very easily.
Here is how it goes in order to format each event as depicted in my initial answer.
Using http_poller you can retrieve the JSON locations (note that I've set the polling interval to 1 day, but you can change that to some other value, or simply run Logstash manually each time you want to retrieve the locations)
Then we split the locations array into individual events
Then we divide the latitude/longitude fields by 10,000,000 to get proper coordinates
We also need to clean it up a bit by moving and removing some fields
Finally, we just send each event to Elasticsearch
Logstash configuration locations.conf:
input {
  http_poller {
    urls => {
      get_locations => {
        method => get
        url => "http://your_api.com/locations.json"
        headers => {
          Accept => "application/json"
        }
      }
    }
    request_timeout => 60
    interval => 86400
    codec => "json"
  }
}
filter {
  split {
    field => "locations"
  }
  ruby {
    code => "
      event['location'] = {
        'lat' => event['locations']['latitudeE7'] / 10000000.0,
        'lon' => event['locations']['longitudeE7'] / 10000000.0
      }
    "
  }
  mutate {
    add_field => {
      "timestampMs" => "%{[locations][timestampMs]}"
      "accuracy" => "%{[locations][accuracy]}"
      "junk_i_want_to_save_but_ignore" => "%{[locations][junk_i_want_to_save_but_ignore]}"
    }
    remove_field => [
      "locations", "@timestamp", "@version"
    ]
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "locations"
    document_type => "location"
  }
}
You can then run with the following command:
bin/logstash -f locations.conf
When that has run, you can launch your search query and you should get what you expect.

Implement auto-complete feature using MongoDB search

I have a MongoDB collection of documents of the form
{
    "id": 42,
    "title": "candy can",
    "description": "canada candy canteen",
    "brand": "cannister candid",
    "manufacturer": "candle canvas"
}
I need to implement an auto-complete feature based on the input search term, matching in all fields except id. For example, if the input term is can, then I should return all matching words in the document, as
{ "hints": ["candy", "can", "canada", "canteen", ...] }
I looked at this question but it didn't help. I also tried searching how to do regex search in multiple fields and extract matching tokens, or extracting matching tokens in a MongoDB text search but couldn't find any help.
tl;dr
There is no easy solution for what you want, since normal queries can't modify the fields they return. There is a solution (using the below mapReduce inline instead of doing an output to a collection), but except for very small databases, it is not possible to do this in real time.
The problem
As written, a normal query can't really modify the fields it returns. But there are other problems. If you want to do a regex search in halfway decent time, you would have to index all fields, which would need a disproportionate amount of RAM for that feature. If you didn't index all fields, a regex search would cause a collection scan, which means that every document would have to be loaded from disk, which would take too much time for autocompletion to be convenient. Furthermore, multiple simultaneous users requesting autocompletion would create considerable load on the backend.
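For reference, the kind of multi-field regex query warned about above would look like this (field names taken from the example document); without an index on every one of these fields, it triggers a full collection scan:
db.yourCollection.find({
    $or: [
        { title: /can/ },
        { description: /can/ },
        { brand: /can/ },
        { manufacturer: /can/ }
    ]
})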
The solution
The problem is quite similar to one I have already answered: we need to extract every word out of multiple fields, remove the stop words, and save the remaining words, together with a link to the respective document(s) each word was found in, into a collection. Then, to get an autocompletion list, we simply query the indexed word list.
Step 1: Use a map/reduce job to extract the words
db.yourCollection.mapReduce(
    // Map function
    function() {
        // We need to save this in a local var as per scoping problems
        var document = this;
        // You need to expand this according to your needs
        var stopwords = ["the", "this", "and", "or"];
        for (var prop in document) {
            // We are only interested in strings and explicitly not in _id
            if (prop === "_id" || typeof document[prop] !== 'string') {
                continue
            }
            (document[prop]).split(" ").forEach(
                function(word) {
                    // You might want to adjust this to your needs
                    var cleaned = word.replace(/[;,.]/g, "")
                    if (
                        // We neither want stopwords...
                        stopwords.indexOf(cleaned) > -1 ||
                        // ...nor strings which would evaluate to numbers
                        !(isNaN(parseInt(cleaned))) ||
                        !(isNaN(parseFloat(cleaned)))
                    ) {
                        return
                    }
                    emit(cleaned, document._id)
                }
            )
        }
    },
    // Reduce function
    function(k, v) {
        // Kind of ugly, but works.
        // Improvements more than welcome!
        var values = { 'documents': [] };
        v.forEach(
            function(vs) {
                if (values.documents.indexOf(vs) > -1) {
                    return
                }
                values.documents.push(vs)
            }
        )
        return values
    },
    {
        // We need this for two reasons...
        finalize:
            function(key, reducedValue) {
                // First, we ensure that each resulting document
                // has the documents field in order to unify access
                var finalValue = { documents: [] }
                // Second, we ensure that each document is unique in said field
                if (reducedValue.documents) {
                    // We filter the existing documents array
                    finalValue.documents = reducedValue.documents.filter(
                        function(item, pos, self) {
                            // The default return value
                            var loc = -1;
                            for (var i = 0; i < self.length; i++) {
                                // We have to do it this way since indexOf only works with primitives
                                if (self[i].valueOf() === item.valueOf()) {
                                    // We have found the value of the current item...
                                    loc = i;
                                    // ...so we are done for now
                                    break
                                }
                            }
                            // If the location we found equals the position of item, they are equal
                            // If it isn't equal, we have a duplicate
                            return loc === pos;
                        }
                    );
                } else {
                    finalValue.documents.push(reducedValue)
                }
                // We have sanitized our data, now we can return it
                return finalValue
            },
        // Our results are written to a collection called "words"
        out: "words"
    }
)
Running this mapReduce against your example would result in db.words looking like this:
{ "_id" : "can", "value" : { "documents" : [ ObjectId("553e435f20e6afc4b8aa0efb") ] } }
{ "_id" : "canada", "value" : { "documents" : [ ObjectId("553e435f20e6afc4b8aa0efb") ] } }
{ "_id" : "candid", "value" : { "documents" : [ ObjectId("553e435f20e6afc4b8aa0efb") ] } }
{ "_id" : "candle", "value" : { "documents" : [ ObjectId("553e435f20e6afc4b8aa0efb") ] } }
{ "_id" : "candy", "value" : { "documents" : [ ObjectId("553e435f20e6afc4b8aa0efb") ] } }
{ "_id" : "cannister", "value" : { "documents" : [ ObjectId("553e435f20e6afc4b8aa0efb") ] } }
{ "_id" : "canteen", "value" : { "documents" : [ ObjectId("553e435f20e6afc4b8aa0efb") ] } }
{ "_id" : "canvas", "value" : { "documents" : [ ObjectId("553e435f20e6afc4b8aa0efb") ] } }
Note that the individual words are the _id of the documents. The _id field is indexed automatically by MongoDB. Since MongoDB tries to keep indices in RAM, we can use a few tricks to both speed up autocompletion and reduce the load on the server.
Step 2: Query for autocompletion
For autocompletion, we only need the words, without the links to the documents.
Since the words are indexed, we use a covered query – a query answered only from the index, which usually resides in RAM.
To stick with your example, we would use the following query to get the candidates for autocompletion:
db.words.find({_id:/^can/},{_id:1})
which gives us the result
{ "_id" : "can" }
{ "_id" : "canada" }
{ "_id" : "candid" }
{ "_id" : "candle" }
{ "_id" : "candy" }
{ "_id" : "cannister" }
{ "_id" : "canteen" }
{ "_id" : "canvas" }
Using the .explain() method, we can verify that this query uses only the index.
{
    "cursor" : "BtreeCursor _id_",
    "isMultiKey" : false,
    "n" : 8,
    "nscannedObjects" : 0,
    "nscanned" : 8,
    "nscannedObjectsAllPlans" : 0,
    "nscannedAllPlans" : 8,
    "scanAndOrder" : false,
    "indexOnly" : true,
    "nYields" : 0,
    "nChunkSkips" : 0,
    "millis" : 0,
    "indexBounds" : {
        "_id" : [
            [
                "can",
                "cao"
            ],
            [
                /^can/,
                /^can/
            ]
        ]
    },
    "server" : "32a63f87666f:27017",
    "filterSet" : false
}
Note the indexOnly:true field.
Step 3: Query the actual document
Although we have to do two queries to get the actual document, since we speed up the overall process, the user experience should be good enough.
Step 3.1: Get the document of the words collection
When the user selects a choice from the autocompletion, we have to query the complete document from the words collection in order to find the documents where the word chosen for autocompletion originated.
db.words.find({_id:"canteen"})
which would result in a document like this:
{ "_id" : "canteen", "value" : { "documents" : [ ObjectId("553e435f20e6afc4b8aa0efb") ] } }
Step 3.2: Get the actual document
With that document, we can now either show a page with search results or, like in this case, redirect to the actual document which you can get by:
db.yourCollection.find({_id:ObjectId("553e435f20e6afc4b8aa0efb")})
Notes
While this approach may seem complicated at first (well, the mapReduce is a bit), it is actually pretty easy conceptually. Basically, you are trading real-time results (which you won't have anyway unless you spend a lot of RAM) for speed. Imho, that's a good deal. In order to make the rather costly mapReduce phase more efficient, implementing Incremental mapReduce could be one approach; improving my admittedly hacked mapReduce might well be another.
Last but not least, this way is a rather ugly hack altogether. You might want to dig into elasticsearch or lucene. Those products imho are much, much more suited for what you want.
Thanks to @Markus' solution, I came up with something similar using aggregations instead, since map-reduce is flagged as deprecated for later versions.
const { MongoDBNamespace, Collection } = require('mongodb')

//.replace(/(\b(\w{1,3})\b(\W|$))/g,'').split(/\s+/).join(' ')
const routine = `function (text) {
    const stopwords = ['the', 'this', 'and', 'or', 'id']
    text = text.replace(new RegExp('\\b(' + stopwords.join('|') + ')\\b', 'g'), '')
    text = text.replace(/[;,.]/g, ' ').trim()
    return text.toLowerCase()
}`

// If the pipeline includes the $out operator, aggregate() returns an empty cursor.
const agg = [
    {
        $match: {
            a: true,
            d: false,
        },
    },
    {
        $project: {
            title: 1,
            desc: 1,
        },
    },
    {
        $replaceWith: {
            _id: '$_id',
            text: {
                $concat: ['$title', ' ', '$desc'],
            },
        },
    },
    {
        $addFields: {
            cleaned: {
                $function: {
                    body: routine,
                    args: ['$text'],
                    lang: 'js',
                },
            },
        },
    },
    {
        $replaceWith: {
            _id: '$_id',
            text: {
                $trim: {
                    input: '$cleaned',
                },
            },
        },
    },
    {
        $project: {
            words: {
                $split: ['$text', ' '],
            },
            qt: {
                $literal: 1,
            },
        },
    },
    {
        $unwind: {
            path: '$words',
            includeArrayIndex: 'id',
            preserveNullAndEmptyArrays: true,
        },
    },
    {
        $group: {
            _id: '$words',
            docs: {
                $addToSet: '$_id',
            },
            weight: {
                $sum: '$qt',
            },
        },
    },
    {
        $sort: {
            weight: -1,
        },
    },
    {
        $limit: 100,
    },
    {
        $out: {
            db: 'listings_db',
            coll: 'words',
        },
    },
]

// Closure for db instance only
/**
 *
 * @param { MongoDBNamespace } db
 */
module.exports = function (db) {
    /** @type { Collection } */
    let collection
    /**
     * Runs the aggregation pipeline
     * @return {Promise}
     */
    this.refreshKeywords = async function () {
        collection = db.collection('listing')
        // .toArray() to trigger the aggregation
        // it returns an empty cursor so it's fine
        return await collection.aggregate(agg).toArray()
    }
}
Only minimal changes should be needed to adapt this to your data.
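For illustration, here is a hypothetical usage sketch of the module above (the file path, connection string, and database name are assumptions):
const { MongoClient } = require('mongodb')
const Keywords = require('./keywords') // the module above

async function main() {
    const client = await MongoClient.connect('mongodb://localhost:27017')
    const keywords = new Keywords(client.db('listings_db'))
    await keywords.refreshKeywords() // rebuilds the "words" collection via $out
    await client.close()
}

main().catch(console.error)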