MongoDB: Insert does not modify a list of documents?

Adding a new value to a list with insert completes OK, but the document remains unmodified:
> graph = {graph:[
... {_id:1, links: [2,3,4]},
... {_id:2, links: [5,6]},
... {_id:3, links: [7]},
... {_id:4, links: [9,10]}
... ]}
{
"graph" : [
{
"_id" : 1,
"links" : [
2,
3,
4
]
},
{
"_id" : 2,
"links" : [
5,
6
]
},
{
"_id" : 3,
"links" : [
7
]
},
{
"_id" : 4,
"links" : [
9,
10
]
}
]
}
> db.test.insert(graph)
WriteResult({ "nInserted" : 1 })
> db.runCommand(
... {
... insert: "graph",
... documents: [ {_id:5, links: [1,8]} ]
... }
... )
{ "ok" : 1, "n" : 1 }
Yet querying after the insert does not show the newly inserted element:
> db.test.find()
{ "_id" : ObjectId("538c8586562938c6afce9924"), "graph" : [
{ "_id" : 1, "links" : [ 2, 3, 4 ] },
{ "_id" : 2, "links" : [ 5, 6 ] },
{ "_id" : 3, "links" : [ 7 ] },
{ "_id" : 4, "links" : [ 9, 10 ] } ] }
>
What's wrong?
Update
> db.test.find()
{ "_id" : ObjectId("538c8586562938c6afce9924"), "graph" : [ { "_id" : 1, "links" : [ 2, 3, 4 ] }, { "_id" : 2, "links" : [ 5, 6 ] }, { "_id" : 3, "links" : [ 7 ] }, { "_id" : 4, "links" : [ 9, 10 ] } ] }
>
db.test.update(
{_id : 538c8586562938c6afce9924},
{$push : {
graph : {{_id:5, links: [1,8]}}
}
}
);
2014-06-03T12:35:11.695+0400 SyntaxError: Unexpected token {

You have to update the collection; db.runCommand() is not really the right tool here. It is essentially meant for database-level tasks such as authentication, user management, role management, replication and so on. The full usability of db.runCommand() can be seen in the MongoDB docs here.
The simplest way is to use update on the collection. One reason this query may not work for you is not supplying the _id to MongoDB in the proper way.
_id : "ObjectId("538dcfaf8f00ec71aa055b15")" and _id : "538dcfaf8f00ec71aa055b15" are not the same as _id : ObjectId("538dcfaf8f00ec71aa055b15") to MongoDB. The first two are strings, while the last one is an ObjectId.
db.test.update(
    {_id : ObjectId("538dcfaf8f00ec71aa055b15")}, // query for finding the doc to update
    {$push : {
        graph : {_id:5, links: [1,8]}
    }}, // performing the update
    {} // optional parameters
);
But if you insist on using db.runCommand(), then to get the same thing done you will need to write it as follows:
db.runCommand({
    update: 'test',
    updates:
    [
        {
            q: {_id : ObjectId("538dcfaf8f00ec71aa055b15")},
            u: {$push : {'graph' : {_id:5, links: [1,8]}}}
        }
    ]
});
What you have actually done is insert a document into a new "graph" collection! Run show collections on your db and you will find that, as a result of your incorrect db.runCommand(...), you ended up creating a new collection.
Hope it helped.
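As an aside, what $push does to the stored document can be sketched in plain Python (an illustration of the semantics only, not a MongoDB client call; the helper name is made up):

```python
# The stored document, as created by db.test.insert(graph)
doc = {"graph": [
    {"_id": 1, "links": [2, 3, 4]},
    {"_id": 2, "links": [5, 6]},
    {"_id": 3, "links": [7]},
    {"_id": 4, "links": [9, 10]},
]}

def push(document, field, value):
    """Mimic MongoDB's $push: append one value to an array field."""
    document.setdefault(field, []).append(value)

push(doc, "graph", {"_id": 5, "links": [1, 8]})
print(len(doc["graph"]))  # 5
print(doc["graph"][-1])   # {'_id': 5, 'links': [1, 8]}
```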

Related

Join two querysets in Django ORM

I'm writing a view that returns this:
[
{
"field_id" : 1,
"stock" : [
{
"size" : "M",
"total" : 3
}
],
"reserved" : [
{
"size" : "M",
"total" : 1
}
]
},
{
"field_id" : 2,
"stock" : [
{
"size" : "M",
"total" : 2
},
{
"size" : "PP",
"total" : 2
}
],
"reserved" : [
{
"size" : "PP",
"total" : 1
},
{
"size" : "M",
"total" : 2
}
]
}
]
For this result, I used values and annotate (Django ORM):
reserved = Reserved.objects.all().values("size").annotate(total=Count("size")).order_by("total")
stock = Stock.objects.filter(amount=0).values('size').annotate(total=Count('size')).order_by('total')
It's OK for me, but I would like to put the reserved queryset inside stock, like this:
[
{
"field_id" : 1,
"stock" : [
{
"size" : "M",
"total" : 3,
"reserved": 1
}
],
},
{
"field_id" : 2,
"stock" : [
{
"size" : "M",
"total" : 2,
"reserved": 1
},
{
"size" : "PP",
"total" : 2,
"reserved": 0
}
],
}
]
Is it possible? Reserved and Stock don't have a relationship.
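Since Reserved and Stock have no relationship, one option is to merge the two querysets in memory. A rough sketch in plain Python, assuming both querysets have already been evaluated to lists of dicts (which is what .values().annotate() yields); the helper name is made up:

```python
def merge_reserved_into_stock(stock, reserved):
    """Merge reserved totals into stock entries, matching on 'size'.

    Both arguments are lists of dicts like {"size": "M", "total": 3},
    e.g. the result of .values("size").annotate(total=Count("size")).
    Sizes absent from `reserved` get a reserved count of 0.
    """
    reserved_by_size = {r["size"]: r["total"] for r in reserved}
    return [
        {**s, "reserved": reserved_by_size.get(s["size"], 0)}
        for s in stock
    ]

stock = [{"size": "M", "total": 2}, {"size": "PP", "total": 2}]
reserved = [{"size": "M", "total": 2}]
print(merge_reserved_into_stock(stock, reserved))
# [{'size': 'M', 'total': 2, 'reserved': 2}, {'size': 'PP', 'total': 2, 'reserved': 0}]
```

The grouping into per-field_id buckets would then happen around this merge, exactly as in the view that builds the current response.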

Kotlin - group by list of Maps

I have a fieldList variable.
val fieldList: List<MutableMap<String, String>>
// fieldList Data :
[ {
"field_id" : "1",
"section_id" : "1",
"section_name" : "section1",
"field_name" : "something_1"
}, {
"field_id" : "2",
"section_id" : "1",
"section_name" : "section1",
"field_name" : "something_2"
}, {
"field_id" : "3",
"section_id" : "2",
"section_name" : "section2",
"field_name" : "something_3"
}, {
"field_id" : "4",
"section_id" : "3",
"section_name" : "section3",
"field_name" : "something_4"
} ]
And I want to group by section_id.
The results should be as follows:
val result: List<MutableMap<String, Any>>
// result Data :
[
{
"section_id": "1",
"section_name": "section1",
"field": [
{
"id": "1",
"name": "something_1"
},
{
"id": "2",
"name": "something_2"
}
]
},
{
"section_id": "2",
"section_name": "section2",
"field": [
{
"id": "3",
"name": "something_3"
}
]
},
.
.
.
]
What is the most idiomatic way of doing this in Kotlin?
I have an ugly-looking working version in Java, but I am quite sure Kotlin has a nicer way of doing it...
it's just that I am not finding it so far!
Any ideas?
Thanks
Another way:
val newList = originalList.groupBy { it["section_id"] }.values
.map {
mapOf(
"section_id" to it[0]["section_id"]!!,
"section_name" to it[0]["section_name"]!!,
"field" to it.map { mapOf("id" to it["field_id"], "name" to it["field_name"]) }
)
}
Playground
Also, as broot mentioned, prefer using data classes instead of such maps.
Assuming we are guaranteed that the data is correct and we don't have to validate it, so:
all fields always exist,
section_name is always the same for a specific section_id.
This is how you can do this:
val result = fieldList.groupBy(
keySelector = { it["section_id"]!! to it["section_name"]!! },
valueTransform = {
mutableMapOf(
"id" to it["field_id"]!!,
"name" to it["field_name"]!!,
)
}
).map { (section, fields) ->
mutableMapOf(
"section_id" to section.first,
"section_name" to section.second,
"field" to fields
)
}
However, I suggest not using maps and lists, but proper data classes. Using a Map to store known properties and using Any to store either String or List is just very inconvenient to use and error-prone.

extract a value from a googlemaps JSON response

My JSON response from the Google Maps API gives:
%{ body: body} = HTTPoison.get! url
body = {
"geocoded_waypoints" : [{ ... },{ ... }],
"routes" : [{
"bounds" : { ...},
"copyrights" : "Map data ©2018 Google",
"legs" : [
{
"distance" : {
"text" : "189 km",
"value" : 188507
},
"duration" : {
"text" : "2 hours 14 mins",
"value" : 8044
},
"end_address" : "Juhan Liivi 2, 50409 Tartu, Estonia",
"end_location" : {
"lat" : 58.3785389,
"lng" : 26.7146963
},
"start_address" : "J. Sütiste tee 44, 13420 Tallinn, Estonia",
"start_location" : {
"lat" : 59.39577569999999,
"lng" : 24.6861104
},
"steps" : [
{ ... },
{ ... },
{ ... },
{ ... },
{
"distance" : {
"text" : "0.9 km",
"value" : 867
},
"duration" : {
"text" : "2 mins",
"value" : 104
},
"end_location" : {
"lat" : 59.4019886,
"lng" : 24.7108114
},
"html_instructions" : "XXXX",
"maneuver" : "turn-left",
"polyline" : {
"points" : "XXXX"
},
"start_location" : {
"lat" : 59.3943677,
"lng" : 24.708647
},
"travel_mode" : "DRIVING"
},
{ ... },
{ ... },
{ ... },
{ ... },
{ ... },
{ ... },
{ ... },
{ ... },
{ ... }
],
"traffic_speed_entry" : [],
"via_waypoint" : []
}
],
"overview_polyline" : { ... },
"summary" : "Tallinn–Tartu–Võru–Luhamaa/Route 2",
"warnings" : [],
"waypoint_order" : []
}
],
"status" : "OK"
}
In red (see the attached image) is what I'm getting with the command below, using the Regex.named_captures module:
%{"duration_text" => duration_text, "duration_value" => duration_value} = Regex.named_captures ~r/duration\D+(?<duration_text>\d+ mins)\D+(?<duration_value>\d+)/, body
In blue (see the attached image) is what I want to extract from body.
body is the JSON response of my Google Maps API URL in a browser.
Would you please assist and provide the regex?
Since http://www.elixre.uk/ is down, I can't find any site to help with that.
Thanks in advance
Don't use regexes on a JSON string. Instead, convert the JSON string to an Elixir map using Jason, Poison, etc., then use the keys in the map to look up the data you are interested in.
Here's an example:
json_map = Jason.decode!(get_json())
[first_route | _rest] = json_map["routes"]
[first_leg | _rest] = first_route["legs"]
distance = first_leg["distance"]
=> %{"text" => "189 km", "value" => 188507}
Similarly, you can get the other parts with:
duration = first_leg["duration"]
end_address = first_leg["end_address"]
...
...
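For comparison, the same key-based lookup works in any language with a standard JSON parser. A Python sketch, using a trimmed-down stand-in for the response (field names follow the question; most of the payload is omitted):

```python
import json

# Trimmed-down stand-in for the Google Maps Directions response
raw = """
{
  "routes": [
    {
      "legs": [
        {
          "distance": {"text": "189 km", "value": 188507},
          "duration": {"text": "2 hours 14 mins", "value": 8044}
        }
      ]
    }
  ],
  "status": "OK"
}
"""

data = json.loads(raw)
first_leg = data["routes"][0]["legs"][0]
print(first_leg["duration"]["text"])   # 2 hours 14 mins
print(first_leg["duration"]["value"])  # 8044
```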

using $regex in mongodb aggregation framework in $group

Consider the following example:
db.article.aggregate(
{ $group : {
_id : "$author",
docsPerAuthor : { $sum : 1 },
viewsPerAuthor : { $sum : "$pageViews" }
}}
);
This groups by the author field and computes two fields.
I have values for $author = FirstName_LastName.
Now instead of grouping by $author, I want to group by all authors who share the same LastName.
I tried $regex to group by all matching strings after the '_'
$author.match(/_[a-zA-Z0-9]+$/)
db.article.aggregate(
{ $group : {
_id : "$author".match(/_[a-zA-Z0-9]+$/),
docsPerAuthor : { $sum : 1 },
viewsPerAuthor : { $sum : "$pageViews" }
}}
);
also tried the following:
db.article.aggregate(
{ $group : {
_id : {$author: {$regex: /_[a-zA-Z0-9]+$/}},
docsPerAuthor : { $sum : 1 },
viewsPerAuthor : { $sum : "$pageViews" }
}}
);
Actually, there is no aggregation method that provides this kind of functionality, or at least I could not find a version which contains it. It will not work with $regex, I think: http://docs.mongodb.org/manual/reference/operator/regex/ is just for pattern matching in queries.
There is an improvement request in Jira: https://jira.mongodb.org/browse/SERVER-6773
It is in open, unresolved state.
BUT
on GitHub I found this discussion: https://github.com/mongodb/mongo/pull/336
And if you check this commit: https://github.com/nleite/mongo/commit/2dd175a5acda86aaad61f5eb9dab83ee19915709
it contains more or less exactly the method you'd like to have. I do not really get the state of this improvement: in 2.2.3 it is not working.
Use mapReduce: it is the general form of aggregation. Here is how to proceed in the mongo shell:
Define the map function
var mapFunction = function() {
    // group key: the "_LastName" suffix of the author field
    var key = this.author.match(/_[a-zA-Z0-9]+$/)[0];
    var value = {
        docsPerAuthor: 1,
        viewsPerAuthor: this.pageViews
    };
    emit( key, value );
};
and the reduce function
var reduceFunction = function(key, values) {
    var reducedObject = {
        docsPerAuthor: 0,
        viewsPerAuthor: 0
    };
    values.forEach( function(value) {
        reducedObject.docsPerAuthor += value.docsPerAuthor;
        reducedObject.viewsPerAuthor += value.viewsPerAuthor;
    });
    return reducedObject;
};
run mapReduce and save the result in map_reduce_result
>db.st.mapReduce(mapFunction, reduceFunction, {out:'map_reduce_result'})
query map_reduce_result to have the result
>db.map_reduce_result.find()
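The map/reduce computation above can be mimicked client-side in plain Python to see what it produces (a sketch with made-up sample documents; the keys keep the leading underscore, just like the JavaScript regex match):

```python
import re
from collections import defaultdict

docs = [
    {"author": "John_Smith", "pageViews": 10},
    {"author": "Jane_Smith", "pageViews": 5},
    {"author": "Ann_Jones", "pageViews": 7},
]

# map: emit (last-name key, partial counts); reduce: sum the partials
totals = defaultdict(lambda: {"docsPerAuthor": 0, "viewsPerAuthor": 0})
for d in docs:
    key = re.search(r"_[a-zA-Z0-9]+$", d["author"]).group(0)
    totals[key]["docsPerAuthor"] += 1
    totals[key]["viewsPerAuthor"] += d["pageViews"]

print(dict(totals))
# {'_Smith': {'docsPerAuthor': 2, 'viewsPerAuthor': 15},
#  '_Jones': {'docsPerAuthor': 1, 'viewsPerAuthor': 7}}
```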
A possible workaround with the aggregation framework consists in using $project to compute the author name. However, it is dirty, as you need to manually loop through the different first-name lengths:
Here, we compute the name field as the substring after the '_' character, trying each of its possible positions (this is why there is a chain of $cond), and falling back to returning the whole $author if the first name is too long:
http://mongotry.herokuapp.com/#?bookmarkId=52fb5f24a0378802003b4c68
[
{
"$project": {
"author": 1,
"pageViews": 1,
"name": {
"$cond": [
{
"$eq": [
{
"$substr": [
"$author",
0,
1
]
},
"_"
]
},
{
"$substr": [
"$author",
1,
999
]
},
{
"$cond": [
{
"$eq": [
{
"$substr": [
"$author",
1,
1
]
},
"_"
]
},
{
"$substr": [
"$author",
2,
999
]
},
{
"$cond": [
{
"$eq": [
{
"$substr": [
"$author",
2,
1
]
},
"_"
]
},
{
"$substr": [
"$author",
3,
999
]
},
{
"$cond": [
{
"$eq": [
{
"$substr": [
"$author",
3,
1
]
},
"_"
]
},
{
"$substr": [
"$author",
4,
999
]
},
{
"$cond": [
{
"$eq": [
{
"$substr": [
"$author",
4,
1
]
},
"_"
]
},
{
"$substr": [
"$author",
5,
999
]
},
"$author"
]
}
]
}
]
}
]
}
]
}
}
},
{
"$group": {
"_id": "$name",
"viewsPerAuthor": {
"$sum": "$pageViews"
}
}
}
]
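The position-probing logic of that $cond chain can be sketched in Python to make the fallback behaviour concrete (the function name is made up; it mirrors probing each of the first five characters for '_'):

```python
def name_after_underscore(author: str, max_probe: int = 5) -> str:
    """Mirror the chained $cond/$substr stages: check positions
    0..max_probe-1 for '_' and return the remainder; otherwise
    fall back to the whole string, as the pipeline does."""
    for i in range(max_probe):
        if i < len(author) and author[i] == "_":
            return author[i + 1:]
    return author

print(name_after_underscore("John_Smith"))           # Smith
print(name_after_underscore("Verylongfirstname_X"))  # falls back: underscore past position 4
```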
Combining $group with $addFields and $arrayElemAt works for me (version ≥ 3.4).
Say we have following data in collection faculty, database school:
{ "_id" : ObjectId("5ed5a59b1febc4c796a88e80"), "name" : "Harry_Potter" }
{ "_id" : ObjectId("5ed5a60e1febc4c796a88e81"), "name" : "Edison_Potter" }
{ "_id" : ObjectId("5ed5a6231febc4c796a88e82"), "name" : "Jack_Potter" }
{ "_id" : ObjectId("5ed5a62f1febc4c796a88e83"), "name" : "Alice_Walker" }
{ "_id" : ObjectId("5ed5a65f1febc4c796a88e84"), "name" : "Bob_Walker" }
{ "_id" : ObjectId("5ed5a6731febc4c796a88e85"), "name" : "Will_Smith" }
The following groups the documents by last name:
db.faculty.aggregate([
{
$addFields: {
lastName: {
$arrayElemAt: [ { $split: ["$name", "_"] }, 1 ]
}
}
},
{
$group: {
_id: "$lastName",
count: {$sum: 1}
}
}
])
The result is:
{ "_id" : "Potter", "count" : 3 }
{ "_id" : "Walker", "count" : 2 }
{ "_id" : "Smith", "count" : 1 }
The trick I used is to add a field named lastName. Given the format of the name field, it can be split into an array on _: the last name is at index 1 and the first name at index 0.
Reference
$addFields (aggregation)
$arrayElemAt (aggregation)
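The same split-and-group computation can be checked client-side with a quick Python sketch (names taken from the sample collection above):

```python
from collections import Counter

names = ["Harry_Potter", "Edison_Potter", "Jack_Potter",
         "Alice_Walker", "Bob_Walker", "Will_Smith"]

# Mirrors {$split: ["$name", "_"]} followed by {$arrayElemAt: [..., 1]},
# then the {$group: {_id: "$lastName", count: {$sum: 1}}} stage
counts = Counter(name.split("_")[1] for name in names)
print(counts)  # Counter({'Potter': 3, 'Walker': 2, 'Smith': 1})
```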

Mongodb complex regex queries

I have a collection of cities like this:
{ "name": "something","population":2121}
There are thousands of documents like this in one collection.
Now, I have created an index like this:
$coll->ensureIndex(array("name" => 1, "population" => -1),
array("background" => true));
Now I want to query like this:
$cities = $coll->find(array("name" => array('$regex' => "^$name")))
->limit(30)
->sort(array("name" => 1, "population" => -1));
But this returns cities in ascending order of population, whereas I want them in descending order, i.e. highest population first.
Any idea?
EDIT: I have created individual indexes on name and population. Following is the output of db.city_info.getIndexes() and db.city_info.find({ "name": { "$regex": "^Ban" } }).sort({ "population": -1 }).explain() respectively:
[
{
"v" : 1,
"key" : {
"_id" : 1
},
"ns" : "city_database.city_info",
"name" : "_id_"
},
{
"v" : 1,
"key" : {
"name" : 1
},
"ns" : "city_database.city_info",
"background" : 1,
"name" : "ascii_1"
},
{
"v" : 1,
"key" : {
"population" : 1
},
"ns" : "city_database.city_info",
"background" : 1,
"name" : "population_1"
}
]
and
{
"cursor" : "BtreeCursor ascii_1 multi",
"nscanned" : 70739,
"nscannedObjects" : 70738,
"n" : 70738,
"scanAndOrder" : true,
"millis" : 17461,
"nYields" : 0,
"nChunkSkips" : 0,
"isMultiKey" : false,
"indexOnly" : false,
"indexBounds" : {
"name" : [
[
"Ban",
"Bao"
],
[
/^Ban/,
/^Ban/
]
]
}
}
Just look at the time taken by the query :-(
If you want the results to be in descending order of population (greatest to least) then remove the sort on name within the query.
my is too short has the right idea
When you sort on name and then on descending population (what you have now), it sorts by name first, which is most likely near-unique because we are talking about cities, and only then by population.
Also, make sure you have an index on population:
db.cities.ensureIndex({population: 1})
Direction doesn't matter when the index is on one field.
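Why the compound sort behaves that way can be seen with a plain Python sort (illustration only; the city figures are sample data):

```python
cities = [
    {"name": "Bangalore", "population": 8_443_675},
    {"name": "Bangkok", "population": 8_305_218},
    {"name": "Bandung", "population": 2_444_160},
]

# Sorting by (name asc, population desc): name is near-unique, so the
# population key almost never gets a chance to influence the order.
by_name_then_pop = sorted(cities, key=lambda c: (c["name"], -c["population"]))
print([c["name"] for c in by_name_then_pop])  # alphabetical, not by population

# Sorting by population alone gives the intended order.
by_pop = sorted(cities, key=lambda c: -c["population"])
print([c["name"] for c in by_pop])  # ['Bangalore', 'Bangkok', 'Bandung']
```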
Update (sample of similar index, query and explain):
> db.test.insert({name: "New York", population: 5000})
> db.test.insert({name: "Longdon", population: 7000})
> db.test.ensureIndex({name: 1})
> db.test.find({name: {"$regex": "^New"}}).sort({population: -1})
{ "_id" : ObjectId("4f0ff70072999b69b616d2b6"), "name" : "New York", "population" : 5000 }
> db.test.find({name: {"$regex": "^New"}}).sort({population: -1}).explain()
{
"cursor" : "BtreeCursor name_1 multi",
"nscanned" : 1,
"nscannedObjects" : 1,
"n" : 1,
"scanAndOrder" : true,
"millis" : 1,
"nYields" : 0,
"nChunkSkips" : 0,
"isMultiKey" : false,
"indexOnly" : false,
"indexBounds" : {
"name" : [
[
"New",
"Nex"
],
[
/^New/,
/^New/
]
]
}
}