I'm having some problems with Boost JSON in C++

I see that there are a lot of questions very similar to mine, but I don't see any solutions that fit my problem.
I am trying to create a JSON file with the Boost library, with the structure below:
{
    "event_date": "2018-06-11T09:35:48.867Z",
    "event_log": "2018-06-11 09:35:43,253 - recycler [TRACE]: Running recycler::WITHDRAW",
    "cassettes": [
        {
            "value": "0",
            "currency": "BRL",
            "CDMType": "WFS_CDM_TYPEREJECTCASSETTE",
            "lppPhysical": [
                {
                    "positionName": "BIN1A",
                    "count": "3"
                }
            ]
        },
        {.....},{.....}
    ]
}
Below is the code that I have now:
boost::property_tree::ptree children, children2, child, child1, child2, child3, child4, child5, cassettes;
child1.put("value", "cash_unit->ulValues");
child2.put("currency", "std::string(cash_unit->cCurrencyID).substr(0, 3)");
child3.put("CDMType", "cash_unit->usType");
child4.put("lppPhysical.positionName", "ph_unit->lpPhysicalPositionName");
child5.put("lppPhysical.count", "cash_unit->ulCount");
cassettes.put("event_date", "2018-06-11T09:35:48.867Z");
cassettes.put("event_log", "2018-06-11 09:35:43,253 - recycler [TRACE]: Running recycler::WITHDRAW");
children.push_back(std::make_pair("", child1));
children.push_back(std::make_pair("", child2));
children.push_back(std::make_pair("", child3));
children2.push_back(std::make_pair("", child4));
children2.push_back(std::make_pair("", child5));
cassettes.add_child("cassettes", children);
write_json("C:\\Temp\\test.json", cassettes);`
Summarizing, I'm having difficulty putting an array of objects inside another array of objects.

I finally found a solution for my case; it's pretty simple, but it was hard to find since I am not too familiar with this library.
//LppPhysical insertion
lppPhysicalInfo.put("positionName", ph_unit->lpPhysicalPositionName);
lppPhysicalInfo.put("count", cash_unit->ulCount);
lppPhysical.push_back(std::make_pair("", lppPhysicalInfo));
//Cassettes insertions
cassettesInfo.put("value", cash_unit->ulValues);
cassettesInfo.put("currency", std::string(cash_unit->cCurrencyID).substr(0, 3));
cassettesInfo.put("CDMType", cash_unit->usType);
cassettesInfo.add_child("lppPhysical", lppPhysical.get_child(""));
cassettes.push_back(std::make_pair("", cassettesInfo));
//External information insertion
arquivo.put("event_date", "DateValue");
arquivo.put("event_log", "LogValue");
arquivo.add_child("cassettes", cassettes);
//Json creator
write_json("C:\\Temp\\assassino.json", arquivo);
For lppPhysical I just make a pair with all of its content, and in the cassettes insertion I add lppPhysical as a child of cassettes, and that's it. Now lppPhysical is an array of objects inside cassettes, which is also an array of objects.
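For reference, here is a minimal, self-contained sketch of the same approach with hard-coded placeholder values in place of the WFS structures (the file name and values are only examples, not the asker's real data):
#include <boost/property_tree/ptree.hpp>
#include <boost/property_tree/json_parser.hpp>
#include <utility>

int main()
{
    using boost::property_tree::ptree;

    ptree lppPhysicalInfo, lppPhysical, cassettesInfo, cassettes, arquivo;

    // Build one element of the inner "lppPhysical" array.
    lppPhysicalInfo.put("positionName", "BIN1A");
    lppPhysicalInfo.put("count", "3");
    lppPhysical.push_back(std::make_pair("", lppPhysicalInfo));

    // Build one element of the outer "cassettes" array and attach
    // the inner array to it as a child.
    cassettesInfo.put("value", "0");
    cassettesInfo.put("currency", "BRL");
    cassettesInfo.put("CDMType", "WFS_CDM_TYPEREJECTCASSETTE");
    cassettesInfo.add_child("lppPhysical", lppPhysical);
    cassettes.push_back(std::make_pair("", cassettesInfo));

    // Top-level object with the two scalar fields and the array.
    arquivo.put("event_date", "2018-06-11T09:35:48.867Z");
    arquivo.put("event_log", "recycler [TRACE]: Running recycler::WITHDRAW");
    arquivo.add_child("cassettes", cassettes);

    boost::property_tree::write_json("test.json", arquivo);
    return 0;
}
Note that Boost.PropertyTree's write_json emits every value as a quoted string, which is fine here since the target structure uses string values anyway.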

Related

How to return json string of arrays using rapidjson

I have a JSON file which looks like this:
{
    "ActivityId": "CB8FA1DA-DCB4-40B3-9D12-2786BD89B4D4",
    "AdditionalParams": {
    },
    "Extensions": [
        {
            "Id": "1234",
            "IsEnabled": false,
            "Name": "Name1"
        },
        {
            "Id": "4567",
            "IsEnabled": false,
            "Name": "Name2"
        },
        {
            "Id": "8910",
            "IsEnabled": true,
            "Name": "Name3"
        }
    ]
}
I see a lot of code online which tries to get the IsEnabled and Name fields (as an example). However, I am trying to use rapidjson to print out the array of extensions as is.
Here is the code that I have tried
Document document;
document.Parse(json);
if (document.HasMember(L"Extensions")) {
eventPayload = document[L"Extensions"].GetString();
}
document[L"Extensions"] is not a string, it's an array, so you will have to first getArray, then iterate through it with a JSONIterator and then get the value of IsEnabled.
Also, you don't have to use L"", since they are normal strings and not wide strings.
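As a rough, untested sketch (assuming the document was parsed from narrow char strings), both serializing the whole array back to a string and reading the individual fields could look like this:
#include <string>
#include "rapidjson/document.h"
#include "rapidjson/stringbuffer.h"
#include "rapidjson/writer.h"

using namespace rapidjson;

// document has already been parsed, e.g. document.Parse(json);
std::string eventPayload;
if (document.HasMember("Extensions") && document["Extensions"].IsArray()) {
    // Option 1: write the whole "Extensions" array back out as a JSON string.
    StringBuffer buffer;
    Writer<StringBuffer> writer(buffer);
    document["Extensions"].Accept(writer);
    eventPayload = buffer.GetString();

    // Option 2: iterate the array and read individual members.
    for (const auto& ext : document["Extensions"].GetArray()) {
        bool enabled = ext["IsEnabled"].GetBool();
        const char* name = ext["Name"].GetString();
        // ... use enabled / name ...
    }
}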

type '_InternalLinkedHashMap<dynamic, dynamic>' is not a subtype of type 'List<Map<String, dynamic>>' of 'function result'

I have looked at the following links before writing this question:
Unhandled Exception: InternalLinkedHashMap<String, dynamic>' is not a subtype of type 'List<dynamic>
Dart - how _InternalLinkedHashMap<String, dynamic> convert to Map<String, dynamic>?
https://medium.com/codespace69/flutter-json-decode-type-internallinkedhashmap-dynamic-dynamic-is-not-a-subtype-of-type-9d6b3e982b59
The code:
List data = [
{
"title" : "Particulate matter",
"desc" : "The term used to describe a mixture of solid particles and liquid droplets found in the air",
"amt" : 500,
"diseases" : "Particulate matter is responsible for asthma in many people. Also, a topic dermatitis, allergic rhinitisare diseases that can be caused by this",
"precaution" : "Switching to cleaner appliances and reducing the amount of smoking will surely ensure less exposureof particulate matter in the environment",
"image" : 'https://images.pexels.com/photos/3668372/pexels-photo-3668372.jpeg?auto=compress&cs=tinysrgb&dpr=2&h=750&w=1260',
"color" : Color(0xFF6E7E5D).withOpacity(.3)
},
// loads of data in this structure
];
// the line of error
Map indexedData = Map<String, dynamic>.from(data[index]);
I simply don't know why the error exists, so please help me out. Thank you!
EDIT: I can change the data to some extent if that helps solve the problem.
List data = [
{
"title": "Particulate matter",
"desc":
"The term used to describe a mixture of solid particles and liquid droplets found in the air",
"amt": 500,
"diseases":
"Particulate matter is responsible for asthma in many people. Also, a topic dermatitis, allergic rhinitisare diseases that can be caused by this",
"precaution":
"Switching to cleaner appliances and reducing the amount of smoking will surely ensure less exposureof particulate matter in the environment",
"image":
'https://images.pexels.com/photos/3668372/pexels-photo-3668372.jpeg?auto=compress&cs=tinysrgb&dpr=2&h=750&w=1260',
"color": Color(0xFF6E7E5D).withOpacity(.3),
},
];
Map<String, dynamic> indexedData = {};
data.forEach((mapElement) {
Map map = mapElement as Map;
map.forEach((key, value) {
indexedData[key] = value;
});
});
print(indexedData);
You should declare the type like this (without explicit type arguments, Map.from gives you a Map<dynamic, dynamic>, which is what triggers the subtype error):
Map<String, dynamic> indexedData = Map<String, dynamic>.from(data[index]);

mongodb upsert doesn't update if locked

I have an app written in C++ with 16 threads which reads the output of wireshark/tshark. Wireshark/tshark dissects pcap files, which are gsm_map signalling captures.
MongoDB is 2.6.7.
The structure I need for my documents is like this:
Note that "packet" is an array; it will become apparent why later.
For all who don't know TCAP: the TCAP layer is transaction-oriented, which means all packets include:
Transaction State: begin/continue/end
Origin transaction ID (otid)
Destination transaction ID (dtid)
So, for instance, you might see a transaction comprising 3 packets; looking at the TCAP layer, it would be roughly like this.
Below are two packets, one "begin" and one "end":
{
    "_id" : ObjectId("54ccd186b8ea19c89ee8f231"),
    "deleted" : "0",
    "packet" : {
        "datetime" : ISODate("2015-01-31T12:58:11.939Z"),
        "signallingType" : "M2PA",
        "opc" : "326",
        "dpc" : "6406",
        "transState" : "begin",
        "otid" : "M2PA0400435B",
        "dtid" : "",
        "sccpCalling" : "523332075100",
        "sccpCalled" : "523331466304",
        "operation" : "mo-forwardSM (46)",
        ...
    }
}
/* 1 */
{
    "_id" : ObjectId("54ccd1a1b8ea19c89ee8f7c5"),
    "deleted" : "0",
    "packet" : {
        "datetime" : ISODate("2015-01-31T12:58:16.788Z"),
        "signallingType" : "M2PA",
        "opc" : "6407",
        "dpc" : "326",
        "transState" : "end",
        "otid" : "",
        "dtid" : "M2PA0400435B",
        "sccpCalling" : "523331466304",
        "sccpCalled" : "523332075100",
        "operation" : "Not Found",
        ...
    }
}
Because of the network architecture, we're tracing at two (2) points, and the traffic is balanced between these two points. This means sometimes we see "continue"s or "end"s BEFORE a "begin", or a "continue" BEFORE a "begin" or "end". In short, transactions are not ordered.
Moreover, multiple endpoints are "talking" amongst themselves, and transaction IDs might get duplicated: 2 endpoints could be using the same tid as 2 other endpoints at the same time. This doesn't happen all the time, but it does happen.
Because of the latter, I also need to use the SCCP layer's "calling" and "called" Global Titles (which are like phone numbers).
Bear in mind that I don't know which way a given packet is going, so this is what I'm doing:
Whenever I get a new packet I must find out whether the transaction already exists in MongoDB; I'm using an upsert to do this.
I do this by searching for the current packet's otid or dtid in either the otid or dtid of existing packets.
If it does: push the new packet into the existing document.
If it doesn't: create a new document with the packet.
As an example, this is an upsert for an "end" which should find a "begin":
db.runCommand(
{
    update: "packets",
    updates:
    [
        {
            q:
            { $and:
                [
                    {
                        $or: [
                            { "packet.otid":
                                { $in: [ "M2PA042e3918" ] }
                            },
                            { "packet.dtid":
                                { $in: [ "M2PA042e3918" ] }
                            }
                        ]
                    },
                    {
                        $or: [
                            { "packet.sccpCalling":
                                { $in: [ "523332075151", "523331466305" ] }
                            },
                            { "packet.sccpCalled":
                                { $in: [ "523332075151", "523331466305" ] }
                            }
                        ]
                    }
                ]
            },
            u:
            {
                $setOnInsert: {
                    "unique-id": "422984b6-6688-4782-9ba1-852a9fc6db3b", deleted: "0"
                },
                $push: {
                    packet: {
                        datetime: new Date(1422371239182),
                        opc: "327", dpc: "6407",
                        transState: "end",
                        otid: "", dtid: "M2PA042e3918", sccpCalling: "523332075151", ... }
                }
            },
            upsert: true
        }
    ],
    writeConcern: { j: "1" }
}
)
Now, all of this works, until I put it in production.
It seems packets are coming way too fast, and I see lots of:
"ClientCursor::staticYield can't unlock b/c of recursive lock" warnings.
I read that we can ignore this warning, but I've found that my upserts DO NOT update the documents! It looks like there's a lock and MongoDB forgets about the update. If I change the upsert to a simple insert, no packets are lost.
I also read this is related to no indexes being used; I have the following index:
"3" : {
"v" : 1,
"key" : {
"packet.otid" : 1,
"packet.dtid" : 1,
"packet.sccpCalling" : 1,
"packet.sccpCalled" : 1
},
"name" : "packet.otid_1_packet.dtid_1_packet.sccpCalling_1_packet.sccpCalled_1",
"ns" : "tracer.packets"
So in conclusion:
1.- If this index is not correct, can someone please help me create the correct index?
2.- Is it normal that Mongo would NOT update a document if it finds a lock?
Thanks and regards!
David
Why are you storing all of the packets in an array? Normally in this kind of situation it's better to make each packet a document on its own; it's hard to say more without more information about your use case (or, perhaps, more knowledge of all these acronyms you're using :D). Your updates would become inserts and you would not need to do the update query. Instead, some other metadata on a packet would join related packets together so you could reconstruct a transaction or whatever you need to do.
More directly addressing your question, I would use an array field tids to store [otid, dtid] and an array field sccps to store [sccpCalling, sccpCalled], which would make your update query look like
{ "tids" : { "$in" : ["M2PA042e3918"] }, "sccps" : { "$in" : [ "523332075151", "523331466305" ] } }
and amenable to the index { "tids" : 1, "sccps" : 1 }.
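For illustration, here is an untested sketch of how the upsert might maintain those two array fields (the collection name, field names and values are only examples):
db.packets.update(
    // the lookup suggested above
    { "tids":  { "$in": [ "M2PA042e3918" ] },
      "sccps": { "$in": [ "523332075151", "523331466305" ] } },
    {
        "$setOnInsert": { "deleted": "0" },
        // keep the id/GT arrays up to date on both the insert and the update path
        "$addToSet": {
            "tids":  { "$each": [ "M2PA042e3918" ] },
            "sccps": { "$each": [ "523332075151", "523331466305" ] }
        },
        // append the new packet, as in the original design
        "$push": { "packet": { "transState": "end", "dtid": "M2PA042e3918" /* ... */ } }
    },
    { "upsert": true }
)
Because the $in clauses in the query do not seed the new document on an upsert, the $addToSet is what populates tids/sccps when the document is first created, and it keeps them current on later updates.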

Using Dojo Grid with REST (tastypie)

I am experimenting with Dojo, using a DataGrid/JsonRestStore against a REST-service implemented using Django/tastypie.
It seems that the JsonRestStore expects the data to arrive as a pure array, whilst tastypie returns the dataset within a structure containing "meta" and "objects".
{
"meta": {"limit": 20, "next": null, "offset": 0, "previous": null, "total_count": 1},
"objects": [{...}]
}
So, what I need is to somehow attach to the "objects" part.
What is the most sensible way to achieve this?
Oyvind
Untested, but you might try creating a custom store that inherits from JsonRestStore and overrides the internal _processResults method. It's a two-liner in the Dojo 1.7 code base, so you can implement your own behavior quite simply.
_processResults: function(results, deferred){
var count = results.objects.length;
return {totalCount: deferred.fullLength || (deferred.request.count == count ? (deferred.request.start || 0) + count * 2 : count), items: results.objects};
}
See lines 414-417 of the dojox/data/JsonRestStore.js for reference.
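A rough, untested sketch of what that subclass might look like (the class name, target URL and the use of tastypie's meta.total_count instead of the original heuristic are my own assumptions):
dojo.require("dojox.data.JsonRestStore");

dojo.declare("my.TastypieStore", dojox.data.JsonRestStore, {
    _processResults: function(results, deferred){
        // tastypie wraps the rows in "objects" and the paging info in "meta"
        return {
            totalCount: results.meta ? results.meta.total_count : results.objects.length,
            items: results.objects
        };
    }
});

var store = new my.TastypieStore({ target: "/api/v1/myresource/" });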
I don't know whether this will be helpful for you or not: http://jayapal-d.blogspot.in/2009/08/dojo-datagrid-with-editable-cells-in.html

Mongodb - regex match of keys for subdocuments

I have some documents saved in a collection (called urls) that look like this:
{
    payload: {
        url_google.com: {
            url: 'google.com',
            text: 'search'
        }
    }
},
{
    payload: {
        url_t.co: {
            url: 't.co',
            text: 'url shortener'
        }
    }
},
{
    payload: {
        url_facebook.com: {
            url: 'facebook.com',
            text: 'social network'
        }
    }
}
Using the mongo CLI, is it possible to look for subdocuments of payload that match /^url_/? And, if that's possible, would it also be possible to query on the match's subdocuments (for example, make sure text exists)?
I was thinking something like this:
db.urls.find({"payload":{"$regex":/^url_/}}).count();
But that's returning 0 results.
Any help or suggestions would be great.
Thanks,
Matt
It's not possible to query against document keys in this way. You can search for exact matches using $exists, but you cannot find key names that match a pattern.
I assume (perhaps incorrectly) that you're trying to find documents which have a URL sub-document, and that not all documents will have this? Why not push that type information down a level, something like:
{
    payload: {
        type: "url",
        url: "Facebook.com",
        ...
    }
}
Then you could query like:
db.foo.find({"payload.type": "url", ...})
I would also be remiss if I did not note that you shouldn't use dots (.) in key names in MongoDB. In some cases it's possible to create documents like this, but it will cause great confusion as you attempt to query into embedded documents (where Mongo uses the dot as a "path separator", so to speak).
You can do it, but you need to use aggregation. An aggregation is a pipeline where each stage is applied to each document, and you have a wide range of stages to perform various tasks.
I wrote an aggregation pipeline for this specific problem. If you don't need the count but the documents themselves, you might want to have a look at the $replaceRoot stage.
EDIT: This works only from Mongo v3.4.4 onwards (thanks for the hint #hwase0ng)
db.getCollection('urls').aggregate([
    {
        // creating a nested array with keys and values
        // of the payload subdocument.
        // all other fields of the original document
        // are removed and only the field arrayofkeyvalue persists
        "$project": {
            "arrayofkeyvalue": {
                "$objectToArray": "$$ROOT.payload"
            }
        }
    },
    {
        "$project": {
            // extract only the keys of the array
            "urlKeys": "$arrayofkeyvalue.k"
        }
    },
    {
        // merge all documents
        "$group": {
            // _id is mandatory and can be set
            // in our case to any value
            "_id": 1,
            // create one big (unfortunately double
            // nested) array with the keys
            "urls": {
                "$push": "$urlKeys"
            }
        }
    },
    {
        // "explode" the array and create
        // one document for each entry
        "$unwind": "$urls"
    },
    {
        // "explode" again as the array
        // is nested twice ...
        "$unwind": "$urls"
    },
    {
        // now "query" the documents
        // with your regex
        "$match": {
            "urls": {
                "$regex": /url_/
            }
        }
    },
    {
        // finally count the number of
        // matched documents
        "$count": "count"
    }
])