The string is: 2013-3-03 14:27:33 [main] INFO Main - Start
The corresponding regex is:
/^(?<time>\d{4}-\d{1,2}-\d{1,2} \d{1,2}:\d{1,2}:\d{1,2}) \[(?<thread>.*)\] (?<level>[^\s]+)(?<message>.*)/
which produces this result:
"time" : "2013-3-03 14:27:33",
"thread" : "main",
"level" : "INFO",
"message" : " Main - Start"
Editing the actual log file is out of my control, so I want to change the regex to add a constant field. The output I want is:
"time" : "2013-3-03 14:27:33",
"thread" : "main",
"level" : "INFO",
"message" : " Main - Start",
"app" : "abc"
What I have tried is something like the below, but is there a better way to achieve this?
/^(?<time>\d{4}-\d{1,2}-\d{1,2} \d{1,2}:\d{1,2}:\d{1,2}) \[(?<thread>.*)\] (?<level>[^\s]+)(?<message>.*)(?<app_abc>)/
I'm writing C++ code using curl and JsonCpp (https://github.com/open-source-parsers/jsoncpp). Json::parseFromStream returns the following data:
Funds: [
{
"id" : 1,
"jsonrpc" : "2.0",
"result" :
{
"availableToBetBalance" : 437.91000000000003,
"discountRate" : 4.0,
"exposure" : 0.0,
"exposureLimit" : -5000.0,
"pointsBalance" : 3135,
"retainedCommission" : 0.0,
"wallet" : "UK"
}
}
]
How do I extract availableToBetBalance? I've tried things like:
std::string d = json_data["result.availableToBetBalance"].asString();
and:
std::string d = json_data["result"]["availableToBetBalance"].asString();
The latter throws an exception: in Json::Value::resolveReference(key, end): requires objectValue
You're ignoring the array layer, signified by the outer [ and ] characters.
In this particular case, the data you're looking for is in the first (and only) element of an array, so:
std::string d = json_data[0]["result"]["availableToBetBalance"].asString();
// ^^^
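For completeness, here is a minimal, self-contained sketch of the parse-and-extract flow (the payload is just the sample data from the question; since availableToBetBalance is numeric, asDouble() is used rather than asString()):

#include <json/json.h>
#include <iostream>
#include <sstream>
#include <string>

int main() {
    // Sample payload, copied from the "Funds" data shown above.
    std::string raw = R"([
      {
        "id" : 1,
        "jsonrpc" : "2.0",
        "result" : { "availableToBetBalance" : 437.91000000000003, "wallet" : "UK" }
      }
    ])";

    Json::Value json_data;
    Json::CharReaderBuilder builder;
    std::string errs;
    std::istringstream in(raw);
    if (!Json::parseFromStream(builder, in, &json_data, &errs)) {
        std::cerr << "parse failed: " << errs << '\n';
        return 1;
    }

    // Index the array first, then descend into the nested "result" object.
    double balance = json_data[0]["result"]["availableToBetBalance"].asDouble();
    std::cout << balance << '\n';
    return 0;
}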
In the MongoDB database iPodia, collection OVS_DETAILS, I have a record like this:
{
"_id" : ObjectId("57ab14508b16c9557dcfa316"),
"dpid" : "202481588545212",
"mac" : "b8:27:eb:28:a6:bc",
"extranet_gateway_mac" : "f0:b4:29:52:8f:b6",
"extranet_gateway_ip" : "192.168.31.1",
"extranet_public_ip" : "59.66.214.24",
"extranet_private_ip" : "192.168.31.118",
"extranet_netmask" : "255.255.255.0",
"intranet_cidr_prefix" : 22020096,
"intranet_cidr_length" : 29,
"persist" : 0,
"timestamp" : 1470187766
}
auto cursor = db["OVS_DETAILS"].find(filter_builder.view());
for (auto&& doc : cursor) {
std::cout << bsoncxx::to_json(doc) << std::endl;
}
How can I read individual fields from the result? For example, how do I get the value for the key "persist"?
Once you have the bsoncxx::document::view from the cursor, you can access an element view with the [] operator (but keep in mind that's a linear search each time). Given an element view, you can check the type and extract a value of interest:
bsoncxx::document::element element = doc["mac"];
if(element.type() != bsoncxx::type::k_utf8) {
// Error
}
std::string mac = element.get_utf8().value.to_string();
For 'persist', you probably want to check the type for an integer type and then extract it with one of the get_XXX methods for integers. See the element documentation for more details.
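For example, a minimal sketch for "persist" (this assumes it was stored as a 32-bit integer; check for k_int64 or k_double instead if your writer stores it differently):

// Look up the "persist" element and extract it as an int32, if that is its type.
bsoncxx::document::element persist = doc["persist"];
if (persist && persist.type() == bsoncxx::type::k_int32) {
    std::int32_t value = persist.get_int32().value;
    std::cout << "persist = " << value << std::endl;
}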
I have an app written in C++ with 16 threads which reads from the output of wireshark/tshark. Wireshark/tshark dissects pcap files which are gsm_map signalling captures.
MongoDB is 2.6.7.
The structure I need for my documents is like this:
Note that "packet" is an array; it will become apparent why later.
For all who don't know TCAP: the TCAP layer is transaction-oriented, which means all packets include:
Transaction State: begin/continue/end
Origin transaction ID (otid)
Destination transaction ID (dtid)
So, for instance, you might see a transaction comprising 3 packets. Looking at the TCAP layer, a simpler example with two packets, one "begin" and one "end", would be roughly this:
{
"_id" : ObjectId("54ccd186b8ea19c89ee8f231"),
"deleted" : "0",
"packet" : {
"datetime" : ISODate("2015-01-31T12:58:11.939Z"),
"signallingType" : "M2PA",
"opc" : "326",
"dpc" : "6406",
"transState" : "begin",
"otid" : "M2PA0400435B",
"dtid" : "",
"sccpCalling" : "523332075100",
"sccpCalled" : "523331466304",
"operation" : "mo-forwardSM (46)",
...
}
}
{
"_id" : ObjectId("54ccd1a1b8ea19c89ee8f7c5"),
"deleted" : "0",
"packet" : {
"datetime" : ISODate("2015-01-31T12:58:16.788Z"),
"signallingType" : "M2PA",
"opc" : "6407",
"dpc" : "326",
"transState" : "end",
"otid" : "",
"dtid" : "M2PA0400435B",
"sccpCalling" : "523331466304",
"sccpCalled" : "523332075100",
"operation" : "Not Found",
...
}
}
Because of the network architecture, we're tracing at two (2) points, and the traffic is balanced between them. This means we sometimes see a "continue" or an "end" BEFORE the corresponding "begin", or an "end" BEFORE a "continue". In short, the packets of a transaction do not arrive in order.
Moreover, multiple endpoints are "talking" amongst themselves, and transaction IDs can get duplicated: two endpoints could be using the same tid as another two endpoints at the same time. This doesn't happen all the time, but it does happen.
Because of the latter, I also need to use the SCCP layer's "calling" and "called" global titles (which look like phone numbers).
Bear in mind that I don't know which way a given packet is going, so this is what I'm doing:
Whenever I get a new packet, I must find out whether the transaction already exists in MongoDB; I'm using an upsert to do this.
I do this by searching for the current packet's otid or dtid in either the otid or dtid of existing packets.
If it does: push the new packet into the existing document.
If it doesn't: create a new document with the packet.
As an example, this is an upsert for an "end" which should find a "begin":
db.runCommand(
{
update: "packets",
updates:
[
{ q:
{ $and:
[
{
$or: [
{ "packet.otid":
{ $in: [ "M2PA042e3918" ] }
},
{ "packet.dtid":
{ $in: [ "M2PA042e3918" ] }
}
]
},
{
$or: [
{ "packet.sccpCalling":
{ $in: [ "523332075151", "523331466305" ] }
},
{ "packet.sccpCalled":
{ $in: [ "523332075151", "523331466305" ] }
}
]
}
]
},
u: {
$setOnInsert: {
"unique-id": "422984b6-6688-4782-9ba1-852a9fc6db3b", deleted: "0"
},
$push: {
packet: {
datetime: new Date(1422371239182),
opc: "327", dpc: "6407",
transState: "end",
otid: "", dtid: "M2PA042e3918", sccpCalling: "523332075151", ... }
}
},
upsert: true
}
],
writeConcern: { j: "1" }
}
)
Now, all of this works until I put it in production.
It seems packets are coming in way too fast and I see lots of:
"ClientCursor::staticYield can't unlock b/c of recursive lock" warnings
I read that we can ignore this warning, but I've found that my upserts DO NOT update the documents! It looks like there's a lock and MongoDB forgets about the update. If I change the upsert to a simple insert, no packets are lost.
I also read this is related to no indexes being used. I have the following index:
"3" : {
"v" : 1,
"key" : {
"packet.otid" : 1,
"packet.dtid" : 1,
"packet.sccpCalling" : 1,
"packet.sccpCalled" : 1
},
"name" : "packet.otid_1_packet.dtid_1_packet.sccpCalling_1_packet.sccpCalled_1",
"ns" : "tracer.packets"
So in conclusion:
1.- If this index is not correct, can someone please help me create the correct index?
2.- Is it normal that mongo would NOT update a document if it finds a lock?
Thanks and regards!
David
Why are you storing all of the packets in an array? Normally in this kind of situation it's better to make each packet a document on its own; it's hard to say more without more information about your use case (or, perhaps, more knowledge of all these acronyms you're using :D). Your updates would become inserts and you would not need to do the update query. Instead, some other metadata on a packet would join related packets together so you could reconstruct a transaction or whatever you need to do.
More directly addressing your question, I would use an array field tids to store [otid, dtid] and an array field sccps to store [sccpCalling, sccpCalled], which would make your update query look like
{ "tids" : { "$in" : ["M2PA042e3918"] }, "sccps" : { "$in" : [ "523332075151", "523331466305" ] } }
and amenable to the index { "tids" : 1, "sccps" : 1 }.
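To translate that to the C++ side of the app, here is a rough sketch assuming the legacy (2.6-compatible) C++ driver; the namespace "tracer.packets" comes from the index listing above, and the connection details are placeholders:

// Ensure the suggested compound index exists (ensureIndex is available in the legacy driver).
DBClientConnection conn;
conn.connect("localhost:27017");
conn.ensureIndex("tracer.packets", BSON("tids" << 1 << "sccps" << 1));

// Find all packets that belong to the same transaction as the one just captured.
BSONObj query = BSON(
    "tids" << BSON("$in" << BSON_ARRAY("M2PA042e3918")) <<
    "sccps" << BSON("$in" << BSON_ARRAY("523332075151" << "523331466305")));
std::auto_ptr<DBClientCursor> cursor = conn.query("tracer.packets", query);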
I'm trying to apply a projection using $elemMatch with the MongoDB C++ driver (2.6-compat):
Sample document:
{
"name" : "Tom",
"lists" : [
{"value" : 1}, {"value" : 2}, {"value" : 3}
]
}
I would like to get a document for Tom and ONLY value 1.
In the shell this will look like this:
> db.aaa.find({"name" : "Tom", "lists.value" : 1}, {"lists" : {$elemMatch : {"value" : 1} } })
Now, there's no suitable overload of the query method which accepts a BSONObj for the projection part of the query. Am I missing something here?
Help will be much appreciated!
You are going to want to use the DBClientBase::query() method. It has this signature:
auto_ptr<DBClientCursor> DBClientBase::query(
const string &ns,
Query query,
int nToReturn,
int nToSkip,
const BSONObj *fieldsToReturn,
int queryOptions,
int batchSize
)
In the 2.6-compatible C++ driver there are no database or collection classes, so you need to provide:
the namespace to query (ns)
the query itself (query)
the number of documents to return and the number to skip (pass zero for the default values)
the fieldsToReturn (a BSONObj representing the fields you want back from the database)
So, for example, if you wanted to do the same thing as above you would write the query like this:
DBClientConnection conn;
conn.connect("localhost:27017");
BSONObj projection = fromjson("{ \"lists\": { \"$elemMatch\": { \"value\": 1 } } }");
std::auto_ptr<DBClientCursor> cursor =
    conn.query("db.aaa", QUERY("name" << "Tom" << "lists.value" << 1), 0, 0, &projection);
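From there you can iterate the cursor as usual; a minimal sketch:

while (cursor->more()) {
    BSONObj doc = cursor->next();
    std::cout << doc.jsonString() << std::endl;
}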