I am using the new C++ driver to access MongoDB from my C++ program.
Through the tutorial I am able to fetch an entire collection from the DB.
I am also able to specify filters so I only get a few.
But once I get the collection data into the program, there is only a single example available for inspecting the data:
for (auto&& doc : cursor) {
    std::cout << bsoncxx::to_json(doc) << std::endl;
}
I would like to know how to get the count of documents in the collection.
I would also like to know how to get document number "i" in the returned data, i.e.:
cursor[i] or similar... which of course doesn't work.
Thanks for pointing out this oversight in our examples. If you would, please file a bug in the Documentation component at https://jira.mongodb.org/browse/CXX requesting that our examples include more detail on how to access data on the client.
You have two questions here, really:
How can I get a count? The unhelpful answer is that you could probably write std::distance(cursor.begin(), cursor.end()), but you probably don't want to do that, as it would require pulling all of the data back from the server. Instead, you likely want to invoke mongocxx::collection::count.
How can I get the Nth element out of a cursor? First, are you sure this is what you want? The obvious way would be to do auto view = *std::next(cursor.begin(), N-1), but again, this probably isn't what you want for the reasons above, and also because the order is not necessarily specified. Instead, have a look at mongocxx::options::find::sort, mongocxx::options::find::limit, and mongocxx::options::find::skip, which should give you finer control over what data is returned via the cursor, and in what order.
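For illustration, here is a minimal sketch of that approach (the connection setup, the "db-name"/"collection-name" names, and the index n are placeholders, not part of your code): sort on "_id" to impose a deterministic order, skip the first n documents, and limit the result to one.
#include <cstdint>
#include <iostream>
#include <bsoncxx/builder/stream/document.hpp>
#include <bsoncxx/json.hpp>
#include <mongocxx/client.hpp>
#include <mongocxx/instance.hpp>
#include <mongocxx/options/find.hpp>
#include <mongocxx/uri.hpp>
int main() {
    using bsoncxx::builder::stream::document;
    using bsoncxx::builder::stream::finalize;
    mongocxx::instance inst{};
    mongocxx::client conn{mongocxx::uri{}};
    auto coll = conn["db-name"]["collection-name"];
    std::int64_t n = 5;  // zero-based index of the document we want
    mongocxx::options::find opts;
    opts.sort(document{} << "_id" << 1 << finalize);  // deterministic order
    opts.skip(n);                                     // skip the first n documents
    opts.limit(1);                                    // return at most one
    auto cursor = coll.find(document{} << finalize, opts);
    for (auto&& doc : cursor) {
        std::cout << bsoncxx::to_json(doc) << std::endl;
    }
}
Note that large skips still make the server walk over the skipped documents, so for repeated paging an indexed range query on the sort key is usually cheaper.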
Many thanks, acm!
I filed the bug and I figured out how to do it. To help others, let me post the two code examples here:
auto db = conn["db-name"];
auto count = db["collection-name"].count( {} );
And
mongocxx::options::find opts;
opts.limit( 1 );
auto cursor = db["db-name"].find({ }, opts);
bsoncxx::document::view doc = *cursor.begin();
std::cout << bsoncxx::to_json(doc) << std::endl;
Related
I'm very new to RedHawk and I have the following scenario:
I have three components, A, B, and C. B and C both have a property called skill, which is a keyword describing what B or C can do. The flow is: A starts and queries B.skill and C.skill so A knows what B and C can do. Then, when A encounters a task that fits B's or C's skill set, A starts up that specific component to do the task.
My question is: how does component A access a property of B? I looked online and found a simple REDHAWK query introduction (https://redhawksdr.github.io/Documentation/mainch4.html, section 4.6.2), but I am hoping someone can show me a code snippet that demonstrates how A accesses B's property. Also, I can't find any detailed documentation of the query API; it would be great if someone could point me to it.
Thank you.
This example could probably be cleaned up a bit, but in my snippet CompA has two output ports, both of type resource, named compB_connection and compC_connection. We can then connect to CompB's and CompC's resource port (also called the lollipop port), which is a direct connection to the component itself, since the component inherits from the Resource API. This gives us access to methods on the component like start, stop, configure, query, etc. For a full list, see the IDL files.
CompB and CompC both have a property with the id of "skill". We can use the query API to query the values of those properties.
std::string CompA_i::any_to_string(CORBA::Any value) {
    // Extract the string held by the CORBA::Any (assumes it contains a string)
    // and return it as a std::string.
    std::ostringstream result;
    const char* tmp;
    value >>= tmp;
    result << tmp;
    return result.str();
}

int CompA_i::serviceFunction() {
    // Build a one-element property list for each component, asking for the
    // property with the id "skill".
    CF::Properties compB_props, compC_props;
    compB_props.length(1);
    compC_props.length(1);
    compB_props[0].id = "skill";
    compC_props[0].id = "skill";

    // Query over the resource (lollipop) connections; the values are filled in place.
    compB_connection->query(compB_props);
    compC_connection->query(compC_props);

    std::cout << "CompB Skills: " << any_to_string(compB_props[0].value) << std::endl;
    std::cout << "CompC Skills: " << any_to_string(compC_props[0].value) << std::endl;

    return FINISH;
}
Now, when we connect CompA up to CompB and CompC and start the waveform (or run it in the sandbox), we get the following output:
CompB Skills: nunchuck skills
CompC Skills: bow hunting skills
The any_to_string method was found in prop_helpers.cpp in the core framework code; there is probably a helper function in a header file somewhere that would be a better fix.
There is a great MongoDB C++ driver. The only thing that makes it hard for newbies like me to use is the lack of teeny-weeny examples. For instance, I know there is a method called getCollectionNames, but I'm not sure how to use it. In Python I would do it like this:
db = MongoClient(host, port)[db_name]
colls = db.collection_names()
and I'm done. But I don't feel as comfortable with C++ and cannot figure out by myself how to convert the raw function declarations in the documentation into working code.
So, this is what I've done so far, and I can see that it works:
ConnectionString cs = ConnectionString::parse(uri, errmsg);
DBClientBase * conn(cs.connect(errmsg));
Now I want to go one step further and get all the collection names. Please give some advice.
EDIT
Well, I found a method in dbclientinterface.h called getCollectionNames. It is declared like so:
std::list<std::string> getCollectionNames( const std::string& db,
                                           const BSONObj& filter = BSONObj() );
But I find this sole declaration without any informative hints completely useless. It is just a sum of letters and no more.
EDIT
I found a solution and I will post it below.
This is the solution:
std::string uri = "mongodb://127.0.0.1:27017/mydb";
std::string errmsg;

// Parse the connection string and connect.
ConnectionString cs = ConnectionString::parse(uri, errmsg);
DBClientBase* conn(cs.connect(errmsg));

// List the collections in "mydb" and process each name.
std::list<std::string> colls = conn->getCollectionNames("mydb");
for (std::list<std::string>::iterator it = colls.begin(); it != colls.end(); ++it) {
    do_something(*it);
}
I am using the latest version of the new MongoDB C++ driver/library (not the legacy, 26compat, or C version) along with the Qt framework (latest 64-bit version on Linux). Within the same program I am successfully reading and writing to the database, and everything is working well.
I realize this version is unstable, but I don't want the Boost dependencies, and it is a new project that only I am working on.
I'm not a professional programmer, so please forgive any knowledge gaps.
In my database I have a supporting collection that just remembers the last project a user was working on. What I want to do is use a value stored within that document (a string stored under a field name) to load that project when the program is started.
I want to use the key stored in the m_Current_Project_key variable to load the project data from the project collection.
In the code below, the first (commented-out) line after the find statement carries out the search using a different, hard-coded field name and value in the same collection, just to prove the code works more generally.
The problem I am having is getting the program to search for a specific "_id" that I can see, from the mongo command line, is present in the collection and document.
The comments at the end of the lines of code below show the output for the various things I have tried.
This code sits within a method that reads a different collection from the same database and gets a value from within it, which it puts in the m_Current_Project_key variable (a QString).
qDebug() << m_Current_Project_key; // "553b976484e4b5167e39b6f1"
qDebug() << Utility::format_key(m_Current_Project_key); // "ObjectId("553b976484e4b5167e39b6f1")" - this utility function just modifies the value passed to it to look like the output
QString test = Utility::format_key(m_Current_Project_key);
test.remove('\"');
qDebug() << test; // "ObjectId(553b976484e4b5167e39b6f1)"
char const *c = m_Current_Project_key.toStdString().c_str();
qDebug() << c; // 553b976484e4b5167e39b6f1
bsoncxx::oid hhh(c, 12);
qDebug() << hhh.get_time_t(); // 892679010
auto cursor = db["project"].find(document{}
// << "title" << "Testing Project"
<< "_id"
<< c
// << hhh
// << m_Current_Project_key.toStdString()
// << m_Current_Project_key.toStdString().c_str()
// << Utility::format_key(m_Current_Project_key).toStdString()
// << test.toStdString()
<< finalize);
The cursor only points to a value when I use the title line above (without the next two lines); the value I get is the document I want. But in the real situation the only thing the program would know at this point is the "_id", and the project title might not be unique.
I have tried casting the std::string to an OID, but that wasn't recognised as a type.
I've done a lot of Googling and a lot of trying things out, and I can't believe there isn't a straightforward way to find a document based on its "_id". The only find examples I have seen use values other than the "_id".
db.project.find({ "_id" : ObjectId("553b976484e4b5167e39b6f1")}, { title : 1 })
Does what I want on the Mongo command line.
I would appreciate any assistance I could get with this, I have spent a lot of time trying.
Thanks.
The issue here is that you are using the wrong bsoncxx::oid constructor. When creating an oid from a std::string of the hex representation of the ObjectId (e.g. "553b976484e4b5167e39b6f1") you should use the single-argument constructor that takes a stdx::string_view.
The correct code looks like this:
namespace stdx = bsoncxx::stdx;
auto cursor = db["project"].find(document{}
<< "_id"
<< bsoncxx::oid{stdx::string_view{m_Current_Project_key.toStdString()}}
<< finalize
);
I am deserializing a JSON string into an object using RapidJSON. When I encounter an issue, not with the structure of the JSON but with its content, I want to report an error stating the offset of where the problem is.
Unfortunately, unless it is a parse error, I don't see where I can get the current offset of a Value within a Document. Does anyone have a way of accomplishing this?
For example:
Document doc;
doc.Parse<0>(json.c_str());
if( doc.HasMember( "Country" ) ) {
    const Value& country_node = doc["Country"];
    if( !isValid(country_node.GetString()) )
        cout << "Invalid country specified at position " << country_node.Offset()?????
}
Unfortunately, RapidJSON does not support this in the DOM API.
If you use the SAX API, when you encounter an invalid value, you can return false in the handler function, and the Reader will generate a kParseErrorTermination error with the offset.
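For illustration, here is a minimal sketch of the SAX approach (the CountryHandler struct, its isValid() stand-in, and the sample JSON are invented for this example): returning false from a handler callback makes Reader::Parse stop with kParseErrorTermination, and the offset of the offending value is then available from the ParseResult.
#include <iostream>
#include <string>
#include "rapidjson/reader.h"
#include "rapidjson/error/en.h"

struct CountryHandler : rapidjson::BaseReaderHandler<rapidjson::UTF8<>, CountryHandler> {
    bool expectCountry = false;

    // Hypothetical stand-in for the original isValid() check.
    static bool isValid(const std::string& s) { return !s.empty(); }

    bool Key(const char* str, rapidjson::SizeType len, bool) {
        expectCountry = (std::string(str, len) == "Country");
        return true;
    }
    bool String(const char* str, rapidjson::SizeType len, bool) {
        if (expectCountry && !isValid(std::string(str, len)))
            return false;  // Reader stops with kParseErrorTermination at this offset
        expectCountry = false;
        return true;
    }
};

int main() {
    const char json[] = "{\"Country\":\"\"}";
    rapidjson::StringStream ss(json);
    rapidjson::Reader reader;
    CountryHandler handler;
    rapidjson::ParseResult ok = reader.Parse(ss, handler);
    if (!ok)
        std::cout << "Invalid value near offset " << ok.Offset() << ": "
                  << rapidjson::GetParseError_En(ok.Code()) << std::endl;
}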
The reason this is not supported in the DOM API is that it would incur memory overhead and may only rarely be needed. Please file an issue on GitHub if you would like to discuss this feature further with the community.
I am new to protobuf and I have started considering the following trivial example
message Entry {
required int32 id = 1;
}
used by the following C++ code:
#include <iostream>
#include <string>
#include "example.pb.h"

int main() {
    std::string mySerialized;
    {
        Entry myEntry;
        std::cout << "Serialization successful: "
                  << myEntry.SerializeToString(&mySerialized) << std::endl;
        std::cout << mySerialized.size() << std::endl;
    }
    Entry myEntry;
    std::cout << "Deserialization successful: "
              << myEntry.ParseFromString(mySerialized) << std::endl;
}
Even if the "id" field is required, since it has not been set, the size of the serialization buffer is 0 (??).
When I deserialize the message an error occurs:
[libprotobuf ERROR google/protobuf/message_lite.cc:123] Can't parse message of type "Entry" because it is missing required fields: id
Is this normal behavior?
Francesco
ps- If I initialize "id" with the value 0, the behavior is different
pps- The SerializeToString function returns true, the ParseFromString returns false
I don't think I exactly understand your question, but I'll have a go at an answer anyway. Hope this helps you in some way or the other :)
Yes, this is normal behavior. You should mark a field as required only if it is essential to the message; it makes sense semantically (why would you skip a required field?). To enforce this, protobuf will not parse a message that is missing a required field.
The parser sees that the field with tag number 1 is required and that has_id() returns false, so it won't parse the message at all.
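For illustration, here is a minimal sketch using the same Entry message (the value 42 is arbitrary): once the required field is set, both SerializeToString and ParseFromString return true, and has_id() / IsInitialized() can be used to check the message beforehand.
#include <iostream>
#include <string>
#include "example.pb.h"

int main() {
    std::string buffer;

    Entry out;
    out.set_id(42);  // satisfy the required field before serializing
    std::cout << "IsInitialized: " << out.IsInitialized() << std::endl;  // prints 1
    std::cout << "Serialization successful: "
              << out.SerializeToString(&buffer) << std::endl;            // prints 1
    std::cout << "Serialized size: " << buffer.size() << std::endl;      // now > 0

    Entry in;
    std::cout << "Deserialization successful: "
              << in.ParseFromString(buffer) << std::endl;                // prints 1
    std::cout << "has_id: " << in.has_id() << ", id: " << in.id() << std::endl;
}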
In the developer guide, it is advised not to use required fields:
Required Is Forever You should be very careful about marking fields as required. If at some point you wish to stop writing or sending a required field, it will be problematic to change the field to an optional field – old readers will consider messages without this field to be incomplete and may reject or drop them unintentionally. You should consider writing application-specific custom validation routines for your buffers instead. Some engineers at Google have come to the conclusion that using required does more harm than good; they prefer to use only optional and repeated. However, this view is not universal.
Also
Any new fields that you add should be optional or repeated. This means that any messages serialized by code using your "old" message format can be parsed by your new generated code, as they won't be missing any required elements. You should set up sensible default values for these elements so that new code can properly interact with messages generated by old code. Similarly, messages created by your new code can be parsed by your old code: old binaries simply ignore the new field when parsing. However, the unknown fields are not discarded, and if the message is later serialized, the unknown fields are serialized along with it – so if the message is passed on to new code, the new fields are still available. Note that preservation of unknown fields is currently not available for Python