Disconnecting from MongoDB with the C++ driver - c++

I'm sure this must be really simple, or I'm missing the point, but how do you disconnect from Mongo using the C++ driver and DBClientConnection? DBClient has a public 'connect' member, but no disconnect/kill/drop etc. that I can find.
There is some talk (on Stack Overflow and on the web) of using ScopedDbConnection, which does seem to let me drop my connection - but there are very few examples of how it would be used, or info on when I should use that class over DBClientConnection.
Any ideas?

If you're using a DBClientConnection, it has one connection, and you aren't supposed to disconnect/reconnect. It kills the connection when its destructor is called. You can also set it up to automatically reconnect, so you can keep using it if it loses its connection.
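For illustration, a minimal sketch of that lifecycle (assuming the legacy driver, whose DBClientConnection constructor takes an autoReconnect flag as its first argument):
{
    mongo::DBClientConnection conn(true /* autoReconnect */);
    conn.connect("localhost");                        // throws on failure
    conn.insert("test.test", BSON("name" << "asd"));
}   // the destructor runs here and tears the socket down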
If you want connection pooling and multiple connections, you want to use ScopedDbConnection. You can see some examples here: https://github.com/mongodb/mongo/blob/master/src/mongo/client/model.cpp
Here's the gist:
ScopedDbConnection conn("localhost");   // checks a connection out of the pool
mongo::BSONObjBuilder obj;
obj.append("name", "asd");
conn->insert("test.test", obj.obj());   // obj.obj() materializes the finished BSONObj
conn.done();                            // returns the connection to the pool
Basically, you can do anything with conn that you can do with a DBClientConnection, but when you're done you call done().

Related

ZMQ socket connection timeout

I'm using the C++ binding for ZMQ (cppzmq) and I'm trying to set the connection timeout of a TCP socket using the .setsockopt() method, like this:
int connectTimeout = 1000;
socket.setsockopt(ZMQ_CONNECT_TIMEOUT, &connectTimeout, sizeof(connectTimeout));
socket.connect(clientConfiguration.uri);
However, I don't see anything happening (an exception thrown?) until the code reaches the actual .send()/.recv() on the socket. Just to make sure the socket had a chance to throw, I put a sleep between the .connect() and .send() calls.
According to the documentation, .zmq_connect() just enters a READY state without making an actual connection to the endpoint. So the question is: when and how should I experience the connection timeout?
So the question is when and how should I experience the connection timeout?
When?
Well, actually never directly, as this is just the API-exposed setting of the ZeroMQ Context()-instance's internal Finite-State-Machine modus operandi (here the .setsockopt() sets the selected transport-class's behind-the-API-curtain ISO-OSI-L3 details).
How (if at all)?
Well, there are some other .setsockopt() details that (if set) may indirectly sense the impact of the ZMQ_CONNECT_TIMEOUT setting. Here again only indirectly, through a modified FSM behaviour, i.e. in the way the Context()-engine instance happens to respond to such an event (all purely internally, behind the Curtain of API - that's why we methodologically use the API for separation of concerns, don't we?).
For further details ref.:
API details about ZMQ_IMMEDIATE,
API details about ZMQ_RECONNECT_IVL,
API details about ZMQ_RECONNECT_IVL_MAX.
( API versions evolve, be aware that not all distributed-system agents share the same ZeroMQ API version. So best remember the Zen-of-Zero and feel free to re-use the ancient designers' directive #ASSUME NOTHING. )
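As a minimal sketch of that indirect influence (using the same deprecated cppzmq .setsockopt() overload the question already uses; the values are illustrative only):
int immediate = 1;      // queue messages only onto completed connections
int ivl = 100;          // initial reconnect interval [ms]
int ivlMax = 5000;      // cap for the exponential reconnect back-off [ms]
socket.setsockopt(ZMQ_IMMEDIATE, &immediate, sizeof(immediate));
socket.setsockopt(ZMQ_RECONNECT_IVL, &ivl, sizeof(ivl));
socket.setsockopt(ZMQ_RECONNECT_IVL_MAX, &ivlMax, sizeof(ivlMax));
None of these makes .connect() itself throw - they only reshape how the Context()-engine's internal FSM keeps retrying behind the API curtain.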
A TRAILER BONUS :
If not familiar with the ZeroMQ instrumentation, one may find useful this 5-second read into the main conceptual differences in the [ ZeroMQ hierarchy in less than a five seconds ] Section,
( courtesy Martin Sústrik, co-father of both ZeroMQ + nanomsg. Respect! )

C++ Poco ODBC Transactions - AutoCommit mode

I am currently attempting to use transactions in my C++ app, but I have a problem with ODBC's auto-commit mode.
I am using the POCO libraries to create a connection to a PostgreSQL database on the same machine. Currently, I can send data to this database as single statements, but I cannot get my head around how to use Poco's transaction facilities to send this data more quickly.
I have several thousand records to insert, so continuing to use single insert statements is extremely slow and impractical - hence I am trying to use Poco's transactions to speed this up (by a fair bit).
The error I am encountering is, theoretically, a simple one - Poco is throwing the following error:
'Invalid access: Session is in auto commit mode.'
I understand that, as a result, I should somehow set "auto commit" to false, as it currently only allows me to commit data to the database line by line, rather than as a single transaction.
The problem is how to set this.
Currently, I have a session created from Session.h that looks a lot like this:
session = new Poco::Data::Session(
    "ODBC",
    connection_data.str()
);
Where connection_data is a simple stringstream with the login information, password, database, server and "Driver={PostgreSQL ANSI};" to tell ODBC to use PostgreSQL's driver.
I have tried just setting an "autocommit" property to false through the session's setFeature and setProperty methods; this, of course, was to no avail (it was more of a last-ditch attempt at that point):
session->setFeature("AUTOCOMMIT", false);
Looking around, I saw a possible alternative: creating an ODBC SessionImpl directly from ODBC/Session/SessionImpl.h instead of using the generic method above, and then creating a new Session object from it.
The benefit of this is that ODBC's SessionImpl has references to autocommit mode in its header, which suggests it would be able to handle this:
void autoCommit(const std::string&, bool val);
/// Sets autocommit property for the session.
However, having not used SessionImpl before, I cannot guarantee that this will work, or that I can get it to work with the limited documentation available.
I am using C++03 (not C++11), with Visual Studio 2015
Poco 1.7.5
Boost (where needed)
Would anyone know the correct way of setting this feature (above), or an alternative method of achieving this?
Edit: Looking at the source of Poco, at:
https://github.com/pocoproject/poco/blob/develop/Data/ODBC/src/SessionImpl.cpp#L153
The property seems to be named autoCommit, and looking at
https://github.com/pocoproject/poco/blob/develop/Data/include/Poco/Data/AbstractSessionImpl.h#L120
the case of the property names seems to matter. So, does it help if you use session->setFeature("autoCommit", false);?
Can't you just call session->begin(); and session->end(); on the corresponding Session object?
What is returned by session->canTransact()?
According to the docs, begin() will start a new transaction; the docs do not mention any property that needs to be set before or after.
See: https://pocoproject.org/docs/Poco.Data.Session.html
I also faced a similar issue.
First of all, before begin() you need:
m_ses.setFeature("autoCommit", false);
m_ses.begin();
The second issue is that the "autoCommit" feature stays false for all subsequent sessions. So don't forget, for the next session, to call
session.setFeature("autoCommit", true);
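Putting the pieces together, a minimal sketch of the whole bulk-insert pattern (C++03-friendly; test_table and its name column are made up for illustration):
#include <string>
#include <vector>
#include <Poco/Data/Session.h>

using namespace Poco::Data;

void bulkInsert(Session& session, const std::vector<std::string>& names)
{
    session.setFeature("autoCommit", false);    // must precede begin()
    session.begin();                            // one explicit transaction
    for (std::size_t i = 0; i < names.size(); ++i)
    {
        std::string name = names[i];
        session << "INSERT INTO test_table (name) VALUES (?)",
            Keywords::use(name), Keywords::now;
    }
    session.commit();                           // all rows commit in one go
    session.setFeature("autoCommit", true);     // restore the default
}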

Managing a global DB connection in C++

Is it possible to do the following safely:
I have a C++ library which connects to a SQL DB at various points. I would like to have a global connection available at all of these points. Can this be done? Is there a standard pattern for this? I was thinking of storing the connection in a singleton.
Edit:
Suppose I have the following interface for the connection.
class Connection {
public:
    Connection();
    ~Connection();
    bool isOpen();
    void open();
};
I would like to implement the following interface:
class GlobalConnection {
public:
    static Connection& getConnection() {
        static Connection conn_;
        if (!conn_.isOpen())
            conn_.open();
        return conn_;
    }
private:
    GlobalConnection() {}
    Connection conn_;
};
I have two concerns with the above. One is that getConnection is not thread-safe, and the other is that I'm not sure about the destruction of the static resource. In other words, am I guaranteed that the connection will close (i.e. that its destructor will be called)?
For the record, the connection class itself is provided by the SQLAPI++ library (though that's not very relevant).
EDIT 2: After doing some research, it seems that while SQLAPI doesn't directly support pooling, it can be used to enable connection pooling through the ODBC facilities via the call
setOption("SQL_ATTR_CONNECTION_POOLING") = SQL_CP_ONE_PER_DRIVER
The documentation says that this call must be made before the first connection is established. What is the best way to ensure this in code with multiple potential call sites for opening a connection? And what if it doesn't happen - will an error be thrown, or will pooling just not be enabled?
Also what tools are available for monitoring how many open connections there are to the DB?
A Singleton can solve this in any OO language. In C/C++, you can also use a static variable (in case you don't use a pure-OO coding style).
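For the thread-safety and destruction concerns from the question, a minimal sketch (assuming C++11 is available: "magic statics" make the one-time construction thread-safe, and the static's destructor runs at normal program exit, so ~Connection() does get called; under C++03 you would have to guard construction with a mutex yourself):
#include <mutex>

class GlobalConnection {
public:
    static Connection& getConnection() {
        static Connection conn;                 // thread-safe init in C++11
        static std::mutex mtx;
        std::lock_guard<std::mutex> lock(mtx);  // serialize the lazy open()
        if (!conn.isOpen())
            conn.open();
        return conn;
    }
private:
    GlobalConnection();                         // not instantiable
};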
Most client libraries support connection pooling, so opening a new connection will just pick an existing connection from the pool.
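And to tie pooling back to EDIT 2 above, a minimal sketch (assuming SQLAPI++'s documented setOption syntax; server/user/password are placeholders): funnel every open through one helper, so the pooling option is guaranteed to be set before the first connection is established:
#include <SQLAPI.h>

SAConnection& getPooledConnection()
{
    static bool poolingConfigured = false;
    static SAConnection con;
    if (!poolingConfigured) {
        // must happen before the first connection is established
        con.setOption("SQL_ATTR_CONNECTION_POOLING") = "SQL_CP_ONE_PER_DRIVER";
        poolingConfigured = true;
    }
    if (!con.isConnected())
        con.Connect("server@database", "user", "password", SA_ODBC_Client);
    return con;
}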

How to re-connect to MongoDB using C++ driver?

I've a C++ function which saves a document to MongoDB using the C++ driver. It takes a connection reference as an argument:
http://pastebin.com/jwRDhNWQ
When I restart MongoDB, I can see that a new connection is being made.
However, conn.isFailed() remains true.
This may be happening because when I reconnect, I am using conn and not &conn.
When I do use &conn, as in &conn.connect("localhost");, I get this error message:
error: lvalue required as unary ‘&’ operand
How do I fix this? I.e., how do I modify the underlying connection so that conn.isFailed() becomes false once a new connection has been established?
You should enable _autoReconnect in the mongo::DBClientConnection::DBClientConnection constructor.
http://api.mongodb.org/cplusplus/current/classmongo_1_1_d_b_client_connection.html#a6a1a348024dd302572504b7bfb6e74a2
The variable _failed returned by the method isFailed() is not set until _checkConnection is called. _checkConnection is not called until something is sent to the database, so as an alternative, you could call the ping command before calling isFailed(). However, the recommended fix is to enable _autoReconnect.
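A minimal sketch of that ping alternative (assuming the legacy driver's runCommand; a command forces a round trip, which is what invokes _checkConnection and refreshes the failure flag):
mongo::BSONObj info;
// the ping forces actual I/O on the connection
bool alive = conn.runCommand("admin", BSON("ping" << 1), info);
if (alive && !conn.isFailed()) {
    // the (auto-reconnected) connection is usable again
}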

Thrift - different Handler instance for each Socket

I'm developing a 'proxy' server in Thrift. My problem is that each connection incoming to the proxy uses the same instance of the Handler. The client implementation of the proxy lives in the Handler, so all the clients communicate through the same connection to the end server.
I have : n clients -> n sockets -> 1 handler -> 1 socket -> 1 server
What I want to implement : n clients -> n sockets -> n handlers -> n sockets -> 1 server
Now the problem is that if a client changes a 'local' parameter (something that is defined for each client independently) on the server, other clients will work with the changed environment too.
shared_ptr<CassProxyHandler> handler(new CassProxyHandler(adr_s,port_s,keyspace));
shared_ptr<TProcessor> processor(new CassandraProcessor(handler));
shared_ptr<TServerTransport> serverTransport(new TServerSocket(port));
shared_ptr<TTransportFactory> transportFactory(new TFramedTransportFactory());
shared_ptr<TProtocolFactory> protocolFactory(new TBinaryProtocolFactory());
TThreadedServer server(processor, serverTransport, transportFactory, protocolFactory);
server.serve();
Is there a way to implement a server that creates a new instance of the Handler for each server socket, instead of using the same handler?
Thanks for any suggestions or help!
I have managed to solve this problem. There was a solution already implemented in Java; I used the same idea and implemented it in C++.
First, I created a TProcessorFactory class. It holds a map structure, so its get function returns the corresponding (unique) TProcessor for each client's TTransport.
Then I had to create a new TServer that accepts this TProcessorFactory instead of a TProcessor. A couple of function calls in TServer also need to change: its getProcessor function will no longer return a TProcessor but a TProcessorFactory (so change the return type and rename it).
The last thing is to implement a server that allows instantiation - a derived class of TServer. I suggest the TNonblockingServer (a bit harder to adapt) or the TThreadPoolServer. You have to change a couple of function calls: use the get function on the TProcessorFactory with a TTransport parameter to obtain a TProcessor where needed. The TTransport parameter is unique for each client connection, and each connection is handled by one thread.
Also make sure you delete the old TProcessors, because Thrift reuses the TTransport (at least with the TNonblockingServer); if you do not delete them and a client connects, they will probably get a stale previous session, which you probably don't want. If you use shared pointers, just remove them from the map structure when the client disconnects; once Thrift no longer needs them, they will be destructed.
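For illustration, a minimal sketch of the factory idea in C++ (assuming the TProcessorFactory interface that later Thrift releases ship, with getProcessor called once per accepted connection; CassProxyHandler and CassandraProcessor are the names from the question):
class PerConnectionProcessorFactory : public apache::thrift::TProcessorFactory {
public:
    PerConnectionProcessorFactory(const std::string& adr,
                                  int port,
                                  const std::string& keyspace)
        : adr_(adr), port_(port), keyspace_(keyspace) {}

    // Called for each new client connection: a fresh handler per client
    // means no state is shared between clients.
    boost::shared_ptr<apache::thrift::TProcessor> getProcessor(
            const apache::thrift::TConnectionInfo& connInfo) {
        boost::shared_ptr<CassProxyHandler> handler(
            new CassProxyHandler(adr_, port_, keyspace_));
        return boost::shared_ptr<apache::thrift::TProcessor>(
            new CassandraProcessor(handler));
    }

private:
    std::string adr_;
    int port_;
    std::string keyspace_;
};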
I hope this helps anyone who encounters the same problem I did. If you don't know the inner structure of Thrift, here's a good guide: http://diwakergupta.github.com/thrift-missing-guide/
I hope the Thrift developers will implement a similar, but more sophisticated and abstract, solution in the near future.
I know this is an old thread, but in case it's ever of use to anyone - I have contributed a change to the C# implementation of Thrift to solve this problem...
https://issues.apache.org/jira/browse/THRIFT-3397
In addition to the old method of passing a TProcessor as the first argument to the threaded servers, one can now set up something like
new ThreadPoolServer(processorFactory, serverTransport,
                     transportFactory, protocolFactory);
Where 'processorFactory' is a TProcessorFactory.
I've created a TPrototypeProcessorFactory<TProcessor,Handler>(object[] handlerArgs), which would be set up like so:
TProcessorFactory processorFactory =
new TPrototypeProcessorFactory<ThriftGenerated.Processor, MyHandlerClass>();
The 'MyHandlerClass' implements your ThriftGenerated.Iface. Optionally, if this class takes arguments, they can be added as an array of objects to the processor factory.
Internally, for each new client connection, this processor factory will:
- Create a new instance of 'MyHandlerClass' using any arguments supplied (using Activator.CreateInstance)
- If 'MyHandlerClass' implements 'TControllingHandler', set its 'server' property to the parent TServer (e.g. to allow control of the TServer using a Thrift client)
- Return a new instance of ThriftGenerated.Processor(handler)
Therefore for C# you get n clients -> n sockets -> n handlers -> n sockets -> 1 server
I hope this becomes useful to other people - it's certainly solved a problem for me.
Instead of making your proxy server talk Thrift, you could just make it a generic TCP proxy that opens a new TCP connection for each incoming connection.