I'm using the C++ binding for ZMQ (cppzmq) and I'm trying to set the connection timeout of a TCP socket using the .setsockopt() method like this:
int connectTimeout = 1000;
socket.setsockopt(ZMQ_CONNECT_TIMEOUT, &connectTimeout, sizeof(connectTimeout));
socket.connect(clientConfiguration.uri);
However, I don't see anything happening (an exception thrown?) until the code reaches the actual .send()/.recv() on the socket. Just to make sure the socket has a chance to throw, I put a sleep between the .connect() and .send() calls.
According to the documentation, zmq_connect() just enters a READY state without making an actual connection to the endpoint. So the question is: when and how should I experience the connection timeout?
Q : "... when and how I should experience the connection timeout ?"
When ?
Well, actually never directly, as this is just the API-exposed setting of the ZeroMQ Context()-instance's internal Finite-State-Machine modus operandi ( here the .setsockopt() adjusts the selected transport-class's behind-the-API-curtain ISO-OSI-L4 connection details ).
How ( if at all ) ?
Well, there are some other .setsockopt() details that ( if switched on ) may let you indirectly sense the impact of the ZMQ_CONNECT_TIMEOUT setting. Here again, only indirectly, via a modified FSM-behaviour, i.e. via the way the Context()-engine instance happens to respond to such an event ( all purely internally, behind the Curtain of the API - that's why we methodologically use the API for separation of concerns, don't we ? ).
For further details, refer to:
API details about ZMQ_IMMEDIATE,
API details about ZMQ_RECONNECT_IVL,
API details about ZMQ_RECONNECT_IVL_MAX.
( API versions evolve, so be aware that not all distributed-system agents share the same ZeroMQ API version. So best remember the Zen-of-Zero and feel free to re-use the ancient designers' directive: #ASSUME NOTHING. )
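As an illustration only - a minimal sketch in cppzmq's classic .setsockopt() style, with purely assumed option values and an assumed, unreachable endpoint - the indirectly observable knobs can be put together like this :
#include <zmq.hpp>

int main()
{
    zmq::context_t context( 1 );
    zmq::socket_t  socket( context, ZMQ_PUSH );

    int connectTimeout  = 1000;    // [ms] give up a TCP connect attempt after 1 s
    int immediate       = 1;       //      queue messages only on completed connections
    int reconnectIvl    = 100;     // [ms] first re-connect attempt
    int reconnectIvlMax = 5000;    // [ms] exponential back-off ceiling

    socket.setsockopt( ZMQ_CONNECT_TIMEOUT,   &connectTimeout,  sizeof( connectTimeout ) );
    socket.setsockopt( ZMQ_IMMEDIATE,         &immediate,       sizeof( immediate ) );
    socket.setsockopt( ZMQ_RECONNECT_IVL,     &reconnectIvl,    sizeof( reconnectIvl ) );
    socket.setsockopt( ZMQ_RECONNECT_IVL_MAX, &reconnectIvlMax, sizeof( reconnectIvlMax ) );

    socket.connect( "tcp://192.0.2.1:5555" ); // no exception here - the FSM just starts trying

    // With ZMQ_IMMEDIATE set, a so-far-unreachable peer keeps the send-side
    // blocking / EAGAIN-ing instead of silently queueing - which is the
    // indirect way the ZMQ_CONNECT_TIMEOUT effect becomes observable at all.
    return 0;
}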
A TRAILER BONUS :
If not familiar with the ZeroMQ instrumentation, one may find useful this 5-second read on the main conceptual differences, in the [ ZeroMQ hierarchy in less than a five seconds ] Section,
( courtesy Martin Sústrik, co-father of both ZeroMQ + nanomsg. Respect! )
Related
I am using ZeroMQ to create a generic dynamic-graph setup. I already have an XPUB/XSUB setup, but am wondering if there is a zmq way of adding a sequence number/timestamp, generated by the proxy, to each message going through, in order to have a uniquely sequenced “tape” of events?
Q : "... but am wondering if there is a zmq way of adding ... to each message ...?"
No, there is not. The ZeroMQ way would be to have this done with Zero-Copy and ( almost ) Zero-Latency.
Such a way does not exist for your wished use-case.
Solution ? Doable :
Create a transforming-node, where each message gets transformed accordingly ( a SEQ-number added and a TimeSTAMP datum { pre | ap }-pended ). Such a step requires one to implement the node and to handle all these steps, together with any exceptions, per incident.
The ready-made, API-documented zmq_proxy() simply does not, cannot and should not cover these specific requirements, as it was designed for other purposes ( and uses Zero-Copy for the most efficient pass-through, plus optional, efficient MITM-logger mode(s) of service ).
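Just for illustration, a rough sketch of such a transforming-node in cppzmq ( the endpoints, the frame-layout of the { SEQ, TimeSTAMP } datum and the choice to append it as a trailing frame are all assumptions here, not a ready-made recipe ) :
#include <zmq.hpp>
#include <chrono>
#include <cstdint>
#include <vector>

int main()
{
    zmq::context_t ctx( 1 );

    zmq::socket_t frontend( ctx, ZMQ_XSUB );   // publishers connect here
    frontend.bind( "tcp://*:5559" );

    zmq::socket_t backend( ctx, ZMQ_XPUB );    // subscribers connect here
    backend.bind( "tcp://*:5560" );

    std::uint64_t seq = 0;

    zmq::pollitem_t items[] = {
        { static_cast<void*>( frontend ), 0, ZMQ_POLLIN, 0 },
        { static_cast<void*>( backend ),  0, ZMQ_POLLIN, 0 },
    };

    while ( true ) {
        zmq::poll( items, 2, -1 );

        if ( items[0].revents & ZMQ_POLLIN ) {
            // Pull in the whole multipart message from the publishers' side.
            std::vector<zmq::message_t> frames;
            bool more = true;
            while ( more ) {
                zmq::message_t frame;
                frontend.recv( &frame );
                more = frame.more();
                frames.push_back( std::move( frame ) );
            }
            // Build the extra "tape" frame: { SEQ, timestamp [ns] }, raw host-endian bytes.
            std::uint64_t stamp[2] = {
                ++seq,
                static_cast<std::uint64_t>(
                    std::chrono::duration_cast<std::chrono::nanoseconds>(
                        std::chrono::steady_clock::now().time_since_epoch() ).count() )
            };
            // Re-send: original frames first, stamp appended as the last frame,
            // so the leading (topic) frame stays intact and SUB-side filtering still works.
            for ( auto& f : frames )
                backend.send( f, ZMQ_SNDMORE );
            zmq::message_t stampFrame( stamp, sizeof( stamp ) );
            backend.send( stampFrame, 0 );
        }

        if ( items[1].revents & ZMQ_POLLIN ) {
            // Subscription messages flow the other way, untouched.
            zmq::message_t sub;
            backend.recv( &sub );
            frontend.send( sub );
        }
    }
}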
In the unary RPC example provided in the gRPC GitHub repository (client) and (server), is there any way to detect the client's closed connection?
For example, in server.cc file:
std::string prefix("Hello ");
reply_.set_message(prefix + request_.name());
// And we are done! Let the gRPC runtime know we've finished, using the
// memory address of this instance as the uniquely identifying tag for
// the event.
status_ = FINISH;
int p = 0, i = 0;
while (i++ < 1000000000) {  // some dummy work
    p = p + 10;
}
responder_.Finish(reply_, Status::OK, this);
With this dummy task before sending the response back to the client, the server will take a few seconds. If we close the client in the meantime (say, with Ctrl+C), the server does not throw any error. It simply calls Finish and then deallocates the object as if the Finish were successful.
Is there any async feature (handler function) on the server side that would notify us that the client has closed the connection or has terminated?
Thank You!
Unfortunately, no.
But the gRPC team is now working hard to bring a callback mechanism to the C++ implementation. As I understand it, it will work the same way as in the Java implementation ( https://youtu.be/5tmPvSe7xXQ?t=1843 ).
You can see how to work with the upcoming API in these examples: client_callback.cc and server_callback.cc
The point of interest for you there is the ServerBidiReactor class from the ::grpc::experimental namespace on the server side. It has OnDone and OnCancel notification methods that may help you.
Another interesting point is that you can store a pointer to the connection object and send notifications to the client at any time.
But it still has many issues, and I don't recommend using this API in production code.
The current progress of the C++ callback implementation can be seen here: https://github.com/grpc/grpc/projects/12#card-12554506
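For a rough idea only, here is a sketch of what the unary case looks like against a recent gRPC release, where the callback API has since moved out of ::grpc::experimental into ::grpc (adjust names to your version). It assumes the helloworld.Greeter service from the gRPC examples; ServerUnaryReactor is the unary counterpart of ServerBidiReactor:
#include <grpcpp/grpcpp.h>
#include "helloworld.grpc.pb.h"

class CallbackGreeterService final
    : public helloworld::Greeter::CallbackService {
  grpc::ServerUnaryReactor* SayHello(grpc::CallbackServerContext* context,
                                     const helloworld::HelloRequest* request,
                                     helloworld::HelloReply* reply) override {
    class Reactor : public grpc::ServerUnaryReactor {
     public:
      void OnCancel() override {
        // Fires when the client closes the connection / cancels the RPC.
      }
      void OnDone() override { delete this; }  // Fires once the RPC is fully over.
    };

    auto* reactor = new Reactor;
    reply->set_message("Hello " + request->name());
    reactor->Finish(grpc::Status::OK);
    return reactor;
  }
};
The server is then built with grpc::ServerBuilder as usual; no completion-queue plumbing is needed for the callback API.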
Extended PUB/SUB topology
I have multiple publishers and multiple subscribers in a use case with 1 intermediary.
In the ZeroMQ guide, I learnt about synchronizing 1 publisher and 1 subscriber using additional REQ/REP sockets. I tried to write synchronization code for my use case, but it gets messy if I follow the logic given for 1-to-1 PUB/SUB.
The publisher code when we have only 1 publisher is :
//Socket to receive sync request
zmq::socket_t syncservice (context, ZMQ_REP);
syncservice.bind("tcp://*:5562");
// Get synchronization from subscribers
int subscribers = 0;
while (subscribers < SUBSCRIBERS_EXPECTED) {
    // - wait for a synchronization request
    s_recv (syncservice);
    // - send a synchronization reply
    s_send (syncservice, "");
    subscribers++;
}
The subscriber code when we have only 1 subscriber is:
zmq::socket_t syncclient (context, ZMQ_REQ);
syncclient.connect("tcp://localhost:5562");
// - send a synchronization request
s_send (syncclient, "");
// - wait for synchronization reply
s_recv (syncclient);
Now, when I have multiple subscribers, does each subscriber need to send a request to every publisher?
The publishers in my use case come and go. Their number is not fixed.
So, a subscriber won't have any knowledge about how many nodes to connect to and which publishers are present or not.
Please suggest a way to synchronize an extended PUB/SUB setup.
Given the XPUB/XSUB mediator node is present,
the actual PUB-node discovery may be completely effortless for the XSUB-mediator side ( actually avoided as such, on principle ).
Just reverse the roles: let the XSUB-side .bind() and the PUB-nodes .connect(), and the problem ceases to exist at all.
Smart, isn't it?
PUB-nodes may come and go, yet the XSUB-side of the Policy-mediator node need not bother ( except for a few initial .setsockopt( { LINGER, IMMEDIATE, CONFLATE, RCVHWM, MAXMSGSIZE } ) performance-tuning and robustness-increasing settings ). It keeps enjoying the still valid and working composition of the Topic-filter .setsockopt( zmq.SUBSCRIBE, ** ) settings in service, and may centrally maintain that composition while remaining principally agnostic about the state / dynamics of the semi-temporal group of the now- or later-.connect()-ed, live or dysfunctional PUB-side Agent-nodes.
Even better, isn't it?
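A minimal sketch of that reversed wiring in cppzmq ( port numbers and the single-process layout are illustrative assumptions only; the exact zmq::proxy() signature differs a bit across cppzmq versions ) :
#include <zmq.hpp>

int main()
{
    zmq::context_t context( 1 );

    zmq::socket_t xsub( context, ZMQ_XSUB );   // faces the PUB-nodes
    xsub.bind( "tcp://*:5550" );               // PUB-nodes .connect() here, come and go freely

    zmq::socket_t xpub( context, ZMQ_XPUB );   // faces the SUB-nodes
    xpub.bind( "tcp://*:5551" );               // SUB-nodes .connect() here

    // Any publisher, anywhere, at any time :
    //     zmq::socket_t pub( context, ZMQ_PUB );
    //     pub.connect( "tcp://<mediator-host>:5550" );
    //     ... pub.send( ... ) ...

    // Forward traffic one way and subscriptions the other way :
    zmq::proxy( static_cast<void*>( xsub ), static_cast<void*>( xpub ), nullptr );
    return 0;
}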
I'm sure this must be really simple or I'm missing the point, but how do you disconnect from Mongo using the C++ driver and DBClientConnection? DBClientConnection has a public 'connect' member, but no disconnect/kill/drop etc. that I can find.
There is some talk (on Stack Overflow and on the web) of using ScopedDbConnection, which does seem to allow me to drop my connection - but there are very few examples of how it would be used, or info on when I should use that class over the DBClientConnection class.
Any ideas?
If you're using a DBClientConnection, it has one connection, and you aren't supposed to disconnect/reconnect manually. The connection is torn down when the object's destructor runs. You can also construct it with auto-reconnect enabled, so you can keep using it if it loses its connection.
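A small sketch of that lifecycle, assuming the legacy mongo C++ driver this question is about ( the header path, host string and exact connect() overload vary with the driver version ) :
#include <string>
#include "mongo/client/dbclient.h"   // legacy C++ driver header

void run_query()
{
    mongo::DBClientConnection conn( /* autoReconnect = */ true );
    std::string errmsg;
    if ( !conn.connect( "localhost:27017", errmsg ) ) {
        // handle errmsg
        return;
    }
    // ... use conn for queries / inserts here ...
}   // leaving scope destroys conn, which closes the underlying socket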
If you want to have connection pooling and multiple connections, you want to use ScopedDbConnection. You can see some examples here: https://github.com/mongodb/mongo/blob/master/src/mongo/client/model.cpp
Here's the gist:
ScopedDbConnection conn("localhost");
mongo::BSONObjBuilder obj;
obj.append( "name" , "asd" );
conn->insert("test.test", obj);
conn.done();
Basically, you can do anything with conn that you can do with a DBClientConnection, but when you're done you call done().
I'm developing a 'proxy' server in Thrift. My problem is that each incoming connection to the proxy uses the same instance of the handler. The proxy's client implementation lives in the handler, so all the clients communicate through the same connection to the end server.
I have : n clients -> n sockets -> 1 handler -> 1 socket -> 1 server
What I want to implement : n clients -> n sockets -> n handlers -> n sockets -> 1 server
Now the problem is that if a client changes a 'local' parameter (something that is defined for each client independently) on the server, other clients will work with the changed environment too.
shared_ptr<CassProxyHandler> handler(new CassProxyHandler(adr_s,port_s,keyspace));
shared_ptr<TProcessor> processor(new CassandraProcessor(handler));
shared_ptr<TServerTransport> serverTransport(new TServerSocket(port));
shared_ptr<TTransportFactory> transportFactory(new TFramedTransportFactory());
shared_ptr<TProtocolFactory> protocolFactory(new TBinaryProtocolFactory());
TThreadedServer server(processor, serverTransport, transportFactory, protocolFactory);
server.serve();
Is there a way to implement a server that creates a new instance of the handler for each server socket, instead of using the same handler?
Thanks for any suggestions or help,
I have managed to solve this problem. There was a solution already implemented in Java; I used the same idea and implemented it in C++.
The first thing I did was create a TProcessorFactory, keyed by the TTransport class. It handles the TProcessors for each connection: it has a map structure in it, so its get function returns the corresponding (unique) TProcessor for each TTransport, i.e. for each client.
I then had to create a new TServer that accepts the newly created TProcessorFactory as a parameter instead of a TProcessor. Inside TServer it is also necessary to change a couple of function calls: the getProcessor function will no longer return a TProcessor but a TProcessorFactory (so change the return type and rename it).
The last thing to do is implement a server that allows per-connection instantiation, i.e. a derived class of TServer. I suggest using the TNonblockingServer (a bit harder to adapt) or the TThreadPoolServer. You have to change a couple of function calls: use a get function on the TProcessorFactory, with a TTransport parameter, to obtain a TProcessor where needed. The TTransport parameter is unique for each thread; each client connection is handled by one thread.
Also make sure you delete the old TProcessors, because Thrift reuses the TTransport (at least with the TNonblockingServer); if you do not delete them, a newly connecting client will probably get a stale previous session, which you probably don't want. If you use shared pointers, just remove them from the map structure when the client disconnects; once Thrift no longer needs them, they will be destructed.
I hope this helps anyone who encounters the same problem I did. If you don't know the inner structure of Thrift, here is a good guide: http://diwakergupta.github.com/thrift-missing-guide/
I hope the Thrift developers will implement something similar, but as a more sophisticated and abstract solution, in the near future.
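For reference, newer Apache Thrift C++ releases already ship a TProcessorFactory hook whose getProcessor() is called once per accepted connection, so today the same idea can be sketched without patching TServer. CassProxyHandler / CassandraProcessor are the generated classes from the question; the handler constructor argument types and the shared_ptr flavour depend on your Thrift version and are assumptions here:
#include <string>
#include <memory>
#include <thrift/TProcessor.h>                     // TProcessorFactory, TConnectionInfo
#include <thrift/server/TThreadedServer.h>
#include <thrift/transport/TServerSocket.h>
#include <thrift/transport/TBufferTransports.h>    // TFramedTransportFactory
#include <thrift/protocol/TBinaryProtocol.h>

using namespace apache::thrift;

class PerConnectionProcessorFactory : public TProcessorFactory {
public:
    PerConnectionProcessorFactory(std::string adr, int port, std::string keyspace)
      : adr_(std::move(adr)), port_(port), keyspace_(std::move(keyspace)) {}

    // Called once per accepted connection: fresh handler, hence a fresh
    // upstream connection and per-client state.
    std::shared_ptr<TProcessor> getProcessor(const TConnectionInfo&) override {
        auto handler = std::make_shared<CassProxyHandler>(adr_, port_, keyspace_);
        return std::make_shared<CassandraProcessor>(handler);
    }

private:
    std::string adr_;
    int         port_;
    std::string keyspace_;
};

// Usage, mirroring the question's setup :
//     server::TThreadedServer server(
//         std::make_shared<PerConnectionProcessorFactory>(adr_s, port_s, keyspace),
//         std::make_shared<transport::TServerSocket>(port),
//         std::make_shared<transport::TFramedTransportFactory>(),
//         std::make_shared<protocol::TBinaryProtocolFactory>());
//     server.serve();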
I know this is an old thread, but in case it's ever of use to anyone - I have contributed a change to the C# implementation of Thrift to solve this problem...
https://issues.apache.org/jira/browse/THRIFT-3397
In addition to the old method of passing a TProcessor as the first argument to the threaded servers, one can now set up something like
new TThreadPoolServer(processorFactory, serverTransport,
                      transportFactory, protocolFactory);
Where 'processorFactory' is a TProcessorFactory.
I've created TPrototypeProcessorFactory<TProcessor,Handler>(object[] handlerArgs) which would be set up like so:
TProcessorFactory processorFactory =
new TPrototypeProcessorFactory<ThriftGenerated.Processor, MyHandlerClass>();
The 'MyHandlerClass' implements your ThriftGenerated.Iface. Optionally, if this class takes arguments, they can be added as an array of objects to the processor factory.
Internally, for each new client connection, this processor factory will:
1. Create a new instance of 'MyHandlerClass' using any arguments supplied (using Activator.CreateInstance)
2. If 'MyHandlerClass' implements 'TControllingHandler', set its 'server' property to the parent TServer (e.g. to allow control of the TServer using a Thrift client)
3. Return a new instance of ThriftGenerated.Processor(handler)
Therefore for C# you get n clients -> n sockets -> n handlers -> n sockets -> 1 server
I hope this becomes useful to other people - it's certainly solved a problem for me.
Instead of making your proxy server talk Thrift, you could just make it a generic TCP proxy that opens a new TCP connection for each incoming connection.