ZMQ asynchronous, how does it work exactly? - c++

I am currently working on a project that requires fast network management. To do so I chose 0MQ, but after reading the documentation and the examples it provides, there is something I struggle to understand concerning the asynchronous part of 0MQ.
Is there any thread created for each request on a ROUTER or DEALER socket?
I often make the mistake of conflating asynchronous and multi-threaded. When I look at the man page of zmq_socket I see that for a DEALER or ROUTER socket, incoming routing is set to "fair-queued". From this I conclude that asynchronous means you can write to or read from the socket without waiting for an answer before sending another request (everything is queued and processed synchronously).
So here is the question,
Is there any thread created by 0MQ for each request? (I am not talking about the background thread 0MQ uses internally to manage message queueing.)

ZeroMQ creates only one background I/O thread by default. No additional thread is created per request or per socket.
The background thread does all the work, and the user thread communicates with the background thread using queues and file descriptors.
The background thread uses epoll or kqueue to do the asynchronous magic.
You can actually control the number of background threads, but usually it is one.
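For illustration, here is a minimal sketch using the plain libzmq C API from C++ (the socket type, endpoint and port are arbitrary choices); the only ZeroMQ-specific threading knob is ZMQ_IO_THREADS on the context:

    #include <zmq.h>
    #include <cassert>

    int main() {
        void *ctx = zmq_ctx_new();                  // one context for the whole process
        zmq_ctx_set(ctx, ZMQ_IO_THREADS, 1);        // default is already 1; must be set before sockets exist
        void *router = zmq_socket(ctx, ZMQ_ROUTER); // one socket, no thread per request
        int rc = zmq_bind(router, "tcp://*:5555");
        assert(rc == 0);
        // ... zmq_msg_recv() / zmq_msg_send() loop runs in this user thread ...
        zmq_close(router);
        zmq_ctx_term(ctx);
        return 0;
    }

However many requests arrive, they are all queued by the single I/O thread and handed over to whichever user thread reads from the socket.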

Related

For a client server program, what is the best approach to receive multiple client connection requests in parallel?

The program is a client server socket application being developed with C on Linux. There is a remote server to which each client connects and logs itself as being online. There will most likely be several clients online at any given point of time, all trying to connect to the server to log themselves as being online/busy/idle etc. So how can the server handle these concurrent requests? What's a good design approach (forking/multithreading for each connection request maybe?)?
Personally I would use the event-driven approach for servers. There you register a callback that is called as soon as a connection arrives, and event callbacks whenever a socket is ready to read or write.
With a huge number of connections you will have a great performance and resource benefit compared to threads, but I would also prefer this for a smaller number of connections.
I would only use threads if you really need to use multiple cores, or if you have some requests that could take longer to process and where it is too complicated to handle them without threads.
I use libev as the base library to handle event-driven networking.
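A minimal sketch of that pattern with libev (the port number is arbitrary and error handling is omitted; per-connection read watchers would be registered where the comment indicates):

    #include <ev.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    // Called by the event loop whenever the listening socket has a pending connection.
    static void accept_cb(struct ev_loop *loop, ev_io *w, int revents) {
        int client = accept(w->fd, nullptr, nullptr);
        if (client >= 0) {
            // Normally: make `client` non-blocking and start an ev_io read
            // watcher on it with its own callback; here we just close it.
            close(client);
        }
    }

    int main() {
        int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = INADDR_ANY;
        addr.sin_port = htons(9000);
        bind(listen_fd, (sockaddr *)&addr, sizeof(addr));
        listen(listen_fd, SOMAXCONN);

        struct ev_loop *loop = EV_DEFAULT;
        ev_io accept_watcher;
        ev_io_init(&accept_watcher, accept_cb, listen_fd, EV_READ);
        ev_io_start(loop, &accept_watcher);
        ev_run(loop, 0);   // a single thread drives all connections
        return 0;
    }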
Generally speaking, you want a thread pool to service requests.
A typical structure will start with a single thread that does nothing but queue up incoming requests. Since it doesn't do very much, it's typically pretty easy for one thread to keep up with the maximum speed of the network.
That puts the items into some sort of concurrent queue. Then you have a pool of other threads reading items from the queue, doing what's needed, then depositing the result in another queue (and repeating, and repeating until the server shuts down).
Finally, you have another single thread that just takes items from the result queue, and sends replies out to the clients.
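A rough sketch of that structure in standard C++ (the request/reply types, the pool size and the queue are placeholders for illustration, not a particular library):

    #include <condition_variable>
    #include <mutex>
    #include <queue>
    #include <string>
    #include <thread>
    #include <vector>

    // Minimal thread-safe queue used to pass work between the stages.
    template <typename T>
    class BlockingQueue {
    public:
        void push(T item) {
            { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(item)); }
            cv_.notify_one();
        }
        T pop() {
            std::unique_lock<std::mutex> lk(m_);
            cv_.wait(lk, [this] { return !q_.empty(); });
            T item = std::move(q_.front());
            q_.pop();
            return item;
        }
    private:
        std::mutex m_;
        std::condition_variable cv_;
        std::queue<T> q_;
    };

    struct Request { int client_fd; std::string payload; };
    struct Reply   { int client_fd; std::string payload; };

    int main() {
        BlockingQueue<Request> requests;  // filled by the single network-reader thread
        BlockingQueue<Reply>   replies;   // drained by the single reply-writer thread

        std::vector<std::thread> pool;
        for (int i = 0; i < 4; ++i)       // pool size is an arbitrary choice here
            pool.emplace_back([&] {
                for (;;) {                // workers run until the process exits
                    Request r = requests.pop();
                    replies.push(Reply{r.client_fd, "processed: " + r.payload});
                }
            });
        // The reader thread would push() into `requests`; the writer thread
        // would pop() from `replies` and send each one back to its client.
        for (auto &t : pool) t.join();
    }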
The best approach is a combination of the event-driven model and the multithreaded model.
You create a bunch of non-blocking sockets, but the thread count should be much lower, e.g. 10 sockets per thread.
Then you just listen for an event (incoming request) on every thread in non-blocking mode and process it as it happens.
This technique usually performs better than non-blocking sockets or the multithreaded model used on their own.
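A small sketch of what one such thread might look like (using poll(); the batch size of roughly 10 sockets per thread and the buffer size are arbitrary):

    #include <poll.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <vector>

    // One of N threads: it owns a small batch of non-blocking sockets and
    // runs its own poll() loop over just that batch.
    void worker(std::vector<int> fds) {
        std::vector<pollfd> pfds;
        for (int fd : fds) pfds.push_back({fd, POLLIN, 0});
        for (;;) {
            if (poll(pfds.data(), pfds.size(), -1) <= 0) continue;
            for (pollfd &p : pfds) {
                if (p.revents & POLLIN) {
                    char buf[512];
                    ssize_t n = recv(p.fd, buf, sizeof buf, 0);  // fd is non-blocking
                    if (n > 0) { /* process the request in buf */ }
                }
            }
        }
    }

    // During setup: split the accepted sockets into groups of ~10 and start
    // one std::thread(worker, group) per group.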
Take a look at Comer's "Internetworking with TCP/IP" volume 3 (BSD sockets version); it has detailed examples for different ways of writing servers and clients. The full code (sans explanations, unfortunately) is on the web. Or rummage around in http://tldp.org, where you'll find a collection of tutorials.
select or poll or epoll
These are facilities on *nix systems for aggregating multiple event sources (connections) into a single waiting point. The server adds the connections to a data structure, and then waits by calling select etc. It gets woken up when something happens on any of these connections, figures out which one, handles it, and then goes back to sleep. See the manual pages for details.
There are several higher level libraries built on top of these mechanisms, that make programming them somewhat easier e.g. libevent, libev etc.
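A bare-bones select()-based loop, to make the shape concrete (listen_fd is assumed to be an already bound, listening socket; error handling is minimal):

    #include <algorithm>
    #include <set>
    #include <sys/select.h>
    #include <sys/socket.h>
    #include <unistd.h>

    void serve(int listen_fd) {
        std::set<int> clients;
        for (;;) {
            fd_set readfds;
            FD_ZERO(&readfds);
            FD_SET(listen_fd, &readfds);
            int maxfd = listen_fd;
            for (int fd : clients) { FD_SET(fd, &readfds); maxfd = std::max(maxfd, fd); }

            // Sleep until any registered descriptor becomes readable.
            if (select(maxfd + 1, &readfds, nullptr, nullptr, nullptr) < 0)
                break;

            if (FD_ISSET(listen_fd, &readfds))              // new connection pending
                clients.insert(accept(listen_fd, nullptr, nullptr));

            for (auto it = clients.begin(); it != clients.end(); ) {
                if (FD_ISSET(*it, &readfds)) {
                    char buf[512];
                    ssize_t n = read(*it, buf, sizeof buf);
                    if (n <= 0) { close(*it); it = clients.erase(it); continue; }
                    // handle the n bytes in buf here
                }
                ++it;
            }
        }
    }

libevent and libev wrap essentially this loop (using epoll or kqueue where available) behind a callback interface.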

POCO raise event on TCPServer connected threads

I'm new to the Poco framework and not too good with C++, but I am learning. I have to create a server-client based application in Windows.
The problem I have now is that I need to send some data to the clients repeatedly, every minute. I need to do this for the clients that have an active TCP connection with the server. I don't know how I can create an event, or something that is triggered in a thread and prompts all the active threads to send data to the clients.
My first idea is that I have to rewrite, or extend, the TCPServerDispatcher class, but I don't know how I can identify the active threads in the ThreadPool.
Do you have any ideas, or maybe suggestions, or a tutorial, something?
I can't figure out how to do it...
Hope somebody can give me an idea, or some code example. Thank you.
Can these server<->client threads not obtain the data for themselves? It would be fairly easy to add a 60-second timeout on a read() in each thread and send the data then. Maybe this would involve too many database connections?
Failing that, can you put the latest data in a lockable object and have the threads just lock, write and unlock the latest data on a timeout? Such a solution should really have a write timeout as well, to prevent a badly-behaved client causing its server thread to block while holding the lock. If it's not too large, I suppose the server<->client thread could make a copy of the data to send, but I'm not a great fan of copying, TBH.
There are more complex ways of signaling the server<->client threads that new data is available. It is quite possible to signal each thread that new data is available and have them act upon it 'immediately'. This usually means the server<->client thread waiting on more than one signal. In general, the lower the latency, the more complex the solution :(
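A minimal sketch of the "lockable latest data" idea (the class and member names are made up; the per-connection receive timeout would be set with Poco::Net::Socket::setReceiveTimeout or the plain socket option):

    #include <mutex>
    #include <string>

    // Shared holder for the most recent data. Each server<->client thread
    // locks it only long enough to take a small copy, then sends that copy.
    class LatestData {
    public:
        void update(std::string d) {
            std::lock_guard<std::mutex> lk(m_);
            data_ = std::move(d);
        }
        std::string snapshot() const {
            std::lock_guard<std::mutex> lk(m_);
            return data_;            // copy, so the lock is never held while sending
        }
    private:
        mutable std::mutex m_;
        std::string data_;
    };

    // In each connection thread: set a 60-second receive timeout on the socket;
    // when the read times out, call latestData.snapshot() and send the result.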
Rgds,
Martin

Writing multithreaded TCP server on Linux

At work I have been tasked with implementing a TCP server as part of a Modbus slave device. I have done a lot of reading, both here on Stack Exchange and on the internet in general (including the excellent http://beej.us/guide/bgnet/), but I am struggling with a design issue. In summary, my device can accept just 2 connections, and on each connection will be incoming Modbus requests which I must process in my main controller loop and then reply to with a success or failure status. I have the following ideas of how to implement this.
1. Have a listener thread that creates, binds, listens and accepts connections, then spawns a new pthread to listen on the connection for incoming data and closes the connection after an idle timeout period. If the number of active threads is currently 2, new connections are instantly closed to ensure only 2 are allowed.
2. Do not spawn new threads from the listener thread; instead use select() to detect incoming connection requests as well as incoming Modbus requests on active connections (similar to the approach in Beej's guide).
3. Create 2 listener threads, each of which creates a socket (same IP and port number) that can block on accept() calls, then close the socket fd and deal with the connection. Here I am (perhaps naively) assuming that this will only allow a maximum of 2 connections, which I can deal with using blocking reads.
I have been using C++ for a long time but I am fairly new to Linux development. I would really welcome any suggestions as to which of the above approaches is best (if any), and whether my inexperience with Linux means that any of them are really bad ideas. I am keen to avoid fork() and stick to pthreads, as incoming Modbus requests are going to be queued and read off a main controller loop periodically. Thanks in advance for any advice.
The third alternative won't work, you can only bind to the local address once.
I would probably use your second alternative, unless you need to do a lot of processing, in which case a combination of the first two alternatives might be useful.
The combination of the first two alternatives I'm thinking of is to have the main thread (the one you always have when a program starts) create two worker threads, then do a blocking accept call to wait for a new connection. When a new connection arrives, tell one of the threads to start working on the new connection and go back to block on accept. When the second connection is accepted, you tell the other thread to work on that connection. If both connections are open already, either don't accept until one connection is closed, or wait for new connections but close them immediately.
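One way that hand-off could look, sketched with std::thread and a condition variable per worker (the Slot structure and all names are invented for illustration; listen_fd is an already listening socket):

    #include <condition_variable>
    #include <functional>
    #include <mutex>
    #include <sys/socket.h>
    #include <thread>
    #include <unistd.h>

    // Hand-off slot: the accept loop deposits a connected fd, one worker takes it.
    struct Slot {
        std::mutex m;
        std::condition_variable cv;
        int fd = -1;                  // -1 means "this worker is idle"
    };

    void worker(Slot &slot) {
        for (;;) {
            std::unique_lock<std::mutex> lk(slot.m);
            slot.cv.wait(lk, [&] { return slot.fd != -1; });
            int fd = slot.fd;
            lk.unlock();
            // ... blocking read/write loop handling Modbus requests on fd ...
            close(fd);
            lk.lock();
            slot.fd = -1;             // mark ourselves idle again
        }
    }

    void accept_loop(int listen_fd) {
        Slot slots[2];
        std::thread t1(worker, std::ref(slots[0]));
        std::thread t2(worker, std::ref(slots[1]));
        for (;;) {
            int fd = accept(listen_fd, nullptr, nullptr);
            bool handed_off = false;
            for (Slot &s : slots) {
                std::lock_guard<std::mutex> lk(s.m);
                if (s.fd == -1) { s.fd = fd; s.cv.notify_one(); handed_off = true; break; }
            }
            if (!handed_off) close(fd);   // both workers busy: reject immediately
        }
    }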
All of the design options you propose are not very object oriented, and they're all geared more towards C than C++. If your work allows you to use Boost, then the Boost.Asio library is fantastic for making simple (and complex) socket servers. You could take nearly any of their examples and trivially extend it to only allow 2 active connections, closing all others as soon as they are opened.
Off the top of my head, their simple HTTP server could be modified to do this by keeping a static counter in the connection class (inc in the constructor, dec in the destructor), and when a new one is created check the count and decide whether to close the connection. The connection class could also gain a boost::asio::deadline_timer to keep track of timeouts.
This would most closely resemble your first design choice, boost could do this in 1 thread and in the background does something similar to select() (usually epoll()). But this is the "C++ way", and in my opinion using select() and raw pthreads is the C way.
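A condensed sketch of that idea with Boost.Asio (assuming a reasonably recent Boost with io_context and the move-accept overload; the connection counter and class are simplified stand-ins for the HTTP server example mentioned above):

    #include <boost/asio.hpp>
    #include <memory>

    using boost::asio::ip::tcp;

    static int g_live_connections = 0;   // inc in constructor, dec in destructor

    class Connection : public std::enable_shared_from_this<Connection> {
    public:
        explicit Connection(tcp::socket s) : socket_(std::move(s)) { ++g_live_connections; }
        ~Connection() { --g_live_connections; }
        void start() {
            // Chain async_read_some()/async_write() here; each handler captures
            // shared_from_this() so the object (and the count) stays alive.
        }
    private:
        tcp::socket socket_;
    };

    void do_accept(tcp::acceptor &acceptor) {
        acceptor.async_accept([&acceptor](boost::system::error_code ec, tcp::socket s) {
            if (!ec && g_live_connections < 2)
                std::make_shared<Connection>(std::move(s))->start();
            // otherwise `s` is destroyed here, which closes the extra connection
            do_accept(acceptor);   // keep accepting
        });
    }

    int main() {
        boost::asio::io_context io;
        tcp::acceptor acceptor(io, tcp::endpoint(tcp::v4(), 502));  // 502 = Modbus TCP
        do_accept(acceptor);
        io.run();   // one thread; Asio uses epoll() internally on Linux
    }

A boost::asio::deadline_timer (or steady_timer) member in Connection would cover the idle-timeout requirement.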
Since you are only dealing with 2 connections, thread per connection is perfect for this kind of application. Object oriented approaches using non-blocking or asynchronous I/O would be better if you needed to scale up to thousands of connections. 2 listener threads make sense; you don't need to close the accept fd, just come back to accept on it when the connection is completed. In fact, a variation is to have three threads blocked doing accept. If two of the threads are actively handling connections, then the third resets the newly created connection (or returns a busy response, whatever is appropriate for your device).
To have all three threads block on accept, you need to have the main thread create and bind your socket before the three threads launch to do their accept/handle processing.
The man page for pthreads on Linux indicates that accept is thread-safe. (The section under thread-safe functions lists the functions that are not thread-safe, go figure.)
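A sketch of that layout (the main thread binds once, then three identical threads block in accept() on the shared listening socket; the busy handling from the variation above is left as a comment):

    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <thread>
    #include <unistd.h>

    static int listen_fd = -1;

    // Each thread blocks in accept() on the same listening socket; the kernel
    // wakes exactly one thread per incoming connection.
    void acceptor_thread() {
        for (;;) {
            int fd = accept(listen_fd, nullptr, nullptr);
            if (fd < 0) continue;
            // If this is the "spare" third connection, send a busy/reject
            // response here; otherwise run the blocking Modbus request loop.
            close(fd);
        }
    }

    int main() {
        listen_fd = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = INADDR_ANY;
        addr.sin_port = htons(502);            // standard Modbus TCP port
        bind(listen_fd, (sockaddr *)&addr, sizeof(addr));
        listen(listen_fd, 2);

        std::thread t1(acceptor_thread), t2(acceptor_thread), t3(acceptor_thread);
        t1.join(); t2.join(); t3.join();
    }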

EvtSubscribe and threading

I am trying to write a log forwarder for Windows. The plan is simple: receive an event notification and then write it over a TCP socket. This MSDN example shows that I should be using EvtSubscribe. However, I am confused as to how I should share the file descriptor for the open TCP socket. Will the EvtSubscribe callback block by default, or will it thread, or...?
Thank you in advance for any tips, picking up C++ on Windows after C on Linux has been a bit of a challenge for me :)
The docs are quite sparse on details, but I reckon it works as follows:
If you use the subscription callback, it will be called in a dedicated thread. So if you delay in it, you will block further callbacks, but not other threads of the program.
If you use the SignalEvent, it will get signaled when the event arrives, and no threads are created automatically.
You can check that it really is another thread by calling GetCurrentThreadId() from the code that calls EvtSubscribe() and from the callback, and comparing the values.
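A minimal push-subscription sketch showing that thread-ID check (the channel, query and flags are just example values; link against wevtapi.lib):

    #include <windows.h>
    #include <winevt.h>
    #include <stdio.h>
    #pragma comment(lib, "wevtapi.lib")

    // Push-subscription callback: runs on a worker thread owned by the event
    // log infrastructure, not on the thread that called EvtSubscribe().
    DWORD WINAPI OnEvent(EVT_SUBSCRIBE_NOTIFY_ACTION action, PVOID /*context*/, EVT_HANDLE evt) {
        if (action == EvtSubscribeActionDeliver) {
            printf("event delivered on thread %lu\n", GetCurrentThreadId());
            // Render `evt` with EvtRender() and hand the resulting string to the
            // thread that owns the TCP socket (e.g. via a queue), rather than
            // writing to the socket from several callbacks at once.
        }
        return ERROR_SUCCESS;
    }

    int main() {
        printf("main thread %lu\n", GetCurrentThreadId());
        EVT_HANDLE sub = EvtSubscribe(nullptr, nullptr, L"Application", L"*",
                                      nullptr, nullptr, OnEvent,
                                      EvtSubscribeToFutureEvents);
        if (!sub) return 1;
        Sleep(INFINITE);          // keep the process alive; callbacks keep firing
        EvtClose(sub);
        return 0;
    }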
My recommendation is to use the threaded (callback) option, as event handles in Windows are quite difficult to program correctly.
About sharing the TCP socket: you can share a socket between threads, but you should not write to it from more than one thread at a time, nor read from more than one thread at a time.
You can, however, read from one thread and write from another. Also, you can close the socket from one thread while other is in a blocking operation: it will get cancelled.
If you find this limiting, you should create a user thread and use it to send and/or receive data, while communicating with the other threads with queues, or similar.

Controlling the work of worker threads via the main thread

Hey, I am not sure if this has already been asked that way (I didn't find answers to this specific question, at least). But:
I have a program which, at startup, creates a login window in a new UI thread.
In this window the user can enter data which has to be verified by a server.
Because the window shall still be responsive to the user's actions, it (of course, it's only a UI thread) shall not handle the transmission and evaluation in its own thread.
I want the UI thread to delegate this work back to the main thread.
In addition: the main thread (my "client" thread) shall manage all actions that go on, like logging in, handling received messages from the server, etc. (not window messages).
But I am not sure of how to do this:
1.) Shall I let the UI thread queue an APC to the main thread (but then the main thread does not know about the stuff going on)?
2.) May I better use event objects to be waited on, and queues to transmit the data from one thread to another?...
Or are there way better options?
For example: I start the client:
1. The client loads data from a file and does some initialization.
2. The client creates a window in a new thread which handles login data input from the user.
3. The window thread shall notify the client and hand the login data that has been entered by the user over to it.
4. The client shall now pack the data and delegate the sending work to another object (e.g. CSingleConnection) which handles sending the data over the network (of course this does not require a new thread, because it can be handled with overlapped I/O)...
5. One special receiver thread receives the data from the server and hands it back to the client, which, in turn, evaluates the data.
6. If the data was correct and some special stuff was received from the server, the main thread shall signal the UI thread to close the window and terminate...
7. The client then creates a new window, which will handle the chatting UI.
The chatting UI thread and the Client thread shall communicate to handle messages to be sent and received...
(Hope this helps to get what I am trying to do)...
It all depends on what you are prepared to use. If you are developing with Qt, their signals and slots are just the thing for such communication. They also supply a network library, so you could easily omit the receiver thread, because their network classes do asynchronous communication and will send a signal when you have data, which means your thread does not need to be blocked in the meantime.
If you don't want to use Qt, Boost also supplies thread-safe signals and slots, but as far as I understand it, their slots will be run in the context of the calling thread...
Anyway, I have used Qt signals and slots with great satisfaction for exactly this purpose. I wholeheartedly agree that GUIs shouldn't freeze, ever.
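For completeness, a small sketch of the cross-thread signal/slot pattern in Qt (assuming a Qt 5 project where this file is main.cpp and is processed by moc; the worker class and the hard-coded credentials are made up):

    #include <QCoreApplication>
    #include <QDebug>
    #include <QObject>
    #include <QString>
    #include <QThread>

    // Worker object that lives in a background thread and "verifies" login data.
    class LoginWorker : public QObject {
        Q_OBJECT
    public slots:
        void verify(const QString &user) {
            // ... send the credentials to the server and wait for the reply ...
            emit verified(user == QStringLiteral("admin"));
        }
    signals:
        void verified(bool ok);
    };

    #include "main.moc"

    int main(int argc, char **argv) {
        QCoreApplication app(argc, argv);

        QThread workerThread;
        LoginWorker worker;
        worker.moveToThread(&workerThread);

        // Cross-thread connections are queued automatically: the handler runs in
        // the receiver's thread, so the GUI/main thread is never blocked.
        QObject::connect(&worker, &LoginWorker::verified, &app,
                         [&app](bool ok) { qDebug() << "login ok?" << ok; app.quit(); });
        QObject::connect(&workerThread, &QThread::started, &worker,
                         [&worker] { worker.verify(QStringLiteral("admin")); });

        workerThread.start();
        int rc = app.exec();
        workerThread.quit();
        workerThread.wait();
        return rc;
    }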
I don't know whether this is good style or not (answering your own question):
But I think I will go with event objects and two queues (one for communication between the client and the connection, and one for communication between the client and the UI)...