The native C socket API returns a new socket descriptor from accept(), bound to one particular remote peer. That's good, because I can create a thread, hand it the socket, and establish a point-to-point, or better a thread-to-thread, connection over the internet. And that's exactly what I want: each client thread should be connected to its own dedicated thread on the server. Hence I don't need a worker pool, load balancing, or even asynchronous operation; the server threads keep per-connection history. ZeroMQ looks great, but as far as I understand it does not split up sockets on accept.
Is there a way to establish such a synchronous thread-to-thread connection with ZeroMQ?
You're asking how to carry a particular solution (handing off a socket to a thread) over to a broader problem (how to write scalable servers).
The 'one thread per socket' design only fits one pattern, request-reply, e.g. HTTP, whereas the really high-volume use cases are data distribution (publish-subscribe) or task distribution (pipeline). Neither fits a 1-to-1 model.
It is a common error, when you learn a new tool, to ask "how does this tool do what my old tools do", but you won't get good results that way. Instead, take the time to actually learn how the tool works, and then use that knowledge to re-think your problems and the best solutions for them.
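For illustration, here is a minimal sketch of the request-reply pattern using the plain libzmq C API (the tcp://*:5555 endpoint, the buffer size and the "ack" reply are arbitrary, and error handling is omitted). A single REP socket serves any number of clients: ZeroMQ fair-queues their requests and routes each reply back to the right peer, with no per-client accept() and no dedicated thread per connection.

#include <zmq.h>

int main()
{
    void *ctx = zmq_ctx_new();
    void *rep = zmq_socket(ctx, ZMQ_REP);
    zmq_bind(rep, "tcp://*:5555");                   // one socket, any number of clients

    while (1) {
        char buf[256];
        int n = zmq_recv(rep, buf, sizeof(buf), 0);  // next request, from whichever client
        if (n < 0)
            break;
        zmq_send(rep, "ack", 3, 0);                  // reply goes back to that same client
    }

    zmq_close(rep);
    zmq_ctx_destroy(ctx);
    return 0;
}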
I thought ZeroMQ handled these multiple connections for you. I prefer to create thread-to-thread communication by handling each connection inside a thread callback function; that means my main ZeroMQ connection is created in a separate thread, which gives each thread its own connection to control.
The program is a client-server socket application being developed in C on Linux. There is a remote server to which each client connects and logs itself as being online. There will most likely be several clients online at any given time, all trying to connect to the server to log themselves as online/busy/idle etc. How can the server handle these concurrent requests, and what's a good design approach (forking or multithreading for each connection request, maybe)?
Personally I would use the event-driven approach for servers: you register a callback that is called as soon as a connection arrives, and further event callbacks whenever a socket is ready to read or write.
With a huge number of connections you get a great performance and resource benefit compared to threads, but I would also prefer this approach for a smaller number of connections.
I would only use threads if you really need to use multiple cores, or if you have requests that could take longer to process and that are too complicated to handle without threads.
I use libev as the base library to handle event-driven networking.
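For example, the registration step with libev might look roughly like this (a sketch only: the port is arbitrary, error handling is omitted, and a real server would start another watcher on the accepted connection instead of closing it):

#include <ev.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

// Called by the event loop whenever the listening socket is readable,
// i.e. a new connection has arrived.
static void accept_cb(struct ev_loop *loop, ev_io *w, int revents)
{
    int client_fd = accept(w->fd, NULL, NULL);
    // A real server would register another ev_io watcher on client_fd here
    // for read/write readiness; this sketch just closes it.
    close(client_fd);
}

int main()
{
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(12345);                    // example port
    bind(listen_fd, (sockaddr *)&addr, sizeof(addr));
    listen(listen_fd, SOMAXCONN);

    struct ev_loop *loop = EV_DEFAULT;
    ev_io accept_watcher;
    ev_io_init(&accept_watcher, accept_cb, listen_fd, EV_READ);
    ev_io_start(loop, &accept_watcher);

    ev_run(loop, 0);                                 // one thread dispatching all callbacks
    return 0;
}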
Generally speaking, you want a thread pool to service requests.
A typical structure will start with a single thread that does nothing but queue up incoming requests. Since it doesn't do very much, it's typically pretty easy for one thread to keep up with the maximum speed of the network.
That thread puts the items into some sort of concurrent queue. Then you have a pool of other threads reading items from the queue, doing what's needed, then depositing the result in another queue (and repeating, and repeating, until the server shuts down).
Finally, you have another single thread that just takes items from the result queue, and sends replies out to the clients.
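A bare-bones sketch of that structure in standard C++ (the queue and the Request/Result types are illustrative placeholders, the worker count is arbitrary, and the network-facing reader and the actual socket I/O are only indicated by comments):

#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

// A minimal thread-safe queue.
template <typename T>
class ConcurrentQueue {
public:
    void push(T item) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(item)); }
        cv_.notify_one();
    }
    T pop() {                                    // blocks until an item is available
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !q_.empty(); });
        T item = std::move(q_.front());
        q_.pop();
        return item;
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<T> q_;
};

struct Request { int client_fd; std::string payload; };
struct Result  { int client_fd; std::string reply; };

int main()
{
    ConcurrentQueue<Request> requests;
    ConcurrentQueue<Result> results;

    // Worker pool: take a request, do the work, deposit the result.
    std::vector<std::thread> workers;
    for (int i = 0; i < 4; ++i)
        workers.emplace_back([&] {
            for (;;) {
                Request r = requests.pop();
                results.push(Result{r.client_fd, "processed: " + r.payload});
            }
        });

    // Reply thread: take results and send them back to the clients.
    std::thread replier([&] {
        for (;;) {
            Result res = results.pop();
            // write res.reply to res.client_fd here
        }
    });

    // The single network-facing thread would sit here, reading requests from
    // the sockets and calling requests.push(...) for each one.

    for (auto &w : workers) w.join();
    replier.join();
    return 0;
}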
The best approach is a combination of the event-driven model and the multithreaded model.
You create a bunch of non-blocking sockets, but the thread count should be much smaller than the socket count, e.g. 10 sockets per thread.
Then on every thread you listen for events (incoming requests) in non-blocking mode and process them as they happen.
This technique usually performs better than non-blocking sockets or the multithreaded model on their own.
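Roughly, each worker thread might look like this (a sketch that assumes the connections have already been accepted, made non-blocking and divided among the threads; the accept path, thread count and error handling are all placeholders):

#include <poll.h>
#include <unistd.h>
#include <thread>
#include <vector>

// Each worker owns a small set of sockets and polls them for incoming data.
static void worker(std::vector<int> fds)
{
    std::vector<pollfd> pfds;
    for (int fd : fds)
        pfds.push_back(pollfd{fd, POLLIN, 0});

    for (;;) {
        if (poll(pfds.data(), pfds.size(), -1) <= 0)     // sleep until one of them has data
            continue;
        for (pollfd &p : pfds) {
            if (p.revents & POLLIN) {
                char buf[4096];
                ssize_t n = read(p.fd, buf, sizeof(buf));
                // ... process the request in buf[0..n) and write the reply ...
                (void)n;
            }
        }
    }
}

int main()
{
    // Suppose an accept loop has handed roughly 10 connected fds to each thread.
    std::vector<std::vector<int>> per_thread_fds(4);
    std::vector<std::thread> threads;
    for (auto &fds : per_thread_fds)
        threads.emplace_back(worker, fds);
    for (auto &t : threads)
        t.join();
    return 0;
}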
Take a look at Comer's "Internetworking with TCP/IP", volume 3 (BSD sockets version); it has detailed examples of the different ways of writing servers and clients. The full code (sans explanations, unfortunately) is on the web. Or rummage around on http://tldp.org, where you'll find a collection of tutorials.
select or poll or epoll
These are facilities on *nix systems for aggregating multiple event sources (connections) into a single waiting point. The server adds the connections to a data structure and then waits by calling select() etc. It gets woken up when something happens on any of these connections, figures out which one, handles it, and then goes back to sleep. See the man pages for details.
There are several higher-level libraries built on top of these mechanisms that make programming them somewhat easier, e.g. libevent, libev, etc.
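For reference, the raw select() version of that loop looks roughly like this (a single-threaded sketch that only watches for readability; writes, error handling and FD_SETSIZE limits are ignored, and listen_fd is assumed to be already bound and listening):

#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>
#include <algorithm>
#include <set>

void serve(int listen_fd)
{
    std::set<int> clients;
    for (;;) {
        // Build the set of fds we want to sleep on: the listener plus every client.
        fd_set readfds;
        FD_ZERO(&readfds);
        FD_SET(listen_fd, &readfds);
        int maxfd = listen_fd;
        for (int fd : clients) {
            FD_SET(fd, &readfds);
            maxfd = std::max(maxfd, fd);
        }

        if (select(maxfd + 1, &readfds, NULL, NULL, NULL) <= 0)   // sleep until something happens
            continue;

        if (FD_ISSET(listen_fd, &readfds)) {                      // a new connection arrived
            int c = accept(listen_fd, NULL, NULL);
            if (c >= 0)
                clients.insert(c);
        }

        for (auto it = clients.begin(); it != clients.end(); ) {  // data (or EOF) on a client
            if (FD_ISSET(*it, &readfds)) {
                char buf[4096];
                ssize_t n = read(*it, buf, sizeof(buf));
                if (n <= 0) { close(*it); it = clients.erase(it); continue; }
                // ... handle buf[0..n) ...
            }
            ++it;
        }
    }
}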
Looking for the best approach to sending the same message to multiple destinations using TCP/IP sockets. I'm working with an existing VS 2010 C++ application on Windows. Hoping to use a standard library/design pattern approach that has many of the complexities already worked out if possible.
Here's one approach I'm thinking about: one main thread retrieves messages from a database and adds them to some sort of thread-safe queue. The application also has one thread for each client socket connection to a destination server. Each of these threads would read from the thread-safe queue and send the message over a TCP/IP socket.
There may be better/simpler/more robust approaches than this one, though.
The issue I mostly have to be concerned about is latency. The destinations could be anywhere, and there may be significant latency between one socket connection and another.
The messages must go in an exact FIFO order to all the destinations.
Also, one destination will be considered the primary destination: all messages must get to this destination, no exceptions. For the other, non-primary destinations, the messages are just copies, and it is not absolutely critical if they miss a few messages. At any point, one of the non-primary destinations could become the primary destination. If one of the destinations falls too far behind, then that thread would need to catch up to the primary destination by skipping some messages.
Looking for any suggestions. From my preliminary research so far, my situation appears to be something akin to the single-producer, multiple-consumer pattern, or possibly the master-worker pattern in Java.
I need to implement this in C++ on Windows, and the application must use tcp/ip sockets using an existing defined protocol.
Any help at all would be greatly appreciated.
You need exactly two threads, one that saturates the IO channel to the database and another that saturates the IO channel to the network leading to the 12 servers. Unless you have multiple network interfaces (which you should think about!) you don't send things faster by using multiple threads. Also, since you don't have multiple threads taking care of the network, you don't have to sync them.
What you definitely need to know about is select(). In the case of WinSock, also take a look at WSAEventSelect/WaitForMultipleObjects. Basically, you take a message from the queue and then send it to all clients when they're ready. select() tells you when one of a set of sockets is ready to accept data, so you don't waste time waiting or block trying to send. What you need to come up with is a scheme for reconnecting after broken connections, deciding when to drop messages to lagging clients, etc. Also, if the throughput to the different targets varies a lot, you need to think about handling multiple messages in parallel. If they are small (less than a network packet's payload), it makes sense to combine them anyway to avoid overhead.
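To make that concrete, a rough sketch of the fan-out step (POSIX headers shown for brevity; with WinSock the same select()/send() calls are available via winsock2.h; the function and variable names are placeholders, and reconnection, partial sends and the catch-up/skip bookkeeping are left out):

#include <sys/select.h>
#include <sys/socket.h>
#include <string>
#include <vector>

// Send one queued message to every destination that is currently ready to accept data.
// 'destinations' are connected sockets; the primary destination would be tracked
// separately so that sends to it are never skipped.
void fan_out(const std::string &msg, const std::vector<int> &destinations)
{
    fd_set writefds;
    FD_ZERO(&writefds);
    int maxfd = -1;
    for (int fd : destinations) {
        FD_SET(fd, &writefds);
        if (fd > maxfd) maxfd = fd;
    }

    timeval tv = {0, 0};                               // just poll, don't block here
    if (select(maxfd + 1, NULL, &writefds, NULL, &tv) <= 0)
        return;                                        // nobody ready (or error): try again later

    for (int fd : destinations)
        if (FD_ISSET(fd, &writefds))
            send(fd, msg.data(), msg.size(), 0);       // lagging destinations are skipped this round
}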
I hope this short overview helps get you started; otherwise I can elaborate on the details.
I'm new to the Poco framework and not too good with C++, but I am learning. I have to create a server-client application on Windows.
The problem I have now is that I need to send some data to the clients repeatedly, once every minute. I need to do this for the clients that have an active TCP connection with the server. I don't know how to create an event, or something similar, that is triggered in one thread and prompts all the active threads to send data to their clients.
My first idea is that I have to rewrite, or extend, the TCPServerDispatcher class, and I don't know how to identify the active threads in the ThreadPool.
Do you have any ideas, or maybe suggestions, or a tutorial, something?
I can't figure out how to do it...
Hope somebody can give me an idea, or some code example. Thank you.
Can these server<->client threads not obtain the data for themselves? It would be fairly easy to add a 60-second timeout on a read() in each thread and send the data then. Maybe this would involve too many database connections?
Failing that, can you put the latest data in a lockable object and have the threads just lock, write and unlock the latest data on a timeout? Such a solution should really have a write timeout as well, to prevent a badly-behaved client from causing its server thread to block while holding the lock. If the data is not too large, I suppose the server<->client thread could make a copy of it to send, but I'm not a great fan of copying, TBH.
There are more complex ways of signalling the server<->client threads that new data is available. It is quite possible to signal each thread that new data is available and have it act upon that 'immediately'. This usually means the server<->client thread waiting on more than one signal. In general, the lower the latency, the more complex the solution :(
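A rough sketch of that lock-on-timeout idea, using the copy-then-send variant, in plain standard C++ rather than the Poco classes (LatestData and send_to_client are placeholder names, and the producer thread that publishes new data is not shown):

#include <chrono>
#include <condition_variable>
#include <mutex>
#include <string>

struct LatestData {                         // shared between the producer and all client threads
    std::mutex m;
    std::condition_variable cv;
    std::string payload;
    unsigned version = 0;                   // bumped by the producer whenever new data arrives
};

// Each server<->client thread runs something like this alongside its normal read handling.
void client_sender_loop(LatestData &latest /*, Client &client */)
{
    unsigned last_sent = 0;
    for (;;) {
        std::string to_send;
        {
            std::unique_lock<std::mutex> lk(latest.m);
            // Wake when new data is published, or at the latest after 60 seconds.
            latest.cv.wait_for(lk, std::chrono::seconds(60),
                               [&] { return latest.version != last_sent; });
            last_sent = latest.version;
            to_send = latest.payload;       // copy while holding the lock...
        }
        // send_to_client(client, to_send); // ...and send without it, so a slow client
        //                                  // cannot stall the other threads.
    }
}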
Rgds,
Martin
At work I have been tasked with implementing a TCP server as part of a Modbus slave device. I have done a lot of reading, both here on Stack Exchange and on the internet in general (including the excellent http://beej.us/guide/bgnet/), but I am struggling with a design issue. In summary, my device can accept just 2 connections, and on each connection there will be incoming Modbus requests which I must process in my main controller loop and then answer with a success or failure status. I have the following ideas for how to implement this.
1. Have a listener thread that creates, binds, listens and accepts connections, then spawns a new pthread to listen on the connection for incoming data and closes the connection after an idle timeout. If the number of active threads is already 2, new connections are closed immediately to ensure only 2 are allowed.
2. Do not spawn new threads from the listener thread; instead use select() to detect incoming connection requests as well as incoming Modbus requests on the active connections (similar to the approach in Beej's guide).
3. Create 2 listener threads, each of which creates a socket (same IP and port number) that can block in accept() calls, then closes the socket fd and deals with the connection. Here I am (perhaps naively) assuming that this will only allow a maximum of 2 connections, which I can deal with using blocking reads.
I have been using C++ for a long time but I am fairly new to Linux development. I would really welcome any suggestions as to which of the above approaches is best (if any), and whether my inexperience with Linux means that any of them are really, really bad ideas. I am keen to avoid fork() and stick to pthreads, as incoming Modbus requests are going to be queued and read off by the main controller loop periodically. Thanks in advance for any advice.
The third alternative won't work: you can only bind to the local address once.
I would probably use your second alternative, unless you need to do a lot of processing, in which case a combination of the first two alternatives might be useful.
The combination of the first two alternatives I have in mind is to have the main thread (the one you always have when a program starts) create two worker threads, then do a blocking accept call to wait for a new connection. When a new connection arrives, tell one of the threads to start working on it and go back to block in accept. When the second connection is accepted, tell the other thread to work on that connection. If both connections are already open, either don't accept until one of them is closed, or accept new connections but close them immediately.
None of the designs you propose is very object-oriented, and they are all geared more towards C than C++. If your work allows you to use Boost, then the Boost.Asio library is fantastic for building simple (and complex) socket servers. You could take nearly any of their examples and trivially extend it to allow only 2 active connections, closing all others as soon as they are opened.
Off the top of my head, their simple HTTP server could be modified to do this by keeping a static counter in the connection class (incremented in the constructor, decremented in the destructor); when a new connection is created, check the count and decide whether to close it. The connection class could also gain a boost::asio::deadline_timer to keep track of timeouts.
This would most closely resemble your first design choice. Boost can do this in one thread, and in the background it does something similar to select() (usually epoll()). But this is the "C++ way"; in my opinion, using select() and raw pthreads is the C way.
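A compressed sketch of that idea (modern Boost.Asio naming and C++17 assumed; the static counter and the immediate close are the only additions over a stock asynchronous example, and the deadline_timer/timeout handling is left out):

#include <boost/asio.hpp>
#include <memory>

using boost::asio::ip::tcp;

class Connection : public std::enable_shared_from_this<Connection> {
public:
    static inline int active = 0;                     // how many connections are live
    explicit Connection(tcp::socket s) : socket_(std::move(s)) { ++active; }
    ~Connection() { --active; }
    void start() { /* async_read/async_write the Modbus frames here, keeping
                      shared_from_this() captured so the object stays alive */ }
private:
    tcp::socket socket_;
};

void do_accept(tcp::acceptor &acceptor)
{
    acceptor.async_accept([&acceptor](boost::system::error_code ec, tcp::socket socket) {
        if (!ec) {
            if (Connection::active >= 2)
                socket.close();                       // a third connection is refused immediately
            else
                std::make_shared<Connection>(std::move(socket))->start();
        }
        do_accept(acceptor);                          // keep listening
    });
}

int main()
{
    boost::asio::io_context io;
    tcp::acceptor acceptor(io, tcp::endpoint(tcp::v4(), 502));   // 502: the usual Modbus TCP port
    do_accept(acceptor);
    io.run();                                         // one thread drives all the callbacks
}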
Since you are only dealing with 2 connections, thread-per-connection is perfect for this kind of application. Object-oriented approaches using non-blocking or asynchronous I/O would be better if you needed to scale up to thousands of connections. 2 listener threads make sense; you don't need to close the listening fd, just come back and accept on it again when the connection is finished. In fact, a variation is to have three threads blocked in accept: if two of the threads are actively handling connections, the third resets the newly accepted connection (or returns a busy response, whatever is appropriate for your device).
To have all three threads block in accept, you need the main thread to create, bind and listen on the socket before the three threads launch to do their accept/handle processing.
The man page for pthreads on Linux indicates that accept is thread-safe. (The section under thread-safe functions lists the functions that are not thread-safe, go figure.)
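A sketch of that arrangement with POSIX sockets and std::thread (the Modbus handling and the busy/reject check for the spare thread are only indicated by comments, and port 502 is simply the conventional Modbus TCP port):

#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <thread>
#include <vector>

// All threads block in accept() on the same listening socket; the kernel hands
// each incoming connection to exactly one of them.
static void acceptor_loop(int listen_fd)
{
    for (;;) {
        int conn = accept(listen_fd, NULL, NULL);
        if (conn < 0)
            continue;
        // ... read Modbus requests from conn, queue them for the main controller
        //     loop, write back the replies, close on idle timeout; if this thread
        //     is the spare one, reject/reset the connection instead ...
        close(conn);
    }
}

int main()
{
    // The main thread creates, binds and listens *before* the worker threads start.
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(502);
    bind(listen_fd, (sockaddr *)&addr, sizeof(addr));
    listen(listen_fd, 2);

    std::vector<std::thread> acceptors;
    for (int i = 0; i < 3; ++i)                       // three acceptors, as in the variation above
        acceptors.emplace_back(acceptor_loop, listen_fd);
    for (auto &t : acceptors)
        t.join();
    return 0;
}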
Hi, I am working on an assignment writing a multithreaded client-server application.
What I have done so far is open a socket on a port and fork two threads for listening and writing to the client. But I need to connect two types of clients to the server and service them differently. My question is: what would be the best approach?
I handle connections in a class that has an infinite loop to accept them. Whenever a connection is accepted, the class creates two threads to read from and write to the client. Now if I want to handle another client of a different type, what should I do?
Do I need to open another port, or is it possible to serve both through the same port? Maybe if it is possible to identify the type of client on the socket, then I can handle the messages differently.
Or do you suggest something like this?
Fork two threads for the two types of client, each monitoring inbound connections on a different port.
When a connection is accepted, each thread spawns another two threads for listening and writing.
Please make a suggestion.
Perhaps you'll get a better answer from a Unix user, but I'll provide what I know.
Your server needs a thread that opens a 'listening' socket that waits for incoming connections. This thread can be the main thread for simplicity, but can be an alternate thread if you are concerned about UI interaction, for example (in Windows, this would be a concern, not sure about Unix). It sounds like you are at least this far.
When the 'listening' socket accepts a connection, you get a 'connected' socket that is connected to the 'client' socket. You would pass this 'connected' socket to a new thread that manages the reading from and writing to the 'connected' socket. Thus, one change I would suggest is managing the 'connected' socket in a single thread, not two separate threads (one for reading, one for writing) as you have done. Reading and writing against the same socket can be accomplished using the select() system call, as shown here.
When a new client connects, your 'listening' socket will provide a new 'connected' socket, which you will hand off to another thread. At this point, you have two threads - one that is managing the first connection and one that is managing the second connection. As far as the sockets are concerned, there is no distinction between the clients. You simply have two open connections, one to each of your two clients.
At this point, the question becomes what does it mean to "service them differently". If the clients are expected to interact with the server in unique ways, then this has to be determined somehow. The interactions could be determined based on the 'client' socket's IP address, which you can query, but this seems arbitrary and is subject to network changes. It could also be based on the initial block of data received from the 'client' socket which indicates the type of interaction required. In this case, the thread that is managing the 'connected' socket could read the socket for the expected type of interaction and then hand the socket off to a class object that manages that interaction type.
I hope this helps.
You can handle both the reading and the writing on a single client connection in one thread. The simplest solution based on multiple threads would be this:
// C++-like pseudo-code
while (server_running)
{
    client = server.accept();
    ClientHandlingThread* cth = CreateNewClientHandlingThread(client);
    cth->start();
}

class ClientHandlingThread
{
    void start()
    {
        std::string header = client->read_protocol_header();

        // Get a specific implementation of the ProtocolHandler abstract class
        // from a factory, which creates objects by inspecting the protocol header.
        ProtocolHandler* handler = ProtocolHandlerFactory.create(header);
        if (handler)
            handler->read_write(client);
        else
            log("unknown protocol");
    }
};
To scale better, you can use a thread pool, instead of spawning a new thread for each client. There are many free thread pool implementations for C++.
while (server_running)
{
    client = server.accept();
    thread_pool->submit(client);   // a pooled worker runs the same client-handling logic
}
The server could be improved further by using a framework that implements the reactor pattern; such frameworks use select or poll under the hood. You can use these functions directly, but for a production system it is better to use an existing reactor framework. ACE is one of the most widely known C++ toolkits for developing highly scalable concurrent applications.
Different protocols are generally serviced on different ports. However, you could service both types of clients over the same port by negotiating the protocol to be used. This can be as simple as the client sending either HELO or EHLO to request one or another kind of service.