I am developing a server app which accepts incoming connections from clients and creates a thread for every client that connects.
The problem is that I don't know where I should place WaitForMultipleObjects so that I can release the resources of those clients (threads) which have disconnected.
Here is the pseudo structure of how I'm doing this. In my main thread, I'm doing the following:
for (ClientId = 0; ClientId < NumOfClientsSupported; ClientId++){
    Wait_for_Incoming_Connection();                    // continuously listen for incoming connection requests from clients
    thread_handle_array[ClientId] = CreateThread(...); // create a thread for each client
    // do some other stuff
}
// Outside the loop
WaitForMultipleObjects(...); // would like to join unusable (disconnected client) threads here
Let's say I have 10 clients connected and one somehow disconnects (its thread returns), and now I want to free the resources it has used. For this I would call WaitForMultipleObjects(), but I cannot leave the loop because I am still listening for connections from the other clients.
Should I use another thread just for WaitForMultipleObjects()? Or is there a better way of doing this?
I would appreciate any help.
Related
I want to write a simple C++ chat server. Simplifying:
void clientThread(int sock){
    // receives data on the socket and sends it to all other clients'
    // sockets, which are held in a vector; when the received data < 0 the
    // thread finishes and the client is removed from the vector
}
Main loop:
vector<thread> th;
while(1){
    memset(&rcvAddr, 0, sizeof(sockaddr_in));
    addrLength = sizeof(rcvAddr);
    sock = accept(connectSocket, (struct sockaddr*)&rcvAddr, (socklen_t*)&addrLength);
    if(sock < 0)
        continue;
    cout << "client connected from: " << inet_ntoa(rcvAddr.sin_addr) << endl;
    mtx.lock();
    clientDescriptors.push_back(sock);
    mtx.unlock();
    th.push_back(thread(&server::clientThread, this, sock));
}
And I have a problem with the last line: this vector of threads grows constantly. Do you know a proper way to manage this? How should I spawn these threads? Are there any existing data structures, or something similar, for managing threads? I read about thread pools, but I don't think that solves this problem.
One (proper) design may be:
A thread pool which manages a connection queue
A listening thread which repeatedly accepts sockets
Pseudo code may be:
main:
    launch thread pool
    launch the listening thread
    block until the server is no longer needed
listening thread routine:
    while true
        accept a client socket
        build a task out of the client socket
        push the task into the connection queue
The task is the actual function / function object / object which does something meaningful with the socket, such as reading its content, writing a result back to the client, and closing the socket. A minimal sketch of this design is shown below.
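A minimal sketch of such a pool, assuming plain POSIX sockets and C++11 threads (ConnectionQueue, handle_client and the pool size of 4 are illustrative names, not from the answer above):

#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>
#include <unistd.h>

class ConnectionQueue {
public:
    void push(int sock) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(sock); }
        cv_.notify_one();
    }
    int pop() { // blocks until a client socket is available
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !q_.empty(); });
        int sock = q_.front();
        q_.pop();
        return sock;
    }
private:
    std::queue<int> q_;
    std::mutex m_;
    std::condition_variable cv_;
};

void handle_client(int sock) {
    // the "task": read the request, write the result to the client, ...
    close(sock); // ... and close the socket
}

void worker(ConnectionQueue& queue) {
    while (true)
        handle_client(queue.pop()); // take the next accepted socket from the queue
}

int main() {
    ConnectionQueue queue;
    std::vector<std::thread> pool;
    for (int i = 0; i < 4; ++i)       // launch thread pool
        pool.emplace_back([&queue] { worker(queue); });
    // listening thread routine (runs here or in its own thread):
    //     while (true) { int sock = accept(...); queue.push(sock); }
    for (auto& t : pool) t.join();    // block until the server is shut down
}

This keeps the accept loop trivially fast: it only accepts and pushes, while the pool threads do the per-client work.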
It is going to keep growing, because this is what you do - you create a thread for every connection. Occasionally threads exit, but you never get around to removing elements from this vector, since you are not joining the threads.
The best thing to do would be to join all joinable threads from the vector automatically, but to my ongoing dismay POSIX completely lacks this feature - you can only join one thread at a time. POSIX rather arrogantly states that if you believe you need this functionality, you probably need to rethink your application design - which I do not agree with. At any rate, one thing you can do is use a thread pool - it is going to help you. Another is to reap finished threads yourself, as sketched below.
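A rough sketch of the reaping approach with C++11 primitives (the Client struct, serve_client and the sweep function are illustrative names, not taken from your code):

#include <atomic>
#include <memory>
#include <thread>
#include <vector>

struct Client {
    std::thread th;
    std::atomic<bool> done{false};
};

std::vector<std::unique_ptr<Client>> clients;

// Called from the accept loop: join and erase every thread that has finished.
void reap_finished_clients() {
    for (auto it = clients.begin(); it != clients.end(); ) {
        if ((*it)->done) {
            (*it)->th.join();   // returns immediately, the thread is already done
            it = clients.erase(it);
        } else {
            ++it;
        }
    }
}

void serve_client(int sock, Client* self) {
    // ... receive data and forward it to the other clients until recv() < 0 ...
    self->done = true;          // mark the slot for the next sweep
}

// In the accept loop, instead of th.push_back(thread(...)):
//     std::unique_ptr<Client> c(new Client);
//     c->th = std::thread(serve_client, sock, c.get());
//     clients.push_back(std::move(c));
//     reap_finished_clients(); // run the sweep once per accepted connection

The vector then stays roughly as large as the number of currently connected clients instead of growing without bound.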
I'm writing a forking chat server in C++, with each incoming client being its own process. The server-client interactions are done through normal sockets, with ZeroMQ sockets handling message queuing and IPC. The basic problem is that when the server forks to accommodate the new client, the client's process has a copy of the context (which is what fork does, right?), so when it binds sockets with that context, none of the other clients are aware of them. Long story short: how do I get each client process to have the same context so that they can talk to each other through ZeroMQ?
I've looked at various ways to share the context between processes, and so far I've found only this one. The problem with that is 1) it uses a thread pool, and from what I understand of what's written there, only 5 threads are created; this server needs to support at least 256 clients and thus will have at least that many threads, and 2) it uses ZeroMQ both for talking to clients and for backend tasks; I'm limited to using ZeroMQ for the backend only.
I've looked at the ZeroMQ mailing list, and one message said that fork() is orthogonal to how ZeroMQ works. Does this mean that I can't share my context across forked child processes? If that's the case, how do I share the context across multiple processes while keeping in mind the requirement of supporting at least 256 clients and using ZeroMQ for the backend only?
EDIT: Cleared up the thread/process confusion. Sorry about that.
EDIT2: The reason I'm favoring forking over threads is that I'm used to having a main process that accepts an incoming socket connection and then forks, giving the new socket to the child. I'm not sure how to do that in a threaded fashion (not really well-practiced, but not totally out of my league).
EDIT3: So, starting to rewrite this with threads. I guess this is the only way?
EDIT4: For further clarification, incoming connections to the server can be either TCP or UDP and I have to handle which type it is when the client connects, so I can't use a ZeroMQ socket to listen in.
Context Sharing
The reason to share the ZMQ context in the example code from your link is that the server (main()) uses an inproc socket to communicate with the workers (worker_routine()). Inproc sockets cannot communicate with each other unless they are created from the same ZMQ context, even if they live in the same process. In your case, I think it's not necessary to share it, since no inproc sockets are supposed to be used. So your code might look like:
void *worker_routine (void *arg)
{
    // zmq::context_t *context = (zmq::context_t *) arg; // no longer necessary
    zmq::context_t context(1); // it's just fine to create a new context here
    zmq::socket_t socket (context, ZMQ_REP);
    // socket.connect ("inproc://workers"); // an inproc socket is useless here...
    socket.connect("ipc:///tmp/workers");   // ...we need a transport that can cross process boundaries
    // handling code omitted.
}
int main ()
{
    // omitted...
    // workers.bind ("inproc://workers"); // inproc socket is useless here.
    workers.bind("ipc:///tmp/workers");
    // Launch pool of worker processes
    for (int i = 0; i < 5; ++i) {
        if (fork() == 0) {
            // worker process runs here
            worker_routine(NULL);
            return 0;
        }
    }
    // Connect worker processes to the client process via a queue
    zmq::proxy (clients, workers, NULL);
    return 0;
}
Handling a Process per Request
And now to your requirement: one process per request. The last example code is only intended to illustrate the usage of zmq::proxy, which is provided to simplify server code using the ROUTER-DEALER pattern, so it can't fulfill your requirement by itself. You have to implement it manually, and it will look much like another example. The difference is that you need to invoke fork() when the frontend socket is readable and put the while loop into the child process.
if (items[0].revents & ZMQ_POLLIN) {
    if (fork() == 0) {
        // child process runs here
        while (1) {
            // forward frames here
        }
        // child process ends here
        return 0;
    }
}
Suggestion
In the end, I have to say that creating a process per request is too heavyweight unless your scenario is really special. Please use threads, or consider asynchronous I/O such as zmq::poll; a minimal polling sketch follows.
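For illustration, a single-process request loop built around zmq::poll might look roughly like this (the endpoint, the REP socket, and the old-style cppzmq calls are assumptions for the sketch, not taken from your code):

#include <cstring>
#include <zmq.hpp>

int main()
{
    zmq::context_t context(1);
    zmq::socket_t server(context, ZMQ_REP);
    server.bind("tcp://*:5555");                 // hypothetical endpoint

    zmq_pollitem_t items[] = {
        { static_cast<void*>(server), 0, ZMQ_POLLIN, 0 }
    };

    while (true) {
        zmq::poll(items, 1, 1000);               // wait up to 1 second for activity
        if (items[0].revents & ZMQ_POLLIN) {
            zmq::message_t request;
            server.recv(&request);               // read the request
            // ... process the request here, no fork() needed ...
            zmq::message_t reply(2);
            memcpy(reply.data(), "OK", 2);
            server.send(reply);                  // answer the client
        }
    }
}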
My question is about the usage of threads. I'm making an application that connects to a device over TCP/IP, using the boost::asio library. I have decided to use a read (listening) thread and a write thread for listening to and writing to the device, respectively. My confusion is whether the function that creates the socket handling the communication should also run in its own thread.
Thanks :)
In my client class, I create 2 worker threads to handle sending and receiving messages which are used for multiple connections to multiple servers. The thread that creates those 2 worker threads happens to be the user interface thread. This is what my code looks like:
// Create the resolver and query objects to resolve the host name in serverPath to an ip address.
boost::asio::ip::tcp::resolver resolver(*IOService);
boost::asio::ip::tcp::resolver::query query(serverPath, port);
boost::asio::ip::tcp::resolver::iterator EndpointIterator = resolver.resolve(query);
// Set up an SSL context.
boost::asio::ssl::context ctx(*IOService, boost::asio::ssl::context::tlsv1_client);
// Specify not to verify the server certificate right now.
ctx.set_verify_mode(boost::asio::ssl::context::verify_none);
// Init the socket object used to initially communicate with the server.
pSocket = new boost::asio::ssl::stream<boost::asio::ip::tcp::socket>(*IOService, ctx);
//
// The thread we are on now, is most likely the user interface thread. Create a thread to handle all incoming socket work messages.
if (!RcvThreadCreated)
{
    WorkerThreads.create_thread(boost::bind(&SSLSocket::RcvWorkerThread, this));
    RcvThreadCreated = true;
    WorkerThreads.create_thread(boost::bind(&SSLSocket::SendWorkerThread, this));
}
// Try to connect to the server. Note - add timeout logic at some point.
boost::asio::async_connect(pSocket->lowest_layer(), EndpointIterator,
boost::bind(&SSLSocket::HandleConnect, this, boost::asio::placeholders::error));
The worker threads handle all socket I/O. It depends on what you are doing, but the 2 worker threads for servicing the socket will need to be created from another thread. That other thread can be the user interface thread or the main thread if you want, since it will be returning pretty quickly. If you have multiple connections to servers or clients, then it is up to you to decide whether or not you want more than one set of threads to service them.
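The bodies of RcvWorkerThread and SendWorkerThread aren't shown above; with boost::asio a socket-servicing worker thread is usually little more than a call to io_service::run(), roughly like this (a sketch under that assumption, not the actual class code):

#include <boost/asio.hpp>

// Sketch only: run the io_service so that HandleConnect and the subsequent
// async_read/async_write completion handlers get dispatched on this thread.
void WorkerThreadBody(boost::asio::io_service& io)
{
    // Keep run() from returning while no async operation happens to be pending.
    boost::asio::io_service::work work(io);
    io.run();
}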
That depends on whether you want to read and write at the same time. In that case you would need one thread for reading and one for writing, but you would have to properly synchronize them in case the streams to and from the device have something to do with each other (which they probably do). However, talking to a device sounds to me like a task where you establish a connection, send a request, wait for and read the answer, send another request, wait for and read the next answer, and so on. In that case using just one thread is sufficient and makes your life a lot easier; a rough sketch of that single-threaded approach follows.
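For example, a single-threaded request/response exchange with boost::asio could look roughly like this (the host name, port and protocol strings are placeholders):

#include <boost/asio.hpp>
#include <iostream>
#include <string>

int main()
{
    boost::asio::io_service io;
    boost::asio::ip::tcp::resolver resolver(io);
    boost::asio::ip::tcp::socket socket(io);

    // Establish the connection once.
    boost::asio::connect(socket, resolver.resolve(
        boost::asio::ip::tcp::resolver::query("device.local", "5000")));

    // Send a request, then wait for and read the answer - all on one thread.
    std::string request = "STATUS\r\n";
    boost::asio::write(socket, boost::asio::buffer(request));

    boost::asio::streambuf response;
    boost::asio::read_until(socket, response, "\r\n");
    std::istream is(&response);
    std::string line;
    std::getline(is, line);
    std::cout << "device replied: " << line << std::endl;
}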
Hi, I am working on an assignment writing a multi-threaded client/server.
What I have done so far is open a socket on a port and fork two threads for listening and writing to the client. But I need to connect two types of clients to the server and service them differently. My question is: what would be my best approach?
I am handling connections in a class which has an infinite loop to accept connections. Whenever a connection is accepted, the class creates two threads to read from and write to the client. Now, if I want to handle another client of a different type, what should I do?
Do I need to open another port, or is it possible to service both through the same port? Maybe if it is possible to identify the type of client on the socket, then I can handle the messages differently.
Or do you suggest something like this?
Fork two threads for the two types of client and monitor inbound connections in each thread on a different port.
When a connection is accepted, each thread spawns another two threads for listening and writing.
Please make a suggestion.
Perhaps you'll get a better answer from a Unix user, but I'll provide what I know.
Your server needs a thread that opens a 'listening' socket that waits for incoming connections. This thread can be the main thread for simplicity, but it can be a separate thread if you are concerned about UI interaction, for example (in Windows this would be a concern; I'm not sure about Unix). It sounds like you are at least this far.
When the 'listening' socket accepts a connection, you get a 'connected' socket that is connected to the 'client' socket. You would pass this 'connected' socket to a new thread that manages the reading from and writing to the 'connected' socket. Thus, one change I would suggest is managing the 'connected' socket in a single thread, not two separate threads (one for reading, one for writing) as you have done. Reading and writing against the same socket can be accomplished using the select() system call, as shown here.
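A rough sketch of such a select() loop serving one 'connected' socket from a single thread (the echo-style buffering is just an example):

#include <string>
#include <sys/select.h>
#include <unistd.h>

void service_connection(int sock)
{
    std::string outbuf;                   // data queued for this client
    char inbuf[4096];

    for (;;) {
        fd_set readfds, writefds;
        FD_ZERO(&readfds);
        FD_ZERO(&writefds);
        FD_SET(sock, &readfds);
        if (!outbuf.empty())              // only ask for writability when there is data to send
            FD_SET(sock, &writefds);

        if (select(sock + 1, &readfds, &writefds, NULL, NULL) < 0)
            break;

        if (FD_ISSET(sock, &readfds)) {
            ssize_t n = read(sock, inbuf, sizeof(inbuf));
            if (n <= 0) break;            // client closed the connection or error
            outbuf.append(inbuf, n);      // here: simply echo it back
        }
        if (FD_ISSET(sock, &writefds)) {
            ssize_t n = write(sock, outbuf.data(), outbuf.size());
            if (n > 0) outbuf.erase(0, n);
        }
    }
    close(sock);
}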
When a new client connects, your 'listening' socket will provide a new 'connected' socket, which you will hand off to another thread. At this point, you have two threads - one that is managing the first connection and one that is managing the second connection. As far as the sockets are concerned, there is no distinction between the clients. You simply have two open connections, one to each of your two clients.
At this point, the question becomes what it means to "service them differently". If the clients are expected to interact with the server in unique ways, then this has to be determined somehow. The interaction could be determined by the 'client' socket's IP address, which you can query, but this seems arbitrary and is subject to network changes. It could also be based on an initial block of data received from the 'client' socket which indicates the type of interaction required. In that case, the thread that is managing the 'connected' socket could read the expected type of interaction from the socket and then hand the socket off to a class object that manages that interaction type.
I hope this helps.
You can handle the reads and writes for a single client connection in one thread. The simplest solution based on multiple threads would be this:
// C++ like pseudo-code
while (server_running)
{
    client = server.accept();
    ClientHandlingThread* cth = CreateNewClientHandlingThread(client);
    cth->start();
}
class ClientHandlingThread
{
    void start()
    {
        std::string header = client->read_protocol_header();
        // We get a specific implementation of the ProtocolHandler abstract class
        // from a factory, which creates objects by inspecting the protocol header info.
        ProtocolHandler* handler = ProtocolHandlerFactory.create(header);
        if (handler)
            handler->read_write(client);
        else
            log("unknown protocol");
    }
};
To scale better, you can use a thread pool, instead of spawning a new thread for each client. There are many free thread pool implementations for C++.
while (server_running)
{
    client = server.accept();
    thread_pool->submit(client); // a pool thread picks the client up and handles it
}
The server could be improved further by using a framework that implements the reactor pattern. Such frameworks use the select or poll functions under the hood. You can use these functions directly, but for a production system it is better to use an existing reactor framework. ACE is one of the most widely known C++ toolkits for developing highly scalable concurrent applications.
Different protocols are generally serviced on different ports. However, you could service both types of clients over the same port by negotiating the protocol to be used. This can be as simple as the client sending either HELO or EHLO to request one or another kind of service.
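A rough sketch of that negotiation on the server side (handle_type_a/handle_type_b and the fixed-size first read are illustrative only):

#include <string>
#include <unistd.h>

// Hypothetical handlers for the two kinds of client.
void handle_type_a(int sock) { /* ... */ close(sock); }
void handle_type_b(int sock) { /* ... */ close(sock); }

void dispatch_client(int sock)
{
    char buf[16] = {0};
    ssize_t n = read(sock, buf, sizeof(buf) - 1);      // first message from the client
    std::string greeting(buf, n > 0 ? static_cast<size_t>(n) : 0);

    if (greeting.compare(0, 4, "HELO") == 0)
        handle_type_a(sock);              // first kind of service
    else if (greeting.compare(0, 4, "EHLO") == 0)
        handle_type_b(sock);              // second kind of service
    else
        close(sock);                      // unknown greeting
}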
In my program there is one thread (receiving thread) that is responsible for receiving requests from a TCP socket and there are many threads (worker threads) that are responsible for processing the received requests. Once a request is processed I need to send an answer over TCP.
And here is a question. I would like to send TCP data in the same thread that I use for receiving data. After receiving data, this thread usually waits for new data in select(). So once a worker thread has finished processing a request and put an answer in the output queue, it has to signal the receiving thread that there is data to send. The problem is that I don't know how to cancel the wait in select() in order to get out of it and call send().
Or shall I use another thread solely for sending data over TCP?
Updated
MSalters, Artyom, thank you for your answers!
MSalters, having read your answer I found this site: Winsock 2 I/O Methods, and read about WSAWaitForMultipleEvents(). My program in fact must work on both HP-UX and Windows, so I finally decided to use the approach that had been suggested by Artyom.
You need to use something similar to the self-pipe trick, but in your case you need a pair of connected TCP sockets.
Create a pair of sockets.
Add one of them to the select() set and wait on it as well.
Notify the receiving thread by writing to the other socket from the worker threads.
select() wakes up immediately because one of the sockets is readable; read all the data from this special socket and check all the queues for data to send/receive.
How to create a pair of sockets under Windows?
inline void pair(SOCKET fds[2])
{
    struct sockaddr_in inaddr;
    struct sockaddr addr;
    // Temporary listening socket on the loopback interface.
    SOCKET lst = ::socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    memset(&inaddr, 0, sizeof(inaddr));
    memset(&addr, 0, sizeof(addr));
    inaddr.sin_family = AF_INET;
    inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    inaddr.sin_port = 0;                       // let the system pick a free port
    int yes = 1;
    setsockopt(lst, SOL_SOCKET, SO_REUSEADDR, (char*)&yes, sizeof(yes));
    bind(lst, (struct sockaddr *)&inaddr, sizeof(inaddr));
    listen(lst, 1);
    // Find out which port was assigned, then connect to it.
    int len = sizeof(inaddr);
    getsockname(lst, &addr, &len);
    fds[0] = ::socket(AF_INET, SOCK_STREAM, 0);
    connect(fds[0], &addr, len);
    fds[1] = accept(lst, 0, 0);                // the other end of the pair
    closesocket(lst);                          // the listener is no longer needed
}
Of course some checks should be added for return values.
select is not the native API on Windows. The native way is WSAWaitForMultipleEvents. If you use this to create an alertable wait, you can use QueueUserAPC to instruct the waiting thread to send data. (This might also mean you don't have to implement your own output queue.) A rough sketch of that approach is below.
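A sketch of the alertable-wait approach (the event array, the thread handle and the send logic are placeholders; error handling omitted):

#include <winsock2.h>
#include <windows.h>

// Runs on the receiving thread while it is blocked in an alertable wait:
// drain the output queue and call send() here.
static VOID CALLBACK SendPendingData(ULONG_PTR /*param*/)
{
    // ... send queued data ...
}

void ReceivingThreadLoop(WSAEVENT events[], DWORD count)
{
    for (;;) {
        DWORD rc = WSAWaitForMultipleEvents(count, events, FALSE,
                                            WSA_INFINITE, TRUE /* alertable */);
        if (rc == WSA_WAIT_IO_COMPLETION)
            continue;                     // an APC (SendPendingData) just ran
        // ... otherwise handle the signalled network event ...
    }
}

// From a worker thread, wake the receiving thread up:
//     QueueUserAPC(SendPendingData, hReceivingThread, 0);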
See also this post:
How to signal select() to return immediately?
For Unix, use an anonymous pipe. For Windows:
Unblocking can be achieved by adding a dummy (unbound) datagram socket to the fd_set and then closing it. To make this thread-safe, use QueueUserAPC:
The only way I found to make this thread-safe is to close and recreate the socket in the same thread the select statement is running in. Of course this is difficult if the thread is blocking on the select. And here the Windows call QueueUserAPC comes in. When Windows is blocking in the select statement, the thread can handle Asynchronous Procedure Calls. You can schedule one from a different thread using QueueUserAPC. Windows interrupts the select, executes your function in the same thread, and continues with the select statement. You can now close the socket and recreate it in your APC method. Guaranteed thread-safe, and you will never lose a signal.
The typical model is for the worker to handle its own writing. Is there a reason why you want to send all the outgoing I/O through the selecting thread?
If you're sure of this model, you could have your workers send data back to the master thread using file descriptors as well (pipe(2)) and simply add those descriptors to your select() call.
And, if you're especially sure that you're not going to use pipes to send data back to your master process, the select call allows you to specify a timeout. You can busy-wait while checking your worker threads, and periodically call select to figure out which TCP sockets to read from.
Another quick&dirty solution is to add localhost sockets to the set. Now use those sockets as the inter-thread communication queues. Each worker thread simply sends something to its socket, which ends up on the corresponding socket in your receiving thread. This wakes up the select(), and your receiving thread can then echo the message on the appropriate outgoing socket.