How to get a Win32 Thread to wait on a work queue and a socket? - c++

I need a client networking thread that can respond both to new messages queued for transmission and to the receipt of new data from the network. I want to avoid a polling loop in this thread; it should wake and process only as needed.
The scenario is as follows:
A client application needs to communicate to a server via a protocol that is largely, but not entirely, synchronous. Typically, the client sends a message to the server and blocks until a response is received.
The server may process client requests asynchronously, in which case the response to the client is not a result but a notification that processing has begun. A result message is sent to the client at some point in the future, when the server has finished processing the client request.
The asynchronous result notifications can arrive at the client at any time. These notifications need to be processed as they are received, i.e. it is not possible to defer them and process a backlog only when the client next transmits.
The client's networking thread must receive and process notifications from the server, and also transmit outgoing messages queued by the client.
To achieve this, I need to make a thread wake to perform processing either when network data is received OR when a message to transmit is enqueued into an input queue.
How can a thread wake to perform processing of an enqueued work item OR data from a socket?
I am interested primarily in using the plain Win32 APIs.
A minimal example or relevant tutorial would be very welcome!

An alternative to I/O Completion Ports for sockets is using WSAEventSelect to associate an event with the socket. Then as others have said, you just need to use another event (or some sort of waitable handle) to signal when an item has been added to your input queue, and use WaitForMultipleObjects to wait for either kind of event.
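A minimal sketch of that approach, assuming the outgoing-message queue and the producer's SetEvent() call live elsewhere in the client (the names here are placeholders):

```cpp
// Sketch only: one event for socket activity (via WSAEventSelect), one event
// signalled by the producer whenever it enqueues a message to transmit.
#include <winsock2.h>
#include <windows.h>
#pragma comment(lib, "ws2_32.lib")

void ClientNetworkThread(SOCKET sock, HANDLE queueEvent)
{
    WSAEVENT sockEvent = WSACreateEvent();
    WSAEventSelect(sock, sockEvent, FD_READ | FD_CLOSE);

    HANDLE handles[2] = { sockEvent, queueEvent };

    for (;;)
    {
        DWORD which = WaitForMultipleObjects(2, handles, FALSE, INFINITE);

        if (which == WAIT_OBJECT_0)              // socket readable or closed
        {
            WSANETWORKEVENTS events;
            WSAEnumNetworkEvents(sock, sockEvent, &events);  // also resets sockEvent
            if (events.lNetworkEvents & FD_READ)
            {
                // recv() and process the notification / result message here
            }
            if (events.lNetworkEvents & FD_CLOSE)
                break;
        }
        else if (which == WAIT_OBJECT_0 + 1)     // a message was enqueued
        {
            // Drain the outgoing-message queue and send() each message.
            // queueEvent would typically be an auto-reset event that the
            // producer signals with SetEvent() after pushing a message.
        }
    }

    WSACloseEvent(sockEvent);
}
```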

You can set up an I/O Completion Port for the handles and have your thread wait on the completion port:
http://technet.microsoft.com/en-us/sysinternals/bb963891.aspx
Actually, you can have multiple threads wait on the port (one thread per processor usually works well).
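A rough sketch of that idea: the completion port wakes the thread both for socket completions and for work items that a producer posts with PostQueuedCompletionStatus(). The completion keys and the WorkItem type are invented for illustration.

```cpp
// Sketch only: one IOCP doubles as the wake-up mechanism for socket I/O and
// for application work items. The socket is assumed to have been associated
// with the port via CreateIoCompletionPort((HANDLE)sock, iocp, KEY_SOCKET, 0).
#include <winsock2.h>
#include <windows.h>

enum : ULONG_PTR { KEY_SOCKET = 1, KEY_WORK_ITEM = 2 };

struct WorkItem { /* message to transmit, etc. */ };

void WorkerThread(HANDLE iocp)
{
    for (;;)
    {
        DWORD bytes = 0;
        ULONG_PTR key = 0;
        OVERLAPPED* ov = nullptr;

        if (!GetQueuedCompletionStatus(iocp, &bytes, &key, &ov, INFINITE))
        {
            if (ov == nullptr) break;   // the port was closed or the wait failed
            // otherwise a failed I/O completion: inspect GetLastError()
        }

        if (key == KEY_SOCKET)
        {
            // A WSARecv/WSASend posted on the socket has completed;
            // 'ov' is the OVERLAPPED it was issued with.
        }
        else if (key == KEY_WORK_ITEM)
        {
            // A producer posted work with:
            //   PostQueuedCompletionStatus(iocp, 0, KEY_WORK_ITEM,
            //                              reinterpret_cast<OVERLAPPED*>(item));
            WorkItem* item = reinterpret_cast<WorkItem*>(ov);
            // process the item, then free it
            delete item;
        }
    }
}
```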

Following on from Michael's suggestion, I have some free code that provides a framework for IO Completion Port style socket stuff; and it includes an IOCP based work queue too. You should be able to grab some stuff from it to solve your problem from here.

Well, if both objects have standard Windows handles, you can have your client call WaitForMultipleObjects to wait on them.
You might want to investigate splitting the servicing of the network port off onto its own thread. That might simplify things greatly. However, it won't help if you just end up having to synchronize something else between that new thread and your main one.

Related

Handling multiple clients simultaneously in C++ UDP server

I have developed a C++ UDP-based server application and I am in the process of implementing code to handle multiple clients simultaneously.
I have the following understanding regarding how to handle multiple clients and want to fill in the knowledge gaps.
My step-wise understanding is as follows:
1. The UDP server listens on a specific port (say xxxx).
2. The server has a message queue. It can be an array, a linked list, a queue, or anything else for that matter.
3. As soon as a request arrives at port xxxx, it is placed in the message queue.
4. After putting it in the message queue, a new thread (let us call it the worker thread) is spawned; it picks up the queued message, which is then removed from the message queue.
5. The worker thread knows the client's IP:port from the message header.
6. The worker thread processes the request and sends the response to the client's IP:port.
7. The client gets the response and the worker thread terminates.
Steps 3 to 7 take care of multiple clients being handled simultaneously.
Is my understanding sufficient? Where do I need improvement?
Thanks in advance
The client gets the response and the worker thread terminates.
The worker thread should terminate when it completes processing. There is no practical way for it to wait for an acknowledgement from the client.
The worker thread processes the request and sends the response to the client's IP:port
I think it will be better to place the response on a queue. The main server thread can check the queue and send any responses found there. This prevents race conditions when two worker threads overlap in their attempts to send responses.
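A minimal sketch of such a response queue, assuming the main thread owns the single server socket and drains the queue between receives (the names and types are placeholders):

```cpp
// Sketch only: worker threads push responses, the main server thread pops
// them and performs every sendto() itself, so sends never overlap.
#include <mutex>
#include <queue>
#include <string>

struct Response
{
    std::string payload;
    // the client's address (IP:port) would be stored here as well
};

class ResponseQueue
{
public:
    void push(Response r)
    {
        std::lock_guard<std::mutex> lock(m_);
        q_.push(std::move(r));
    }

    bool try_pop(Response& out)
    {
        std::lock_guard<std::mutex> lock(m_);
        if (q_.empty()) return false;
        out = std::move(q_.front());
        q_.pop();
        return true;
    }

private:
    std::mutex m_;
    std::queue<Response> q_;
};

// In the main server loop, between (or after) recvfrom() calls:
//   Response r;
//   while (responses.try_pop(r)) { /* sendto() r.payload to the client */ }
```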
The server has a message queue. It can be an array, a linked list, a queue, or anything else for that matter
It pretty much has to be a queue. The interesting question is the queueing discipline. Initially FIFO will do. If your server becomes overloaded, then you need to consider alternatives. Perhaps it would be good to estimate the processing time required and do the fast ones first. Or perhaps different clients deserve different priorities.
After putting it in the message queue, a new thread (let us call it the worker thread) is spawned
This is fine initially. However, you will want to do some time profiling and determine if a thread pool would be advantageous.
Deeper Discussion of threading issues
The job processing must be done in a separate worker thread, so that a long job will not block the server from accepting connections from other clients. However, you should consider carefully whether or not you want to use multiple worker threads. Since you are placing the job requests on a queue, a single worker thread can be used to process them one by one.
PRO single thread
Simpler, more reliable code. The processing code still has to be thread safe with respect to the main thread, but there will be no context switches in the middle of job-processing code. This makes it easier to design and debug the processing code. For example, if the jobs are updating a database, then you do not require any extra code to ensure the database is always consistent - only that consistency is guaranteed at the end of each job.
Faster response for short jobs. If there are many short jobs submitted at the same time, your CPU can spend more cycles switching between jobs than actually doing useful processing.
CON single thread
A big job will block other jobs until it completes.

Boost Asio TCP Server Handling multiple clients

I am new to network programming and the usage of Boost Asio library.
I successfully implemented a task for my requirement by modifying the Boost Asio "Blocking TCP Echo Server and Client" which performs transactions of operations between my Client and Server.
Now, I have a requirement where I need to connect multiple Clients with my Server.
I found some relevant links suggesting the usage of async_accept at the Server side.
So, I tried running the Boost Asio example: "Async TCP Echo Server" with the "Blocking TCP Echo client", where the server distinguishes the different clients and addresses them accordingly.
But my actual requirement is that, instead of the server completing the entire process for one client, it [the server] should perform the same operations for the first client, then go to the second client and perform those operations, then come back to the first client, and continue in this order until all operations are complete.
Is there any way or idea which could help me perform this flow using Boost Asio? Also I'm just using the "Blocking TCP Echo Client", which just has a normal connect() and not an async_connect(), now is that a problem?
Also, is it possible to communicate between multiple clients through the server using Boost Asio?
Thanking you very much in advance!
There are two models for handling multiple clients concurrently on the server.
The first is to spawn a new thread for each client, with each thread handling its client synchronously. In that model, when an accept completes you create a new worker thread and start it off on the send and recv calls required by your protocol, while your main thread goes back to accepting new connections. The second model is to use asynchronous APIs on a single thread, all operating on a single io_service.
With async, you prime the pump with an async accept and then call io_service::run(). When the accept completes, your callback runs. You then prime the pump again with a further accept (for the next client) and start the async send and recv for the newly accepted client. Since all sends and recvs are non-blocking, the only time your thread sleeps is when it has nothing to do. Otherwise io_service::run() takes care of everything for you.
If you are blocking on sends and recvs, though, you cannot process more than one client concurrently.
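A minimal sketch of that async model, loosely modelled on the Boost Asio async echo example (the port number and class names here are arbitrary, and error handling is kept to a minimum):

```cpp
#include <boost/asio.hpp>
#include <memory>

using boost::asio::ip::tcp;

// One Session per client; the handlers chain read -> write -> read, so the
// single thread running io_service::run() interleaves all connected clients.
class Session : public std::enable_shared_from_this<Session>
{
public:
    explicit Session(tcp::socket socket) : socket_(std::move(socket)) {}
    void start() { do_read(); }

private:
    void do_read()
    {
        auto self = shared_from_this();
        socket_.async_read_some(boost::asio::buffer(data_),
            [this, self](boost::system::error_code ec, std::size_t n)
            { if (!ec) do_write(n); });            // echo back, then read again
    }
    void do_write(std::size_t n)
    {
        auto self = shared_from_this();
        boost::asio::async_write(socket_, boost::asio::buffer(data_, n),
            [this, self](boost::system::error_code ec, std::size_t)
            { if (!ec) do_read(); });
    }
    tcp::socket socket_;
    char data_[1024];
};

// Prime the pump: each completed accept immediately posts the next one.
void do_accept(boost::asio::io_service& io, tcp::acceptor& acceptor)
{
    auto socket = std::make_shared<tcp::socket>(io);
    acceptor.async_accept(*socket,
        [&io, &acceptor, socket](boost::system::error_code ec)
        {
            if (!ec) std::make_shared<Session>(std::move(*socket))->start();
            do_accept(io, acceptor);               // accept the next client
        });
}

int main()
{
    boost::asio::io_service io;
    tcp::acceptor acceptor(io, tcp::endpoint(tcp::v4(), 12345));
    do_accept(io, acceptor);
    io.run();                                      // single thread drives everything
}
```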

C++ IO/Multiplexed TCP Server and POSIX Threads

I must develop a simple C++ command-line client/server chat application. This application must provide a basic implementation of multiple two-participant chat rooms. Is it possible to combine I/O multiplexing (the select() syscall) with POSIX threads?
I mean, I want to create a TCP server which handles multiple clients with select(), and when a client wants to chat with another one, the server creates a separate thread, which also uses I/O multiplexing (select()), to handle the communication between the two clients.
Is this a good idea? How could I do otherwise?
A crude attempt at an architecture...
Structure your application as two sets of threads (a set might be composed of just one thread).
One set minds the TCP connections. Each TCP connection is assigned to one of the threads in the set, and the thread just runs forever polling the connections assigned to it (incoming messages) and polling a (per-thread) from-logic queue (outgoing messages).
The other set minds the logic/sessions. Each session is assigned to a specific thread. Each thread just runs forever polling its (per-thread) from-network queue (incoming messages).
The network thread-set receives messages and posts them to the right logic queue [this assumes there is a way of mapping connections to internal logic sessions]. It polls its from-logic queue to get the outgoing messages and sends them.
The number of network threads is bounded, and it does not depend on the number of connections.
The logic thread-set receives requests from the network in its queue, handles them within a given session's state, and (perhaps) posts back messages to be sent out (by the network threads).
The number of logic threads is bounded, and it does not depend on the number of sessions.
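As a rough sketch of the queues that link the two thread sets (the names, message type, and connection-to-session mapping are placeholders; the select() polling of the sockets themselves is omitted):

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>

// A simple thread-safe queue used both as a from-network queue (owned by a
// logic thread) and a from-logic queue (owned by a network thread).
template <typename T>
class MessageQueue
{
public:
    void post(T msg)
    {
        {
            std::lock_guard<std::mutex> lock(m_);
            q_.push(std::move(msg));
        }
        cv_.notify_one();
    }

    // Non-blocking poll, used by network threads between select() rounds.
    bool try_get(T& out)
    {
        std::lock_guard<std::mutex> lock(m_);
        if (q_.empty()) return false;
        out = std::move(q_.front());
        q_.pop();
        return true;
    }

    // Blocking get, used by logic threads that have nothing else to do.
    T get()
    {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !q_.empty(); });
        T out = std::move(q_.front());
        q_.pop();
        return out;
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<T> q_;
};

struct Message { int session_id; std::string payload; };

// Each network thread: select() over its connections, post() received data to
// the right logic thread's queue (looked up by session_id), and try_get() its
// own from-logic queue to pick up messages that need sending.
// Each logic thread: get() from its from-network queue, update session state,
// and post() any replies to the owning network thread's from-logic queue.
```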

how synchronize recv() when multithreading cpp CRT

I have a server interacting with multiple clients, where the clients send messages to the server and the server reads them via recv(). The problem I'm getting is that I'm using WaitForSingleObject(handle, 10000 ms) to make the server wait a few seconds while interacting with one client and then let the others access it, but then I start seeing the server answer clients with the wrong messages and getting blocked. So it looks like a synchronization issue.
So my question is (since I'm a beginner in C++): how could I ensure that every incoming message is received and replied to the right client, while allowing all the clients to interact with the server?
There are two alternatives.
The first is a pretty standard model: one thread per client. When a client connects, you start a thread to handle it.
The second approach doesn't require many threads. You use WSARecv() on an overlapped socket instead of recv(). This way, you can have multiple receive operations outstanding simultaneously, one per client, and wait on them all with WaitForMultipleObjects(). To be specific, you wait on the event inside each WSAOVERLAPPED. Remember that WaitForMultipleObjects() has a limit on the number of wait objects (MAXIMUM_WAIT_OBJECTS, which is 64); when exceeded, you will need to run another thread. The return code from WaitForMultipleObjects() tells you which client has sent data, so you can reply to it.
Or, as suggested above, you could probably use select() to figure out which socket has data.
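A rough sketch of the overlapped-WSARecv approach for a small number of clients (the Client bookkeeping is invented and error handling is minimal):

```cpp
#include <winsock2.h>
#include <windows.h>
#include <vector>

struct Client
{
    SOCKET        sock;
    WSAEVENT      event;      // created once with WSACreateEvent()
    WSAOVERLAPPED ov;
    WSABUF        buf;
    char          data[1024];
    DWORD         flags;
};

// Post one overlapped receive for this client.
void StartRecv(Client& c)
{
    ZeroMemory(&c.ov, sizeof(c.ov));
    c.ov.hEvent = c.event;
    c.buf.buf   = c.data;
    c.buf.len   = sizeof(c.data);
    c.flags     = 0;
    if (WSARecv(c.sock, &c.buf, 1, nullptr, &c.flags, &c.ov, nullptr) == SOCKET_ERROR
        && WSAGetLastError() != WSA_IO_PENDING)
    {
        // real error: close and drop this client
    }
}

void ServerLoop(std::vector<Client>& clients)   // assumes <= MAXIMUM_WAIT_OBJECTS clients
{
    std::vector<HANDLE> events;
    for (Client& c : clients)
    {
        StartRecv(c);
        events.push_back(c.event);
    }

    for (;;)
    {
        DWORD which = WaitForMultipleObjects((DWORD)events.size(), events.data(),
                                             FALSE, INFINITE);
        Client& c = clients[which - WAIT_OBJECT_0];

        DWORD bytes = 0, flags = 0;
        WSAGetOverlappedResult(c.sock, &c.ov, &bytes, FALSE, &flags);
        WSAResetEvent(c.event);                 // manual-reset event, clear it

        // 'bytes' bytes from this particular client are in c.data;
        // send the reply on c.sock, then post the next receive.
        StartRecv(c);
    }
}
```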

Signaling all active threads (Windows)

I am faced with a design issue regarding thread synchronization in C++, Windows.
I am writing a server application that starts one listening thread, which should stay active the whole time while the server is up.
When the listening thread gets a connect request, it opens a CONTROL socket and starts a new control thread.
This thread is used to send control data between server and a client, initializing server and all the background software to specific client data and starting data processing.
If the initialization (via the control socket) is successful, the control thread will open a new socket, the DATA socket, which is then used to pass data from server to client. It will also start two new threads: one which sends on this new DATA socket, and another which receives on the CONTROL socket, waiting in case the client wants to terminate the connection.
When the client terminates the connection ungracefully, by terminating the application without calling the function that sends the server a message to close the connection, here is what should happen:
Any of the threads in execution can detect this event. They will get some sort of error (WSAECONNRESET) while sending or receiving on DATA/CONTROL socket and should then signal all the other threads that they should stop executing (except for the server listening thread).
Which is the most natural way to achieve this type of behavior?
(I am using winsock (winsock2.h) for networking, and standard windows api (windows.h) for threading)
If you're writing a multi-threaded winsock server, you should be looking into IO completion ports. Using an IO completion port is the most scalable way to write a network service on the windows platform.
IO completion port based winsock servers use asynchronous communication, so instead of blocking on a socket, your threadpool receives completion packets when something interesting happens.
In any case, you'll be using WSARecv. When WSARecv returns a non-zero value, call WSAGetLastError(). If the error is not WSA_IO_PENDING, then switch on it and look for the Winsock error code you're interested in.
The winsock error code WSA_OPERATION_ABORTED indicates that a socket has closed, although there are others (e.g. WSAECONNABORTED).
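A short sketch of that check (SignalAllThreadsToStop() is a hypothetical helper standing in for whatever mechanism you use to tell the other threads to shut down):

```cpp
#include <winsock2.h>

void SignalAllThreadsToStop();   // hypothetical helper, e.g. SetEvent on a shared event

// Sketch only: post an overlapped receive and react to the error codes above.
void PostReceive(SOCKET sock, WSABUF* buf, WSAOVERLAPPED* ov)
{
    DWORD flags = 0;
    if (WSARecv(sock, buf, 1, nullptr, &flags, ov, nullptr) != 0)
    {
        switch (WSAGetLastError())
        {
        case WSA_IO_PENDING:          // normal: the receive is now outstanding
            break;
        case WSA_OPERATION_ABORTED:   // the socket was closed under us
        case WSAECONNABORTED:
        case WSAECONNRESET:           // the client terminated ungracefully
            SignalAllThreadsToStop(); // wake the other threads for this client
            break;
        default:
            // log and handle anything else
            break;
        }
    }
}
```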
Would suggest a good text on the subject (e.g. Windows via C/C++).
You can use the WSAEventSelect() function to associate an event object with each socket, and create one more event object for your own custom events; then use these event objects in WaitForMultipleObjects(), so your thread can wait on socket events and your custom events at the same time.
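A minimal sketch of that, focused on the shutdown signalling: all per-connection threads share one manual-reset stop event, and whichever thread detects the dropped connection sets it once to wake everyone (the names are placeholders):

```cpp
#include <winsock2.h>
#include <windows.h>

// Created once at startup: CreateEvent(nullptr, TRUE /*manual reset*/, FALSE, nullptr)
HANDLE g_stopEvent;

DWORD WINAPI ConnectionThread(LPVOID param)
{
    SOCKET sock = (SOCKET)(ULONG_PTR)param;

    WSAEVENT sockEvent = WSACreateEvent();
    WSAEventSelect(sock, sockEvent, FD_READ | FD_CLOSE);

    HANDLE handles[2] = { g_stopEvent, sockEvent };

    for (;;)
    {
        DWORD which = WaitForMultipleObjects(2, handles, FALSE, INFINITE);

        if (which == WAIT_OBJECT_0)            // another thread signalled shutdown
            break;

        WSANETWORKEVENTS ev;
        WSAEnumNetworkEvents(sock, sockEvent, &ev);

        if (ev.lNetworkEvents & FD_CLOSE)      // e.g. client dropped the connection
        {
            SetEvent(g_stopEvent);             // manual-reset: wakes every waiter
            break;
        }
        if (ev.lNetworkEvents & FD_READ)
        {
            // recv() and process; a WSAECONNRESET from recv() can likewise
            // SetEvent(g_stopEvent) before breaking out.
        }
    }

    WSACloseEvent(sockEvent);
    return 0;
}
```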