I have a server interacting with multiple clients, where the clients send messages to the server and the server reads them via recv(). The problem is that I'm using WaitForSingleObject(handle, 10000) to make the server wait a few seconds on one client before letting the others access it, but then the server starts replying to clients with the wrong messages and getting blocked. So it looks like a synchronization issue.
So my question is (since I'm a beginner in C++): how can I ensure that every incoming message is received and the reply goes back to the right client, while still allowing all the clients to interact with the server?
There are two alternatives.
The first is a pretty standard model: one thread per client. When a client connects, you start a thread to handle it.
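For illustration, a minimal thread-per-client sketch using Winsock and std::thread; HandleClient and the echo logic are placeholders for whatever your protocol needs, and WSAStartup plus the listening socket are assumed to be set up already:

// Thread-per-client sketch (illustrative, not production code).
// Assumes WSAStartup() has been called and listenSock is already bound and listening.
#include <winsock2.h>
#include <thread>

void HandleClient(SOCKET clientSock)
{
    char buf[512];
    int n;
    // A blocking recv() is fine here because this thread serves only this client,
    // so the reply always goes back to the socket the request came in on.
    while ((n = recv(clientSock, buf, sizeof(buf), 0)) > 0)
        send(clientSock, buf, n, 0);
    closesocket(clientSock);
}

void AcceptLoop(SOCKET listenSock)
{
    for (;;)
    {
        SOCKET clientSock = accept(listenSock, nullptr, nullptr);
        if (clientSock == INVALID_SOCKET)
            break;
        std::thread(HandleClient, clientSock).detach();   // one thread per client
    }
}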
The second approach doesn't require many threads. Use WSARecv() on an overlapped socket instead of recv(). That way you can have multiple receive operations outstanding at once, one per client, and wait on them all with WaitForMultipleObjects(). Specifically, you wait on the event inside each WSAOVERLAPPED. Remember that WaitForMultipleObjects() has a limit on the number of wait objects (MAXIMUM_WAIT_OBJECTS, which is 64); when it is exceeded you will need to run another thread. The return value from WaitForMultipleObjects() tells you which client sent data, so you can reply to that client.
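A rough sketch of that overlapped model, assuming the sockets were created with WSA_FLAG_OVERLAPPED; the Client structure, buffer size, and names are illustrative, not a complete implementation:

// Overlapped-receive sketch (illustrative): one outstanding WSARecv per client and
// one event per WSAOVERLAPPED.  Assumes WSAStartup() has been called, the sockets were
// created with WSA_FLAG_OVERLAPPED, and count stays at or below MAXIMUM_WAIT_OBJECTS (64).
#include <winsock2.h>

struct Client {
    SOCKET        sock;
    WSAOVERLAPPED ov;
    char          buf[512];
    WSABUF        wsabuf;
};

void PostRecv(Client& c)                      // (re)issue a receive for one client
{
    HANDLE ev = c.ov.hEvent;                  // keep the per-client event across calls
    ZeroMemory(&c.ov, sizeof(c.ov));
    c.ov.hEvent = ev;
    WSAResetEvent(ev);
    c.wsabuf.buf = c.buf;
    c.wsabuf.len = sizeof(c.buf);
    DWORD flags = 0;
    WSARecv(c.sock, &c.wsabuf, 1, nullptr, &flags, &c.ov, nullptr);
}

void ServeClients(Client* clients, DWORD count)
{
    HANDLE events[MAXIMUM_WAIT_OBJECTS];
    for (DWORD i = 0; i < count; ++i) {
        clients[i].ov.hEvent = WSACreateEvent();
        PostRecv(clients[i]);
        events[i] = clients[i].ov.hEvent;
    }

    for (;;)
    {
        // The return value identifies which client's receive completed ...
        DWORD idx = WaitForMultipleObjects(count, events, FALSE, INFINITE) - WAIT_OBJECT_0;
        Client& c = clients[idx];
        DWORD bytes = 0, flags = 0;
        WSAGetOverlappedResult(c.sock, &c.ov, &bytes, FALSE, &flags);
        send(c.sock, c.buf, (int)bytes, 0);   // ... so the reply goes to the right client
        PostRecv(c);                          // re-arm the receive for that client
    }
}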
Or, as suggested above, you could probably use select() to figure out which socket has data.
I am working on a module that uses 10 queues to handle threads, and each of them sends curl requests using the curl_easy interface (along with a lock), so that a single connection is maintained until the response is received. I want to improve request handling by using the curl_multi interface, where curl requests are sent by the thread and handled in parallel.
I have created a separate piece of code to implement it. For instance, I created 3 threads, handled one by one; the first thread keeps sending requests to curl_multi while it is running and transfers still exist, and resources are allocated via the curl_easy interface for each transfer.
I have gone through a lot of examples but cannot figure out how to implement it in C++. Also, since I have only recently learnt multithreading and curl concepts in C++, I need assistance with the approach.
I expect a single thread to be able to keep sending curl requests until the user stops sending.
Update - I have added two threads and each sends two requests simultaneously. curl_multi is handled through an array of curl_easy handles.
I want to keep it free of arrays because that is limiting the number of requests.
Can it be made asynchronous, so that it accepts all transfers and exits only when the client/user does? There are plenty of curl_multi examples, but I am still not clear on how to implement it.
Reading the curl_multi documentation, it doesn't seem like you have to create separate threads for this, since it works via multiple easy handles added to the multi handle object. You then call curl_multi_perform to start all transfers in a non-blocking way.
I expect a single thread to be able to keep sending curl requests until the user stops sending.
I don't understand what you mean by this. Do you mean that you just want to keep those connections alive until everything is transferred? If so, curl_multi already gives you information on the progress of your transfers, which can help you decide what to do.
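For illustration, here is a minimal single-threaded curl_multi sketch; the URLs are placeholders, and new easy handles can be added to the multi handle at any time, so nothing limits the number of transfers to a fixed array:

// Single-threaded curl_multi sketch: all transfers run in parallel on one thread.
// The URLs are placeholders; responses go to stdout because no write callback is set.
#include <curl/curl.h>
#include <string>
#include <vector>

int main()
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURLM* multi = curl_multi_init();

    std::vector<std::string> urls = { "https://example.com/a", "https://example.com/b" };
    for (const auto& url : urls) {
        CURL* easy = curl_easy_init();
        curl_easy_setopt(easy, CURLOPT_URL, url.c_str());
        curl_multi_add_handle(multi, easy);              // one easy handle per transfer
    }

    int still_running = 0;
    do {
        curl_multi_perform(multi, &still_running);       // drive all transfers, non-blocking
        int numfds = 0;
        curl_multi_wait(multi, nullptr, 0, 1000, &numfds);   // sleep until there is activity

        // Reap finished transfers; new handles could be added here at any point.
        CURLMsg* msg;
        int msgs_left = 0;
        while ((msg = curl_multi_info_read(multi, &msgs_left)) != nullptr) {
            if (msg->msg == CURLMSG_DONE) {
                curl_multi_remove_handle(multi, msg->easy_handle);
                curl_easy_cleanup(msg->easy_handle);
            }
        }
    } while (still_running > 0);

    curl_multi_cleanup(multi);
    curl_global_cleanup();
    return 0;
}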
Hope it helps
I am new to network programming and the usage of Boost Asio library.
I successfully implemented a task for my requirement by modifying the Boost Asio "Blocking TCP Echo Server and Client" which performs transactions of operations between my Client and Server.
Now, I have a requirement where I need to connect multiple Clients with my Server.
I found some relevant links suggesting the usage of async_accept at the Server side.
So, I tried running the Boost Asio example: "Async TCP Echo Server" with the "Blocking TCP Echo client", where the server distinguishes the different clients and addresses them accordingly.
But my actual requirement is that, instead of the server completing the entire process for one client, it has to perform the same operations for the first client, then move to the second client and perform those operations, then come back to the first client, and continue in this order until all operations are complete.
Is there any way or idea that could help me achieve this flow using Boost Asio? Also, I'm just using the "Blocking TCP Echo Client", which uses a plain connect() and not async_connect(); is that a problem?
Also, is it possible to communicate between multiple clients through the server using Boost Asio?
Thanking you very much in advance!
There are two models for handling multiple clients concurrently on the server.
The first is to spawn a new thread for each client, and each thread handles its client synchronously: when the accept completes, you create a new worker thread and start it off on the send and recv calls required by your protocol, while the main thread goes back to accepting new connections. The second model is to use asynchronous APIs on a single thread, all operating on a single io_service.
With async, you prime the pump with an async accept and then call io_service::run. When the accept completes, your callback runs. You then prime the pump again with further accepts (for more clients) and start the async send and recv for the newly connected client. Since all sends and recvs are non-blocking, the only time your thread sleeps is when it has nothing to do; otherwise the io_service::run method takes care of everything for you.
If you are blocking on sends and recvs, though, you cannot process more than one client concurrently.
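A bare-bones sketch of that "prime the pump" pattern, closely following the Boost Asio async echo server example; the class names and the 512-byte buffer are just illustrative:

// Single-threaded asynchronous accept/echo sketch with Boost.Asio.
// Session holds the per-client async read/write chain for your protocol.
#include <boost/asio.hpp>
#include <memory>
#include <utility>

using boost::asio::ip::tcp;

class Session : public std::enable_shared_from_this<Session> {
public:
    explicit Session(tcp::socket socket) : socket_(std::move(socket)) {}
    void start() { do_read(); }

private:
    void do_read()
    {
        auto self = shared_from_this();
        socket_.async_read_some(boost::asio::buffer(data_),
            [this, self](boost::system::error_code ec, std::size_t n) {
                if (!ec) do_write(n);                 // echo back, then read again
            });
    }

    void do_write(std::size_t n)
    {
        auto self = shared_from_this();
        boost::asio::async_write(socket_, boost::asio::buffer(data_, n),
            [this, self](boost::system::error_code ec, std::size_t) {
                if (!ec) do_read();
            });
    }

    tcp::socket socket_;
    char data_[512];
};

class Server {
public:
    Server(boost::asio::io_service& io, unsigned short port)
        : acceptor_(io, tcp::endpoint(tcp::v4(), port)), socket_(io)
    {
        do_accept();                                  // prime the pump
    }

private:
    void do_accept()
    {
        acceptor_.async_accept(socket_, [this](boost::system::error_code ec) {
            if (!ec)
                std::make_shared<Session>(std::move(socket_))->start();
            do_accept();                              // immediately accept the next client
        });
    }

    tcp::acceptor acceptor_;
    tcp::socket   socket_;
};

// Usage:
//   boost::asio::io_service io;
//   Server server(io, 12345);
//   io.run();   // dispatches every completion handler on this one thread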
I am trying to write a multi-client chat server without using threads. I came across select(), but I am having a hard time understanding how I can read a client's request and send it on right away, when send() blocks and the destination client's socket might not be ready for writing, thereby losing the server's parallel I/O ability.
if (FD_ISSET(socket, &read_fds)) {
    recv(...);                        // read the client's request
    SendMesgToRequestedClient();      // the send() in here may block if the target is not writable
}
I thought a possible solution is to keep a list of pending messages for each client and send them when FD_ISSET(socket, &write_fds) is true, but then instead of saving CPU I might use a ton of memory.
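That per-client queue idea could look roughly like the sketch below (non-blocking sockets are assumed, the names are illustrative, and partial sends are glossed over). A socket only goes into write_fds while its queue is non-empty, so queued data is flushed as soon as the peer is writable and the queues stay short in practice:

// Sketch of the pending-queue idea with select() (illustrative, Winsock-flavoured).
#include <winsock2.h>   // on POSIX use <sys/select.h>/<sys/socket.h> and int fds instead
#include <deque>
#include <map>
#include <string>
#include <vector>

std::map<SOCKET, std::deque<std::string>> pending;    // per-client outgoing messages

void ServeOnce(const std::vector<SOCKET>& clients)
{
    fd_set read_fds, write_fds;
    FD_ZERO(&read_fds);
    FD_ZERO(&write_fds);
    for (SOCKET s : clients) {
        FD_SET(s, &read_fds);
        if (!pending[s].empty())
            FD_SET(s, &write_fds);            // only ask about writability when needed
    }

    select(0, &read_fds, &write_fds, nullptr, nullptr);   // first argument is ignored by Winsock

    for (SOCKET s : clients) {
        if (FD_ISSET(s, &read_fds)) {
            char buf[512];
            int n = recv(s, buf, sizeof(buf), 0);
            if (n > 0)
                pending[s].emplace_back(buf, n);   // queue for the target client instead of
                                                   // calling send() right away (echoes here)
        }
        if (FD_ISSET(s, &write_fds) && !pending[s].empty()) {
            const std::string& msg = pending[s].front();
            int sent = send(s, msg.data(), (int)msg.size(), 0);
            if (sent == (int)msg.size())
                pending[s].pop_front();            // fully sent, drop it from the queue
        }
    }
}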
I have a few questions about sockets in C++!
First question: let's say I am writing a server for a game in which 200 people will play at once, but accept() is blocked because the server is already serving one client. How do I deal with that?
Second question: how do I get a list of all currently connected clients, so that I can then send a message to everyone?
I have a few questions about sockets in C++!
For future reference, please post only one question at a time. If you have multiple questions, post them separately.
let's say I am writing a server for a game in which 200 people will play at once, but accept() is blocked because the server is already serving one client. How do I deal with that?
Use sockets in non-blocking mode, using select()/(e)poll() or other callback mechanisms to know which sockets have pending activity and when.
Otherwise, call accept() in a separate thread from the thread(s) used to service connected clients.
how do I get a list of all currently connected clients, so that I can then send a message to everyone?
The server is responsible for keeping track of its connected clients. Then it can loop through that list when needed.
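A small sketch of that bookkeeping (the container and function names are just illustrative): every accepted socket is stored in one container, and broadcasting is a loop over that container.

// Sketch of server-side client tracking (illustrative).
#include <winsock2.h>
#include <algorithm>
#include <vector>

std::vector<SOCKET> g_clients;                // all currently connected clients

void OnClientConnected(SOCKET s)              // call after accept() succeeds
{
    g_clients.push_back(s);
}

void OnClientDisconnected(SOCKET s)           // call when recv() reports 0 or an error
{
    g_clients.erase(std::remove(g_clients.begin(), g_clients.end(), s), g_clients.end());
    closesocket(s);
}

void Broadcast(const char* data, int len, SOCKET except = INVALID_SOCKET)
{
    for (SOCKET s : g_clients)
        if (s != except)                      // optionally skip the original sender
            send(s, data, len, 0);
}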
If a client wants to send a message to every other client, the best option is for it to send a single message to the server and ask the server to relay the message to every other client.
Otherwise, the client would have to request the list from the server, and then send a message to every other client individually.
I need a client networking thread that can respond both to new messages to be transmitted and to the receipt of new data from the network. I want to avoid having this thread run a polling loop; it should only process work as needed.
The scenario is as follows:
A client application needs to communicate to a server via a protocol that is largely, but not entirely, synchronous. Typically, the client sends a message to the server and blocks until a response is received.
The server may process client requests asynchronously, in which case the response to the client is not a result but a notification that processing has begun. A result message is sent to the client at some point in the future, when the server has finished processing the client's request.
The asynchronous result notifications can arrive at the client at any time. These notifications need to be processed when they are received, i.e. it is not possible to process a backlog only when the client next transmits.
The client's networking thread must receive and process notifications from the server, and also transmit outgoing messages from the client.
To achieve this, I need to make a thread wake to perform processing either when network data is received OR when a message to transmit is enqueued onto an input queue.
How can a thread wake to perform processing of an enqueued work item OR data from a socket?
I am interested primarily in using the plain Win32 APIs.
A minimal example or relevant tutorial would be very welcome!
An alternative to I/O Completion Ports for sockets is using WSAEventSelect to associate an event with the socket. Then as others have said, you just need to use another event (or some sort of waitable handle) to signal when an item has been added to your input queue, and use WaitForMultipleObjects to wait for either kind of event.
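A sketch of that arrangement, with an illustrative message queue: one event is tied to the socket with WSAEventSelect, a second event is set whenever something is pushed onto the outgoing queue, and the thread waits on both. The queue type, locking, and helper names are assumptions, not a complete client:

// Sketch: one thread waits on both network activity and an outgoing-message queue.
// WSAEventSelect ties an event to the socket (and puts it into non-blocking mode);
// a second event is signalled whenever a message is enqueued.  g_sock is assumed
// to be an already-connected socket.
#include <winsock2.h>
#include <deque>
#include <mutex>
#include <string>

SOCKET                  g_sock;
HANDLE                  g_queueEvent = CreateEvent(nullptr, FALSE, FALSE, nullptr); // auto-reset
std::mutex              g_queueMutex;
std::deque<std::string> g_outgoing;

void EnqueueMessage(const std::string& msg)          // called from any other thread
{
    { std::lock_guard<std::mutex> lock(g_queueMutex); g_outgoing.push_back(msg); }
    SetEvent(g_queueEvent);                          // wake the networking thread
}

void NetworkThread()
{
    WSAEVENT sockEvent = WSACreateEvent();
    WSAEventSelect(g_sock, sockEvent, FD_READ | FD_CLOSE);   // fires on incoming data / close

    HANDLE handles[2] = { sockEvent, g_queueEvent };
    for (;;)
    {
        DWORD which = WaitForMultipleObjects(2, handles, FALSE, INFINITE);
        if (which == WAIT_OBJECT_0)                  // network event
        {
            WSANETWORKEVENTS ev;
            WSAEnumNetworkEvents(g_sock, sockEvent, &ev);     // also resets sockEvent
            if (ev.lNetworkEvents & FD_READ) {
                char buf[512];
                int n = recv(g_sock, buf, sizeof(buf), 0);
                if (n > 0) { /* process the notification or result from the server */ }
            }
            if (ev.lNetworkEvents & FD_CLOSE)
                break;
        }
        else if (which == WAIT_OBJECT_0 + 1)         // something was enqueued to transmit
        {
            std::lock_guard<std::mutex> lock(g_queueMutex);
            while (!g_outgoing.empty()) {
                const std::string& msg = g_outgoing.front();
                send(g_sock, msg.data(), (int)msg.size(), 0);
                g_outgoing.pop_front();
            }
        }
    }
}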
You can set up an I/O Completion Port for the handles and have your thread wait on the completion port:
http://technet.microsoft.com/en-us/sysinternals/bb963891.aspx
Actually, you can have multiple threads wait on the port (one thread per processor usually works well).
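For illustration, a condensed sketch of that approach: socket completions and queued work items both come out of GetQueuedCompletionStatus, with PostQueuedCompletionStatus used to inject "message to transmit" work. The PerIoData structure, the key values, and QueueWorkItem are placeholders:

// Sketch: one completion port delivers both socket completions and application work items.
#include <winsock2.h>
#include <windows.h>

const ULONG_PTR KEY_SOCKET = 1;      // completion key used for socket I/O
const ULONG_PTR KEY_WORK   = 2;      // completion key used for posted work items

struct PerIoData {                   // one per outstanding overlapped operation
    WSAOVERLAPPED ov;
    WSABUF        wsabuf;
    char          buf[512];
};

HANDLE g_iocp;

void Setup(SOCKET s)
{
    g_iocp = CreateIoCompletionPort(INVALID_HANDLE_VALUE, nullptr, 0, 0);
    CreateIoCompletionPort((HANDLE)s, g_iocp, KEY_SOCKET, 0);    // associate the socket
}

void QueueWorkItem(OVERLAPPED* item)  // producers of outgoing messages call this
{
    PostQueuedCompletionStatus(g_iocp, 0, KEY_WORK, item);
}

void WorkerThread()                   // typically one such thread per processor
{
    for (;;)
    {
        DWORD        bytes = 0;
        ULONG_PTR    key   = 0;
        LPOVERLAPPED ov    = nullptr;
        if (!GetQueuedCompletionStatus(g_iocp, &bytes, &key, &ov, INFINITE) && ov == nullptr)
            continue;                                   // wait failed, nothing dequeued

        if (key == KEY_SOCKET) {
            PerIoData* io = reinterpret_cast<PerIoData*>(ov);
            // 'bytes' of received data are in io->buf: handle the server notification,
            // then issue another WSARecv with io->ov to keep a receive outstanding.
        }
        else if (key == KEY_WORK) {
            // 'ov' carries the posted work item: transmit the outgoing message here.
        }
    }
}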
Following on from Michael's suggestion, I have some free code that provides a framework for IO Completion Port style socket stuff; and it includes an IOCP based work queue too. You should be able to grab some stuff from it to solve your problem from here.
Well, if both objects have standard Windows handles, you can have your client call WaitForMultipleObjects to wait on them.
You might want to investigate splitting the servicing of the network port off onto its own thread. That might simplify things greatly. However, it won't help if you just end up having to synchronize something else between that new thread and your main one.