I am new to network programming and the usage of Boost Asio library.
I successfully implemented a task for my requirement by modifying the Boost Asio "Blocking TCP Echo Server and Client", so that it performs a sequence of operations between my client and server.
Now, I have a requirement where I need to connect multiple Clients with my Server.
I found some relevant links suggesting the usage of async_accept at the Server side.
So, I tried running the Boost Asio example: "Async TCP Echo Server" with the "Blocking TCP Echo client", where the server distinguishes the different clients and addresses them accordingly.
But my actual requirement is that, instead of the server completing the entire process for one client, it has to perform the same operations for the first client, then go to the second client and perform those operations, then come back to the first client, and continue in this order until all operations are complete.
Is there any way or idea which could help me achieve this flow using Boost Asio? Also, I'm still using the "Blocking TCP Echo Client", which has a normal connect() rather than an async_connect(); is that a problem?
Also, is it possible to communicate between multiple clients through the server using Boost Asio?
Thanking you very much in advance!
There are two models for handling multiple clients concurrently on the server.
The first is to spawn a new thread for each client, with each thread handling its client synchronously: when an accept completes, you create a new worker thread and start it off on the sends and recvs required by your protocol, while your main thread goes back to accepting new connections. The second model is to use asynchronous APIs on a single thread, all operating on a single io_service.
With async, you prime the pump with an async accept and then call io_service::run(). When the accept completes, your callback runs. You then prime the pump again with a further async accept (for more clients) and start the async sends and recvs for the newly connected client. Since all sends and recvs are non-blocking, the only time your thread sleeps is when it has nothing to do; otherwise the io_service's run method takes care of everything for you.
If you are blocking on sends and recvs, though, you cannot process more than one client concurrently.
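A minimal sketch of that async model, in the spirit of the Boost Asio async echo server (written against the newer io_context API; older Boost spells it io_service). The port and buffer size are illustrative:

```cpp
#include <boost/asio.hpp>
#include <memory>
#include <utility>

using boost::asio::ip::tcp;

// One session object per connected client; it keeps itself alive through
// shared_from_this while reads/writes are in flight.
struct session : std::enable_shared_from_this<session> {
    explicit session(tcp::socket s) : socket_(std::move(s)) {}
    void start() { do_read(); }

    void do_read() {
        auto self = shared_from_this();
        socket_.async_read_some(boost::asio::buffer(data_),
            [this, self](boost::system::error_code ec, std::size_t n) {
                if (!ec) do_write(n);          // echo back, then read again
            });
    }
    void do_write(std::size_t n) {
        auto self = shared_from_this();
        boost::asio::async_write(socket_, boost::asio::buffer(data_, n),
            [this, self](boost::system::error_code ec, std::size_t) {
                if (!ec) do_read();
            });
    }

    tcp::socket socket_;
    char data_[512];
};

// Prime the pump: each completed accept immediately posts the next one,
// so the single thread interleaves work for all connected clients.
void do_accept(tcp::acceptor &acceptor) {
    acceptor.async_accept([&acceptor](boost::system::error_code ec, tcp::socket s) {
        if (!ec) std::make_shared<session>(std::move(s))->start();
        do_accept(acceptor);
    });
}

int main() {
    boost::asio::io_context io;
    tcp::acceptor acceptor(io, tcp::endpoint(tcp::v4(), 8080));
    do_accept(acceptor);
    io.run();   // the event loop sleeps only when there is nothing to do
}
```

Because no handler ever blocks, the server naturally alternates between clients whenever more than one has work pending, which is the first-client/second-client/first-client order you describe. A blocking connect() on the client side is fine; the client's blocking behaviour doesn't affect the server's async model.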
Related
I am looking into the possibility of listening on different sockets at once. To handle multiple socket connections at the same time, fd_set can be used on Linux. I have seen that gRPC also supports this functionality by having an epoll-based pollset.
https://github.com/grpc/grpc/blob/18df25228cfa1f97fc5cca9176fbaef64c0e4221/doc/epoll-polling-engine.md
I intend to call different services in async mode while providing a service at the same time. Therefore, I was thinking about having a pollset consisting of client sockets waiting for async responses and server sockets. This seems to be possible in gRPC, but I haven't been able to find anything in the gRPC API that exposes construction of a pollset.
Therefore, my question is: how can I use this capability of gRPC?
Does gRPC manage this automatically? In that case, how can I wait for incoming messages?
The same CompletionQueue should be used for both client and server. To wait for incoming messages, Next() needs to be invoked.
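A rough sketch of that setup, assuming the gRPC C++ async API; the commented-out service registration and stub call stand in for your generated proto types:

```cpp
#include <grpcpp/grpcpp.h>
#include <memory>

int main() {
    grpc::ServerBuilder builder;
    builder.AddListeningPort("0.0.0.0:50051", grpc::InsecureServerCredentials());
    // builder.RegisterService(&async_service);  // your generated AsyncService

    // ServerCompletionQueue derives from CompletionQueue, so the same queue
    // can also carry client-side async calls.
    std::unique_ptr<grpc::ServerCompletionQueue> cq = builder.AddCompletionQueue();
    std::unique_ptr<grpc::Server> server = builder.BuildAndStart();

    // A client-side async call issued against the shared queue would look like:
    // auto rpc = stub->AsyncSomeMethod(&context, request, cq.get());

    void *tag = nullptr;
    bool ok = false;
    // Next() blocks until ANY outstanding operation completes - an incoming
    // server call or a client response - and hands back its tag.
    while (cq->Next(&tag, &ok)) {
        // ... dispatch on 'tag' to identify which operation finished ...
    }
    return 0;
}
```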
I have implemented a zmq library using push/pull on Windows. There is a server and up to 64 clients running over loopback. Each client can send to and receive from the server. There is a thread that waits for each client to connect on a pull zmq socket. Clients can go away at any time.
The server is expected to go down at times and when it comes back up the clients need to reconnect to it.
The problem is that when nothing is connected, I have 64 receive threads waiting for a connection. This shows up as a lot of connections in TCPView, and my colleagues tell me it looks like a performance problem or a DDoS-style attack.
So in order to get around that issue, I'd like the clients to send some sort of heartbeat to the server ("hey, I'm here") on one socket. However, I can't see how to do that with zmq.
Any help would be appreciated.
I think the basic design of having 64 threads on the server waiting for external connections is flawed. Why not have a single 'master' thread binding the socket, which the external clients would connect to?
Internal to the server, you could still have 64 worker threads. Work would be distributed to the worker threads by the master thread. The communication between the master and the worker threads would be using zmq messages over the inproc transport.
What I have described are simple fan-in and fan-out patterns which are covered in the zmq guide. If you adopt this, most of the zmq code in the clients and workers would remain unchanged. You would have to write code for the master thread, but the zproxy class of CZMQ may work well for you (if you're using CZMQ).
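A minimal sketch of that layout using the libzmq C API, modelled on the guide's multithreaded-server example (ROUTER/DEALER with REP workers here rather than your push/pull; the endpoints, port, and worker count are illustrative):

```cpp
#include <zmq.h>
#include <thread>
#include <vector>

static void worker_routine(void *ctx) {
    void *rep = zmq_socket(ctx, ZMQ_REP);
    zmq_connect(rep, "inproc://workers");       // workers connect internally
    for (;;) {
        char buf[256];
        int n = zmq_recv(rep, buf, sizeof(buf), 0);
        if (n < 0) break;                       // context terminated
        zmq_send(rep, buf, n, 0);               // echo reply for the sketch
    }
    zmq_close(rep);
}

int main() {
    void *ctx = zmq_ctx_new();

    // The single 'master' socket that all external clients connect to.
    void *frontend = zmq_socket(ctx, ZMQ_ROUTER);
    zmq_bind(frontend, "tcp://*:5555");

    // Internal fan-out to the worker threads over the inproc transport.
    void *backend = zmq_socket(ctx, ZMQ_DEALER);
    zmq_bind(backend, "inproc://workers");

    std::vector<std::thread> workers;
    for (int i = 0; i < 64; ++i)
        workers.emplace_back(worker_routine, ctx);

    // zmq_proxy shuttles messages between frontend and backend; this is
    // essentially what CZMQ's zproxy wraps.
    zmq_proxy(frontend, backend, nullptr);

    // Not reached in this sketch; real code would tear down the context.
    return 0;
}
```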
So my advice is to get the basic design right before trying to add heartbeats. [Actually, I'm not sure how heartbeats would help your current problem.]
I have a server interacting with multiple clients, where the clients send messages to the server and the server reads them via the recv() method. The problem I am getting is that I'm using WaitForSingleObject(handle, 10000 ms) to make the server wait a few seconds while interacting with one client before letting the others access it, but then I start seeing the server answer clients with the wrong message and getting blocked. So it looks like a synchronization issue.
So my question is (since I'm a beginner in C++): how can I ensure that every incoming message is received and replied to the right client, while allowing all the clients to interact with the server?
There are two alternatives.
The first is a pretty standard model - one thread per client. When a client connects, you start a thread to handle it.
The second approach doesn't require many threads. You should use WSARecv() on an overlapped socket instead of recv(). This way, you can have multiple receive operations open simultaneously, one per client, and wait on them all with WaitForMultipleObjects(); specifically, you wait on the event inside each WSAOVERLAPPED. Remember that WaitForMultipleObjects() has a limit on the number of wait objects (MAXIMUM_WAIT_OBJECTS, which is 64); when exceeded, you will need to run another thread. The return code from WaitForMultipleObjects() tells you which client has sent data, so you can reply to it.
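A trimmed sketch of that overlapped approach (WSAStartup, accepting the sockets, and error handling are omitted; the client count and buffer size are illustrative):

```cpp
#include <winsock2.h>
#include <windows.h>

const int MAX_CLIENTS = 8;   // must stay <= MAXIMUM_WAIT_OBJECTS (64)

struct ClientCtx {
    SOCKET        sock;
    HANDLE        evt;       // created once with WSACreateEvent()
    WSAOVERLAPPED ov;
    WSABUF        wsabuf;
    char          buf[512];
};

// Post one overlapped receive; completion signals the client's event.
void post_recv(ClientCtx &c) {
    ZeroMemory(&c.ov, sizeof(c.ov));
    c.ov.hEvent  = c.evt;
    c.wsabuf.buf = c.buf;
    c.wsabuf.len = sizeof(c.buf);
    DWORD flags = 0;
    WSARecv(c.sock, &c.wsabuf, 1, nullptr, &flags, &c.ov, nullptr);
}

void serve(ClientCtx clients[], int count) {
    HANDLE events[MAX_CLIENTS];
    for (int i = 0; i < count; ++i) {
        clients[i].evt = WSACreateEvent();
        post_recv(clients[i]);
        events[i] = clients[i].evt;
    }
    for (;;) {
        // Wake when ANY client's receive completes; the return value
        // identifies which client sent data, so the reply goes to it.
        DWORD r = WaitForMultipleObjects(count, events, FALSE, INFINITE);
        int i = r - WAIT_OBJECT_0;
        DWORD bytes = 0, flags = 0;
        WSAGetOverlappedResult(clients[i].sock, &clients[i].ov, &bytes, FALSE, &flags);
        send(clients[i].sock, clients[i].buf, bytes, 0);
        WSAResetEvent(clients[i].evt);
        post_recv(clients[i]);               // re-arm the receive
    }
}
```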
Or, as suggested above, you could probably use select() to figure out which socket has data.
I need a client networking thread that can respond both to new messages to be transmitted and to the receipt of new data on the network. I wish to avoid having this thread run a polling loop; it should process only as needed.
The scenario is as follows:
A client application needs to communicate to a server via a protocol that is largely, but not entirely, synchronous. Typically, the client sends a message to the server and blocks until a response is received.
The server may process client requests asynchronously, in which case the response to the client is not a result, but a notification that processing has begun. A result message is sent to the client at some point in the future, when the server has finished processing the client request.
The asynchronous result notifications can arrive at the client at any time. These notifications need to be processed when they are received, i.e. it is not possible to process a backlog only when the client transmits again.
The client's networking thread must receive and process notifications from the server, and transmit outgoing messages from the client.
To achieve this, I need to make a thread wake to perform processing either when network data is received OR when a message to transmit is enqueued into an input queue.
How can a thread wake to perform processing of an enqueued work item OR data from a socket?
I am interested primarily in using the plain Win32 APIs.
A minimal example or relevant tutorial would be very welcome!
An alternative to I/O Completion Ports for sockets is using WSAEventSelect to associate an event with the socket. Then as others have said, you just need to use another event (or some sort of waitable handle) to signal when an item has been added to your input queue, and use WaitForMultipleObjects to wait for either kind of event.
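For example (a sketch only; the queueEvent handle is an assumption - the producer thread would create it with CreateEvent and signal it with SetEvent after enqueueing a message):

```cpp
#include <winsock2.h>
#include <windows.h>

void network_thread(SOCKET sock, HANDLE queueEvent) {
    // Associate an event with the socket's read/close activity.
    // Note this also puts the socket into non-blocking mode.
    HANDLE sockEvent = WSACreateEvent();
    WSAEventSelect(sock, sockEvent, FD_READ | FD_CLOSE);

    HANDLE handles[2] = { sockEvent, queueEvent };
    for (;;) {
        DWORD r = WaitForMultipleObjects(2, handles, FALSE, INFINITE);
        if (r == WAIT_OBJECT_0) {
            // Discover which network events fired; this call also resets
            // sockEvent for us.
            WSANETWORKEVENTS ev;
            WSAEnumNetworkEvents(sock, sockEvent, &ev);
            if (ev.lNetworkEvents & FD_READ) {
                char buf[512];
                int n = recv(sock, buf, sizeof(buf), 0);
                // ... process n bytes of incoming data ...
            }
            if (ev.lNetworkEvents & FD_CLOSE) break;
        } else if (r == WAIT_OBJECT_0 + 1) {
            // ... drain the outgoing-message queue and send() each item ...
        }
    }
    WSACloseEvent(sockEvent);
}
```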
You can set up an I/O Completion Port for the handles and have your thread wait on the completion port:
http://technet.microsoft.com/en-us/sysinternals/bb963891.aspx
Actually, you can have multiple threads wait on the port (one thread per processor usually works well).
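A sketch of how that looks: socket completions arrive on the port via overlapped I/O, and the producer injects queued work items with PostQueuedCompletionStatus (the completion keys and the work-item plumbing here are illustrative assumptions):

```cpp
#include <winsock2.h>
#include <windows.h>

const ULONG_PTR KEY_SOCKET = 1;   // illustrative completion keys
const ULONG_PTR KEY_WORK   = 2;

void io_thread(HANDLE iocp) {
    for (;;) {
        DWORD bytes = 0;
        ULONG_PTR key = 0;
        LPOVERLAPPED ov = nullptr;
        if (!GetQueuedCompletionStatus(iocp, &bytes, &key, &ov, INFINITE))
            continue;   // real code would inspect the failed operation
        if (key == KEY_SOCKET) {
            // ... handle 'bytes' of received data, re-post a WSARecv ...
        } else if (key == KEY_WORK) {
            // 'ov' smuggles a pointer to the queued work item here.
            // ... transmit the outgoing message ...
        }
    }
}

// Setup: associate the socket with the port, then start the thread(s).
// HANDLE iocp = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 0);
// CreateIoCompletionPort((HANDLE)sock, iocp, KEY_SOCKET, 0);
// Producer side: enqueue the item, then wake an I/O thread with
// PostQueuedCompletionStatus(iocp, 0, KEY_WORK, (LPOVERLAPPED)workItem);
```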
Following on from Michael's suggestion, I have some free code that provides a framework for IO Completion Port style socket stuff; and it includes an IOCP based work queue too. You should be able to grab some stuff from it to solve your problem from here.
Well, if both objects have standard Windows handles, you can have your client call WaitForMultipleObjects to wait on them.
You might want to investigate splitting the servicing of the network port off onto its own thread. That might simplify things greatly. However, it won't help if you just end up having to synchronize something else between that new thread and your main one.
I am writing a server in linux that is supposed to serve an API.
Initially, I wanted to make it multi-threaded on a single port, meaning that I'd have multiple threads working on the various requests received on a single port.
One of my friends told me that that is not the way it is supposed to work. He told me that when a request is received, I first have to follow a handshake procedure, create a thread that listens on some other port dedicated to the request, and then redirect the requesting client to the new port.
Theoretically, it's very interesting but I could not find any information on how to implement the handshake and do the redirection. Can someone help?
If I'm not wrong in interpreting your responses, once I create a multithreaded server with a main thread listening on a port that creates a new thread to handle each request, I'm essentially making it multithreaded on a single port?
Consider the scenario where I get a large number of requests every second. Isn't it true that every request on the port now has to wait for the "current" request to complete? If not, how would the communication still work? Say a browser sends a request: the thread handling it has to first listen on the port, block it, process the request, respond, and then unblock it.
By this logic, even though I have "multiple threads", I'm only using one thread at a time apart from the main thread, because the port is being blocked.
What your friend told you is similar to passive FTP - a client tells the server that it needs a connection, the server sends back the port number and the client creates a data connection to that port.
But all you want is a multithreaded server. All you need is one server socket listening for and accepting connections on a given port. As soon as the automatic TCP handshake is finished, you'll get a new socket from the accept function; that socket will be used for communication with the client that has just connected. Now you only have to create a new thread, passing that client socket to the thread function. In your main server thread, you then call accept again in order to accept another connection.
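A bare-bones sketch of that pattern with Linux sockets (the port and the echo handler are illustrative; real code would check every return value):

```cpp
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>
#include <thread>

// Each thread owns one client socket; all further communication with
// that client goes through client_fd.
void handle_client(int client_fd) {
    char buf[512];
    ssize_t n;
    while ((n = recv(client_fd, buf, sizeof(buf), 0)) > 0)
        send(client_fd, buf, n, 0);          // echo back for the sketch
    close(client_fd);
}

int main() {
    int server_fd = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port        = htons(8080);
    bind(server_fd, (sockaddr*)&addr, sizeof(addr));
    listen(server_fd, SOMAXCONN);

    for (;;) {
        // accept returns a NEW socket once the TCP handshake is done;
        // the listening socket stays free to accept further clients.
        int client_fd = accept(server_fd, nullptr, nullptr);
        if (client_fd < 0) continue;
        std::thread(handle_client, client_fd).detach();
    }
}
```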
TCP/IP does the handshake for you; if you can't think of any reason to do an application-level handshake, then your application doesn't demand one.
An example of an application specific handshake could be for user authentication.
What your colleague is suggesting sounds like the way FTP works. This is not a good thing to do - the internet these days is more or less used by protocols that use a single port, and having a separate command port is bad. One of the reasons is that stateful firewalls aren't designed for multi-port applications; they have to be extended for each individual application that does things this way.
Look at ASIO's tutorial on async TCP. There, one part accepts connections over TCP and spawns handlers that each communicate with a single client. That's how TCP servers usually work (including HTTP/web, the most common TCP protocol).
You may disregard the asynchronous parts of ASIO if you're set on creating a thread per connection; they don't apply to your question. (Going fully async with one worker thread per core is nice, but it might not integrate well with the rest of your environment.)