gRPC polling for incoming packets from multiple sockets at once - c++

I am looking into the possibility of listening on different sockets at once. To handle multiple socket connections at the same time, an fd_set (with select()) can be used on Linux. I have seen that gRPC also supports this functionality through its epoll-based pollset.
https://github.com/grpc/grpc/blob/18df25228cfa1f97fc5cca9176fbaef64c0e4221/doc/epoll-polling-engine.md
I intend to call different services in async mode while providing a service at the same time. Therefore, I was thinking about having a poll-set consisting of client sockets waiting for async responses plus the server sockets. This seems to be possible in gRPC, but I haven't been able to find anything in the gRPC API that exposes the construction of a poll-set.
Therefore, my question is: how do I use this capability of gRPC?
Does gRPC manage this automatically? In that case, how can I wait for incoming messages?

The same CompletionQueue should be used for both the client and the server. To wait for incoming messages, Next() needs to be invoked on that queue.
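A minimal sketch, assuming a generated service named Echo (the service, stub, and RPC names here are illustrative, not from any real .proto): one ServerCompletionQueue drives both the async server and any async client calls, so a single Next() loop waits on all the underlying sockets at once.

```cpp
#include <grpcpp/grpcpp.h>
#include <memory>
#include "echo.grpc.pb.h"  // hypothetical generated header for a service `Echo`

int main() {
  Echo::AsyncService service;

  grpc::ServerBuilder builder;
  builder.AddListeningPort("0.0.0.0:50051", grpc::InsecureServerCredentials());
  builder.RegisterService(&service);
  // One queue for everything; ServerCompletionQueue is-a CompletionQueue.
  std::unique_ptr<grpc::ServerCompletionQueue> cq = builder.AddCompletionQueue();
  std::unique_ptr<grpc::Server> server = builder.BuildAndStart();

  // A client stub talking to some other service can share the same queue:
  auto stub = Echo::NewStub(
      grpc::CreateChannel("localhost:50052", grpc::InsecureChannelCredentials()));
  // ... issue async calls such as stub->AsyncSend(&ctx, request, cq.get()),
  //     then reader->Finish(&reply, &status, tag) to register a tag ...

  void* tag;
  bool ok;
  // Next() blocks until ANY pending event completes, whether it is an
  // incoming server request or an async client response; gRPC manages
  // the epoll pollset of sockets internally.
  while (cq->Next(&tag, &ok)) {
    // Dispatch on `tag` to whichever pending operation it identifies.
  }
}
```

gRPC manages the pollset internally; from the application's point of view, the completion queue is the only thing you block on.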

Related

Boost Asio TCP Server Handling multiple clients

I am new to network programming and the usage of Boost Asio library.
I successfully implemented a task for my requirement by modifying the Boost Asio "Blocking TCP Echo Server and Client" which performs transactions of operations between my Client and Server.
Now, I have a requirement where I need to connect multiple Clients with my Server.
I found some relevant links suggesting the usage of async_accept at the Server side.
So, I tried running the Boost Asio example: "Async TCP Echo Server" with the "Blocking TCP Echo client", where the server distinguishes the different clients and addresses them accordingly.
But my actual requirement is that, instead of the server completing the entire process for one client, it has to perform the same operations for the first client, then go to the second client and perform those operations, then come back to the first client, and continue in this order until all operations are complete.
Is there any way or idea which could help me perform this flow using Boost Asio? Also, I'm just using the "Blocking TCP Echo Client", which has a normal connect() and not an async_connect(); is that a problem?
Also, is it possible to communicate between multiple clients through the server using Boost Asio?
Thanking you very much in advance!
There are two models for handling multiple clients concurrently on the server.
The first is to spawn a new thread for each client, with each thread handling its client synchronously: when an accept completes, you create a new worker thread and start it off on the sends and recvs required by your protocol, while your main thread goes back to accepting new connections. The second model is to use the asynchronous APIs on a single thread, with everything operating on a single io_service.
With async, you prime the pump with an async accept and then call io_service::run. When the accept completes, your callback runs. You then prime the pump again with further accepts (for more clients) and start the async sends and recvs for the newly connected client. Since all sends and recvs are non-blocking, the only time your thread sleeps is when it has nothing to do; otherwise the io_service::run method takes care of everything for you, as the sketch below shows.
If you are blocking on sends and recvs, though, you cannot process more than one client concurrently.
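A minimal sketch of the async echo model (Boost 1.66+ for the move-accepting async_accept; the Session class name and port are illustrative):

```cpp
#include <boost/asio.hpp>
#include <cstddef>
#include <memory>

using boost::asio::ip::tcp;

// Each accepted client gets a Session that chains async reads and writes.
class Session : public std::enable_shared_from_this<Session> {
public:
  explicit Session(tcp::socket socket) : socket_(std::move(socket)) {}
  void start() { do_read(); }

private:
  void do_read() {
    auto self = shared_from_this();
    socket_.async_read_some(boost::asio::buffer(data_),
        [this, self](boost::system::error_code ec, std::size_t n) {
          if (!ec) do_write(n);  // echo back, then read again
        });
  }
  void do_write(std::size_t n) {
    auto self = shared_from_this();
    boost::asio::async_write(socket_, boost::asio::buffer(data_, n),
        [this, self](boost::system::error_code ec, std::size_t) {
          if (!ec) do_read();
        });
  }
  tcp::socket socket_;
  char data_[1024];
};

// Prime the pump: each completed accept immediately posts the next one.
void do_accept(tcp::acceptor& acceptor) {
  acceptor.async_accept(
      [&acceptor](boost::system::error_code ec, tcp::socket socket) {
        if (!ec) std::make_shared<Session>(std::move(socket))->start();
        do_accept(acceptor);  // go back to accepting more clients
      });
}

int main() {
  boost::asio::io_context io;
  tcp::acceptor acceptor(io, tcp::endpoint(tcp::v4(), 12345));
  do_accept(acceptor);
  io.run();  // sleeps only when there is nothing to do
}
```

Because every client's reads and writes interleave on the one io_context, the server naturally alternates between clients the way the question asks for, rather than finishing one client before touching the next.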

Boost::Beast Websocket Bidirection Stream (C++)

I'm looking into using the Boost::Beast websocket library to create an asynchronous bidirectional pipe to pass data between a server and a client. I leveraged some code from the async example (I can post some at a later time if necessary, don't have access to it now). I currently have a class which creates several threads running a SocketListener. When a client connects, it creates a Session shared_ptr to do the async read and write functions. The problem is, this session object will only write out when the client has sent me a message. I'm looking for an implementation that allows my server to write on demand to all the clients connected to it and also listen for incoming data from those connections.
Is this possible? Am I using the wrong technique for this? The other way I thought this might be achievable is to have an incoming websocket and an outgoing websocket: incoming would allow a client to drop configurations for the server, and outgoing would just monitor a message queue and do an async write if a message is available.
Thanks!
Is this possible?
Yes
Am I using the wrong technique for this?
No
The other way I thought this may be achievable is to have an incoming websocket and an outgoing websocket.
That is not necessary, a websocket stream is full-duplex. You can read and write at the same time.
outgoing would just monitor a message queue and do a async write if a message is available.
This is the correct approach, but you can do that in the same Session object that also handles the reads.
Here's an example that reads continuously and can also write full-duplex: https://github.com/vinniefalco/CppCon2018
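As a hedged sketch of the same idea (this is not code from the linked repository; the Session and member names are illustrative, and Boost 1.70+ is assumed): the session reads continuously, while send() queues outbound messages and chains one async_write at a time, so the server can write on demand whenever it likes.

```cpp
#include <boost/beast/core.hpp>
#include <boost/beast/websocket.hpp>
#include <deque>
#include <memory>
#include <string>

namespace beast = boost::beast;
namespace net = boost::asio;
using tcp = net::ip::tcp;

class Session : public std::enable_shared_from_this<Session> {
public:
  explicit Session(tcp::socket socket) : ws_(std::move(socket)) {}

  void run() {
    auto self = shared_from_this();
    ws_.async_accept([self](beast::error_code ec) {
      if (!ec) self->do_read();
    });
  }

  // Write on demand: call from the io_context thread (or post through a
  // strand) any time the server wants to push a message to this client.
  void send(std::string msg) {
    queue_.push_back(std::move(msg));
    if (queue_.size() == 1)  // no write currently in flight: start one
      do_write();
  }

private:
  void do_read() {
    auto self = shared_from_this();
    ws_.async_read(buffer_, [self](beast::error_code ec, std::size_t) {
      if (ec) return;
      // ... handle the incoming message in self->buffer_ ...
      self->buffer_.consume(self->buffer_.size());
      self->do_read();  // keep reading; writes proceed independently
    });
  }

  void do_write() {
    auto self = shared_from_this();
    ws_.async_write(net::buffer(queue_.front()),
                    [self](beast::error_code ec, std::size_t) {
                      if (ec) return;
                      self->queue_.pop_front();
                      if (!self->queue_.empty()) self->do_write();
                    });
  }

  beast::websocket::stream<tcp::socket> ws_;
  beast::flat_buffer buffer_;
  std::deque<std::string> queue_;  // pending outbound messages
};
```

The queue exists because Beast permits only one outstanding async_write per stream; chaining writes off the queue is what lets reads and writes run full-duplex in a single Session.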

Multi-way inter-process communication

There are 10 processes on my machine, and each should have the capability to communicate with every other.
Now, the scenario is that all 10 processes should be in the listening state so that any process can communicate with any other at any time. Again, when required, a process should be able to pass a message to any of the other processes.
I am trying to code this with C++ and Unix TCP/UDP sockets. However, I don't understand how to structure it. Shall I use UDP or TCP; which would be better? How can a process listen and send data simultaneously?
I need help.
The decision of UDP vs TCP depends on your messages, whether or not they need to be reliably delivered, etc.
For pure TCP, each peer would have a TCP socket on which it accepts connections from the other peers (and each accept would result in a new socket). This new socket is bidirectional and can be used for sending/receiving between one peer and another. With this solution, you would also need some sort of discovery mechanism.
For UDP, it's much the same, except you don't need the accepting socket. You still need some form of discovery mechanism.
The discovery mechanism could either be another peer with a well known (via configuration, etc) address, or possibly you could use UDP broadcast for the discovery mechanism.
In terms of ZeroMQ, which sits at a slightly higher level than raw sockets, you would have a single ROUTER socket on which you're listening and receiving data, and one DEALER socket per peer on which you're sending data; see the sketch just below.
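A hedged sketch of that layout using the cppzmq bindings (the ports are illustrative and discovery is out of scope; only one of the nine DEALER sockets is shown):

```cpp
#include <zmq.hpp>

int main() {
  zmq::context_t ctx(1);

  // One ROUTER socket on which this peer listens and receives from everyone.
  zmq::socket_t router(ctx, zmq::socket_type::router);
  router.bind("tcp://*:5550");  // this peer's well-known port

  // One DEALER socket per peer this process sends to.
  zmq::socket_t to_peer(ctx, zmq::socket_type::dealer);
  to_peer.connect("tcp://localhost:5551");
  to_peer.send(zmq::str_buffer("hello"), zmq::send_flags::none);

  // ROUTER delivers [sender identity][payload] as a multipart message.
  zmq::message_t identity, payload;
  auto r1 = router.recv(identity, zmq::recv_flags::none);
  auto r2 = router.recv(payload, zmq::recv_flags::none);
  (void)r1; (void)r2;
}
```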
No matter the solution, you would likely need a thread for handling the network connections using poll() or something like that, and as messages are received, another thread (or thread pool) for handling them; a sketch of that loop follows.
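For the raw-socket route, the network thread's loop could look like this minimal sketch (POSIX poll(); the reading and the hand-off to worker threads are elided):

```cpp
#include <poll.h>
#include <vector>

// Network thread: block in poll() across the listening socket plus one
// entry per peer connection, then hand complete messages to workers.
void network_loop(std::vector<pollfd>& fds) {
  for (;;) {
    if (poll(fds.data(), fds.size(), -1) < 0)  // -1: block indefinitely
      break;
    for (pollfd& p : fds) {
      if (p.revents & POLLIN) {
        // read from p.fd here and pass the message to a worker thread
      }
    }
  }
}
```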
You can run each process as a server and spawn 9 more threads that connect to the other processes as a client.
This question applies to any language, so the answer is not C++ related.
When given a choice, look for a library to have an easier communication (e.g. apache-thrift).
About TCP/UDP: TCP is typically slower but reliable, so by default go for TCP; still, there may be reasons for choosing UDP, like streaming or multicast/broadcast. Reliability might not be an issue when all processes are on the same machine, but you might want to communicate with external processes later on.
A threaded process can use the same socket for sending and receiving without locks.
Also, you need some kind of scheme to find out which port to send to in order to reach a given process, and with TCP you have to decide whether to keep static connections open or to connect every time you want to send.
What you want to do seems to be message passing.
Before trying to build it yourself, take a look at Boost.MPI.

C++ socket design

I am designing a client server socket program using TCP/IP.
The server listens on a certain port, the client program makes 2 connections to the server. One is for command and response and the other is for streaming of data.
For the command and response, I can use the normal blocking socket mode to receive the client command and send the server response.
For the streaming data, the server would wait for the client to send a start-stream command and then begin continuously sending data to that client. The issue now is that I need the handler to also listen on this connection for the stop-stream command. Hence, I was thinking of making this connection non-blocking so that the receive would not block, followed by a non-blocking send.
Is this method of implementing the server and client handler efficient?
Take a look at Boost::asio's socket management layer. It's very well written.
http://www.boost.org/doc/libs/1_49_0/doc/html/boost_asio/tutorial/tutdaytime1.html
Yes it is very efficient.
You can use libraries like libevent.
From the perspective of efficiency, the server should always be designed to use non-blocking sockets with an event-driven asynchronous I/O architecture. Blocking sockets should be avoided on the server side.
Fortunately, there are a few mature open-source frameworks you can use. Among them, libev is the most lightweight.
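To make the non-blocking design from the question concrete, here is a hedged sketch using POSIX poll(); the single byte 'S' as the stop-stream command is a hypothetical protocol choice, not something from the question:

```cpp
#include <poll.h>
#include <sys/socket.h>

// Streams data continuously; returns when the client sends the
// (hypothetical) stop command or the connection drops.
void stream_loop(int client_fd) {
  pollfd p{client_fd, POLLIN | POLLOUT, 0};
  bool streaming = true;
  while (streaming) {
    if (poll(&p, 1, -1) < 0) break;
    if (p.revents & POLLIN) {  // check for a command without blocking
      char cmd;
      if (recv(client_fd, &cmd, 1, 0) <= 0 || cmd == 'S')  // 'S' = stop
        streaming = false;
    }
    if (streaming && (p.revents & POLLOUT)) {  // socket is writable
      const char chunk[1024] = {};  // next block of stream data
      if (send(client_fd, chunk, sizeof chunk, 0) < 0)
        streaming = false;
    }
  }
}
```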

Multithreaded Server Issue

I am writing a server in linux that is supposed to serve an API.
Initially, I wanted to make it Multi-threaded on a single port, meaning that I'd have multiple threads working on various request received on a single port.
One of my friends told me that this is not the way it is supposed to work. He told me that when a request is received, I first have to follow a handshake procedure, create a thread that listens on some other port dedicated to that request, and then redirect the requesting client to the new port.
Theoretically, it's very interesting but I could not find any information on how to implement the handshake and do the redirection. Can someone help?
If I'm not wrong in interpreting your responses: once I create a multithreaded server with a main thread listening on a port and creating a new thread to handle each request, I'm essentially making it multithreaded on a single port?
Consider the scenario where I get a large number of requests every second. Isn't it true that every request on the port now has to wait for the "current" request to complete? If not, how would the communication still be done: say a browser sends a request, so the thread handling it has to first listen on the port, block it, process it, respond, and then unblock it.
By this, even though I have "multithreads", all I'm using is one single thread at a time apart from the main thread, because the port is being blocked.
What your friend told you is similar to passive FTP - a client tells the server that it needs a connection, the server sends back the port number and the client creates a data connection to that port.
But all you wanted is a multithreaded server. All you need is one server socket listening for and accepting connections on a given port. As soon as the automatic TCP handshake is finished, you get a new socket back from the accept function; that socket is used for communication with the client that has just connected. Now you only have to create a new thread, passing that client socket to the thread function. In your server thread, you then call accept again in order to accept another connection; see the sketch below.
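A minimal sketch of that accept-then-spawn pattern, using POSIX sockets and std::thread (the port is illustrative and error handling is trimmed):

```cpp
#include <netinet/in.h>
#include <sys/socket.h>
#include <thread>
#include <unistd.h>

void handle_client(int client_fd) {
  // ... talk to this client on client_fd ...
  close(client_fd);
}

int main() {
  int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
  sockaddr_in addr{};
  addr.sin_family = AF_INET;
  addr.sin_addr.s_addr = INADDR_ANY;
  addr.sin_port = htons(8080);  // illustrative port
  bind(listen_fd, reinterpret_cast<sockaddr*>(&addr), sizeof addr);
  listen(listen_fd, SOMAXCONN);

  for (;;) {
    // accept() returns a NEW socket per client; the listening socket
    // immediately goes back to waiting for the next connection, so the
    // port is never "blocked" by a client being served.
    int client_fd = accept(listen_fd, nullptr, nullptr);
    if (client_fd < 0) continue;
    std::thread(handle_client, client_fd).detach();
  }
}
```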
TCP/IP does the handshake for you; if you can't think of any reason to do an additional handshake, then your application does not demand it.
An example of an application specific handshake could be for user authentication.
What your colleague is suggesting sounds like the way FTP works. This is not a good thing to do: the internet these days is more or less built around protocols that use a single port, and having a separate command port is bad. One of the reasons is that stateful firewalls aren't designed for multi-port applications; they have to be extended for each individual application that works this way.
Look at ASIO's tutorial on async TCP. There, one part accepts connections and spawns handlers that each communicate with a single client. That's how TCP servers usually work (including HTTP/web, the most common TCP protocol).
You may disregard the asynchronous parts of ASIO if you're set on creating a thread per connection; they don't apply to your question. (Going fully async with one worker thread per core is nice, but it might not integrate well with the rest of your environment.)