Concurrent gRPC calls in Java - concurrency

I have a dynamic peer-to-peer network where nodes communicate with gRPC. Each node has its own server and client. One gRPC method is defined for the login of a new node. I use a synchronous message to communicate the login to all the others: I create a new channel to each of the other servers, send a single message, and wait for a response.
rpc enter(LogIn) returns (Response);
If I have one node in my network (node 1) and then two or more nodes enter at the same time, for example node 2 and node 3, they will both call the gRPC method "enter" on node 1's server. With this type of method, what is the behavior of node 1's server? Is it able to manage both requests? In other words, does gRPC queue messages that arrive concurrently, or will it handle only one request at a time?
thanks

gRPC supports concurrent execution of multiple RPCs. When an RPC (or RPC event) arrives, it is queued on the serverBuilder.executor() specified when building the server. If you don't specify one, a default executor is used that creates as many threads as necessary. Nothing in gRPC behaves differently depending on whether the concurrent RPCs are calls to the same RPC method or to different RPC methods.
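For illustration, here is a minimal Java sketch of supplying that executor when building the server. The port, thread count, and the commented-out LoginServiceImpl (standing in for the generated implementation of the enter() RPC) are assumptions, not taken from the question:

import io.grpc.Server;
import io.grpc.ServerBuilder;
import java.util.concurrent.Executors;

public class NodeServer {
    public static void main(String[] args) throws Exception {
        Server server = ServerBuilder.forPort(50051)
                // Optional: fix the number of worker threads. Without executor(), gRPC uses
                // a default executor that grows as needed, so concurrent enter() calls from
                // node 2 and node 3 are dispatched to separate threads either way.
                .executor(Executors.newFixedThreadPool(8))
                // .addService(new LoginServiceImpl())  // hypothetical impl of the enter() RPC
                .build()
                .start();
        server.awaitTermination();
    }
}

Passing a single-threaded executor (Executors.newSingleThreadExecutor()) would instead serialize the incoming RPCs, which is one way to force one-at-a-time handling if the login logic is not thread-safe.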

Related

gRPC service - client with multi-process

I have a service written in C++ working in the background, and a client written in Python that actually calls functions in C++ (using pybind11) which talk to that service. In my client example I am creating 2 processes. After the fork, the client in the new child process is able to send requests via gRPC but does not receive the answer message back.
I read there are problems with gRPC and forking in Python, but I am not sure how this can be avoided. Is creating a new stub for each object in each child process supposed to work?
The flow:
I make a request from the main process, getting an object from the server via pybind11 + gRPC.
Then I fork 2 processes and send each one the object returned from the previous service call.
In the child process, I make another request using that object; the request is sent and the answer is created in the service, but I never get it in the client.

Boost Asio TCP Server Handling multiple clients

I am new to network programming and the usage of the Boost Asio library.
I successfully implemented a task for my requirement by modifying the Boost Asio "Blocking TCP Echo Server and Client", which performs transactions of operations between my client and server.
Now, I have a requirement where I need to connect multiple Clients with my Server.
I found some relevant links suggesting the usage of async_accept at the Server side.
So, I tried running the Boost Asio example: "Async TCP Echo Server" with the "Blocking TCP Echo client", where the server distinguishes the different clients and addresses them accordingly.
But my actual requirement is that, instead of the server completing the entire process for one client, it has to perform the same operations for the first client, then go to the second client and perform those operations, then come back to the first client, and continue in this order until all operations are complete.
Is there any way or idea which could help me perform this flow using Boost Asio? Also, I'm just using the "Blocking TCP Echo Client", which has a normal connect() and not an async_connect(); is that a problem?
Also, is it possible to communicate between multiple clients through the server using Boost Asio?
Thank you very much in advance!
There are two models for handling multiple clients concurrently on the server.
One is to spawn a new thread for each client, with each thread handling its client synchronously: when the accept completes, you create a new worker thread and start the worker on the send and recv required by your protocol, while your main thread goes back to accepting new connections. The second model is to use asynchronous APIs on a single thread, all operating on a single io_service.
With async, you prime the pump with an async accept and then call io_service::run(). When the accept completes, your callback runs. You then prime the pump again with further accepts (for more clients) and start the async send and recv for the newly accepted client. Since all sends and recvs are non-blocking, the only time your thread sleeps is when it has nothing to do. Otherwise the io_service::run() call takes care of everything for you.
If you are blocking on sends and recvs on a single thread, though, you cannot process more than one client concurrently.
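As an illustration of the first model only (plain Java rather than Boost Asio, to match the main question's language; the port and buffer size are arbitrary): the main thread does nothing but accept, and every connection gets its own thread doing blocking reads and writes.

import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class ThreadPerClientEchoServer {
    public static void main(String[] args) throws Exception {
        try (ServerSocket acceptor = new ServerSocket(9000)) {
            while (true) {
                Socket client = acceptor.accept();          // main thread: accept only
                new Thread(() -> handle(client)).start();   // worker: blocking send/recv
            }
        }
    }

    private static void handle(Socket client) {
        try (Socket c = client;
             InputStream in = c.getInputStream();
             OutputStream out = c.getOutputStream()) {
            byte[] buf = new byte[1024];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);                       // echo back what was read
            }
        } catch (Exception e) {
            // a real server would log the error and clean up per-client state here
        }
    }
}

The second (async) model would correspond to NIO selectors in Java rather than Boost Asio's io_service, but the structure described in the answer above applies the same way.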

gRPC polling for incoming packets from multiple sockets at once

I am looking into the possibility of listening on different sockets at once. To handle multiple socket connections at the same time, fd_set can be used in Linux. I have seen that gRPC also supports this functionality by having an epoll-based pollset.
https://github.com/grpc/grpc/blob/18df25228cfa1f97fc5cca9176fbaef64c0e4221/doc/epoll-polling-engine.md
I intend to call different services in async mode while providing a service at the same time. Therefore, I was thinking about having a poll-set consisting of client sockets waiting for async responses and of server sockets. It seems to be possible in gRPC, but I haven't been able to find anything in the gRPC API that exposes construction of a poll-set.
Therefore, my question is: how do I use this capability of gRPC?
Does gRPC manage this automatically? In that case, how can I wait for incoming messages?
The same CompletionQueue should be used for both client and server. To wait for the incoming messages, Next() needs to be invoked on it.

Server with multiple clients and message forwarding

I have a server running and listening on a port for incoming connections. When a client establishes a connection, it is put in a thread pool and set to run.
The clients should be able to send messages to each other, not only to the server; the server basically forwards the received messages to another client.
I am using C++ and Qt, so when I get a new Client I put it in a QThreadPool:
Client *c = new Client();
pool->start(c);
But I cannot search for specific clients in this pool, so I thought of also storing the clients in a list, which can be searched.
The clients have a data structure with outgoing and incoming messages, which a server thread can then handle by iterating through the list of clients.
Is this a good approach, or is there a better way to solve this? Is the pool even necessary anymore if I store them in the list too?
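For what it's worth, here is a minimal sketch of that registry idea, written in Java rather than Qt/C++ to match the main question's language; the class names and the use of a String client id and String messages are assumptions for illustration. The pool still runs the client handlers; the map exists only so the forwarding code can look a client up by id instead of scanning:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.LinkedBlockingQueue;

public class ClientRegistry {
    // Hypothetical per-client handle: the outgoing-message queue its worker thread drains.
    public static final class ClientHandle {
        final BlockingQueue<String> outgoing = new LinkedBlockingQueue<>();
    }

    private final ConcurrentMap<String, ClientHandle> clients = new ConcurrentHashMap<>();

    public void register(String clientId, ClientHandle handle) {
        clients.put(clientId, handle);
    }

    public void unregister(String clientId) {
        clients.remove(clientId);
    }

    // Forward a message from one client to another by id, instead of iterating over a list.
    public boolean forward(String targetId, String message) {
        ClientHandle target = clients.get(targetId);
        if (target == null) {
            return false;                      // target not connected
        }
        return target.outgoing.offer(message); // the target's worker thread sends it out
    }
}

In Qt the same idea would typically be a QHash (or std::unordered_map) guarded by a mutex, kept alongside the QThreadPool.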

Specific Akka UseCase

I have a specific use-case for an Akka implementation.
I have a set of agents that send heartbeats to Akka. Akka takes each heartbeat and assigns actors to send it to my meta-data server (a separate server). This part is done.
Now my meta-data server also needs to send action information to the agents. However, since these agents may be behind firewalls, Akka cannot communicate with them directly, so it needs to send the action as a response to the heartbeat. Thus, when the meta-data server sends an action, Akka stores it in a DurableMessageQueue (a separate one for each agent ID) and keeps the mapping of agent ID to DurableMessageQueue in a HashMap. Then, whenever a heartbeat comes in, before responding Akka checks this queue and piggybacks the action on the response.
The issue with this is that the HashMap will live in a single JVM and therefore I cannot scale this. Am I missing something, or is there a better way to do it?
I have Akka running behind a Mina server, which receives and sends messages.
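For reference, a single-JVM sketch of the piggyback state described above, using plain Java collections in place of the per-agent DurableMessageQueue (class and method names are hypothetical); this map of queues is exactly the state that is confined to one JVM and would have to be partitioned or externalized to scale out:

import java.util.Optional;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ConcurrentMap;

public class PendingActionStore {
    // agentId -> queue of actions waiting to be piggybacked on the next heartbeat response
    private final ConcurrentMap<String, Queue<String>> pending = new ConcurrentHashMap<>();

    // Called when the meta-data server pushes an action for an agent.
    public void enqueue(String agentId, String action) {
        pending.computeIfAbsent(agentId, id -> new ConcurrentLinkedQueue<>()).add(action);
    }

    // Called while building the heartbeat response: take at most one pending action.
    public Optional<String> pollFor(String agentId) {
        Queue<String> q = pending.get(agentId);
        return Optional.ofNullable(q == null ? null : q.poll());
    }
}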