I have a service written in C++ running in the background, and a client written in Python that actually calls functions in C++ (using pybind11) which talk to that service. In my client example I create 2 processes. After the fork, the client in the new child process is able to send requests via gRPC but does not receive the answer message back.
I have read that there are problems with gRPC and forking in Python, but I am not sure how this can be avoided. Is creating a new stub for each object in each child process supposed to work?
The flow:
I make a request from the main process, getting an object from the server via pybind11 + gRPC.
Then I fork 2 processes and pass each one the object returned by the previous service call.
In the child process, I make another request using that object: the request is sent and the answer is created in the service, but I never receive it in the client.
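The usual workaround is to make sure no live gRPC channel or stub crosses the fork: tear the channel down in the parent (or never create one there) before forking, and build a fresh channel and stub inside each child. The Python layer also has opt-in fork support behind the GRPC_ENABLE_FORK_SUPPORT=1 environment variable. Below is a minimal C++ sketch of the recreate-after-fork shape, assuming a hypothetical MyService generated from your proto (the stub lines are commented because they depend on your generated code):

#include <grpcpp/grpcpp.h>
#include <sys/wait.h>
#include <unistd.h>

int main() {
    {
        // Parent: perform the initial request on its own channel.
        auto channel = grpc::CreateChannel(
            "localhost:50051", grpc::InsecureChannelCredentials());
        // auto stub = MyService::NewStub(channel);  // hypothetical generated stub
        // ... fetch the object the children will use ...
    }  // channel destroyed here, before any fork

    for (int i = 0; i < 2; ++i) {
        if (fork() == 0) {
            // Child: build a brand-new channel and stub; never reuse the parent's.
            auto channel = grpc::CreateChannel(
                "localhost:50051", grpc::InsecureChannelCredentials());
            // auto stub = MyService::NewStub(channel);  // hypothetical generated stub
            // ... RPCs issued here get replies, since no gRPC state crossed fork()
            _exit(0);
        }
    }
    while (wait(nullptr) > 0) {}  // reap both children
    return 0;
}

The key point is that gRPC's internal threads and file descriptors do not survive fork(): a channel created before the fork can often still write requests from the child, but its completion machinery is gone, which matches the "request sent, reply never received" symptom.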
I am following the ListFeatures() example in this tutorial: https://github.com/grpc/grpc/blob/v1.31.0/examples/cpp/route_guide/route_guide_client.cc
My server is in Java and my client application is in C++.
I have both the server and the client running locally. What I am observing is that my application crashes when I try to read the stream response via `reader->Read(&feature)`. I can verify that the server receives the API call and sends responses back. I am also able to successfully hit the server from BloomRPC.
Any ideas why I can't receive responses in my C++ client application?
Much appreciated!
I had this problem when the context used to create the ClientReader fell out of scope. The context must remain alive for as long as the ClientReader is in use.
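A minimal sketch of the fixed shape, using the generated routeguide types from the linked example (only the scoping is the point here):

#include <grpcpp/grpcpp.h>
#include "route_guide.grpc.pb.h"  // generated from route_guide.proto in the linked example

void ListFeatures(routeguide::RouteGuide::Stub& stub,
                  const routeguide::Rectangle& rect) {
    grpc::ClientContext context;  // must stay alive as long as the reader below
    std::unique_ptr<grpc::ClientReader<routeguide::Feature>> reader(
        stub.ListFeatures(&context, rect));

    routeguide::Feature feature;
    while (reader->Read(&feature)) {
        // ... consume each streamed Feature ...
    }
    grpc::Status status = reader->Finish();  // context is still in scope here
}

The crash pattern is constructing the ClientContext in a narrower scope than the reader, or returning the reader from a function that owns the context: the context gets destroyed while Read() is still using it.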
I have a dynamic peer-to-peer network where nodes communicate with gRPC. Each node has its own server and client. One gRPC method is defined for the login of a new node. I use a synchronous message to announce the login to all the others: I create a new channel to each of the other servers, send a single message, and wait for a response.
rpc enter(LogIn) returns (Response);
If I have one node in my network (node 1) and then two or more nodes enter at the same time, for example node 2 and node 3, they will both call the gRPC method "enter" on node 1's server. With this type of method, what is the behavior of node 1's server? Is it able to handle both requests? In other words, does gRPC queue messages that arrive concurrently, or does it handle only one request at a time?
Thanks
gRPC supports concurrent execution of multiple RPCs. When an RPC (or RPC event) arrives, it is queued on the serverBuilder.executor() specified when building the server. If you don't specify one, a default executor is used that creates as many threads as necessary. Nothing in gRPC behaves differently based on whether the concurrent RPCs are calls to the same method or to different methods.
I'm building a system that has 2 processes.
Process 1
This process is a Node.js program: a web server handling the incoming requests.
Process 2
This process is a C++ program.
Both processes are started automatically at startup with the help of rc.local.
Now, for Process 1 there are some specific requests that should be passed on to Process 2.
For example, if Process 1 receives a POST request at the route /enqueue with a JSON body payload, Process 1 should stringify the JSON and pass it to Process 2.
When Process 2 receives the JSON, it should kill its worker thread and start a new thread with that JSON to perform the actual task. The worker thread should be killed regardless of whether it is still processing a previous JSON.
If both processes were Node.js applications, I could have forked Process 2 from Process 1 and used the following code.
process.on('message', function (message) {
    // implementation
});
...
process.send(data);
But my second process is a C++ app.
Any ideas on how to implement this?
Note: Before flagging this question, please keep in mind that I'm not looking for full code. I just need an idea of how to do it.
You cannot use the Node.js messaging/eventing facility for this purpose, as it is specific to Node.
You will need to use the communication facilities of your operating system, such as Unix-domain, TCP, or UDP sockets, or an eventing system that both processes can talk to, like Redis or ZeroMQ.
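For illustration, here is a minimal sketch of the C++ side (Process 2) reading newline-delimited JSON from a plain TCP socket; the port (5555) and the framing are assumptions, not from the question:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <iostream>
#include <string>

int main() {
    int server_fd = socket(AF_INET, SOCK_STREAM, 0);
    int opt = 1;
    setsockopt(server_fd, SOL_SOCKET, SO_REUSEADDR, &opt, sizeof(opt));

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);  // only reachable locally
    addr.sin_port = htons(5555);                    // assumed port
    bind(server_fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
    listen(server_fd, 1);

    while (true) {
        int client_fd = accept(server_fd, nullptr, nullptr);
        std::string json;
        char buf[4096];
        ssize_t n;
        while ((n = read(client_fd, buf, sizeof(buf))) > 0) {
            json.append(buf, n);
            if (json.back() == '\n') break;  // one newline-delimited message
        }
        std::cout << "received JSON: " << json;
        // ... here: stop the current worker thread and start a new one with `json` ...
        close(client_fd);
    }
}

On the Node side, Process 1 can open a connection with the built-in net module (net.createConnection(5555)) and write the stringified JSON followed by "\n"; ZeroMQ or Redis give you the same shape with reconnection and framing handled for you.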
I am writing a client-server program, and my idea is to keep the server as simple as possible. The server will need to complete tasks such as "move this file", "delete a file", or "run this complex algorithm on the columns in a file and send back the results".
I have created an abstract class called aTodo that will be sent between server and client.
There is one method in this abstract class, DoClass(), that is to be run by the server.
Currently my server listens and waits for a connection. When it receives a connection, it creates an object of type aTodo via unserialization. Then the server runs the DoClass() function on that object, serializes the object again, and sends it back to the client.
Here is the code for reference:
protocolBaseServer pBase(newSockFd);       // wrap the accepted socket
std::unique_ptr<aTodo> DoThis;             // will own the deserialized object
DoThis = protocolCom::Read<aTodo>(pBase);  // read the stream into the object
DoThis->DoClass();                         // run the task requested by the client
protocolCom::Write(DoThis, pBase);         // write the result back to the client
Is this a good way to program a server? It keeps the server VERY simple.
Another way I was thinking of was to create a delegate class that would be serialized and sent back and forth. The delegate would have a DoDelegate method, and the user could put ANY function into it. This would in effect allow the server to run any method in a class, rather than just the single DoClass method the server runs now.
Thanks
It's good to dream...
How would you serialize the generic delegate? That's not something that C++ facilitates...
Also, at this point your client must contain all the logic that the server implements, because you're actually sending server classes via serialization.
I think you need to separate the command objects from the handlers that implement the corresponding actions.
Also, I would suggest that the server queue the received commands and have a separate worker process the commands from the queue.
The common way to approach this is to use a framework for asynchronous socket I/O, such as ACE or Asio.
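To make the command/handler split and the queue concrete, here is a minimal single-file sketch (all names are illustrative, not from the question); the network layer pushes a command as soon as it is unserialized, and a worker drains the queue:

#include <condition_variable>
#include <functional>
#include <iostream>
#include <map>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

struct Command {
    std::string name;     // e.g. "move_file", "delete_file"
    std::string payload;  // serialized arguments
};

int main() {
    // Handlers live on the server; clients ship only data, never code.
    std::map<std::string, std::function<void(const std::string&)>> handlers{
        {"delete_file", [](const std::string& p) { std::cout << "deleting " << p << "\n"; }},
        {"move_file",   [](const std::string& p) { std::cout << "moving "   << p << "\n"; }},
    };

    std::queue<Command> queue;
    std::mutex m;
    std::condition_variable cv;
    bool done = false;

    // A single worker drains the queue, decoupling I/O from execution.
    std::thread worker([&] {
        for (;;) {
            std::unique_lock<std::mutex> lock(m);
            cv.wait(lock, [&] { return done || !queue.empty(); });
            if (queue.empty()) return;  // done and nothing left to do
            Command cmd = std::move(queue.front());
            queue.pop();
            lock.unlock();
            if (auto it = handlers.find(cmd.name); it != handlers.end())
                it->second(cmd.payload);
        }
    });

    // The network layer would push commands here as they are unserialized:
    {
        std::lock_guard<std::mutex> lock(m);
        queue.push({"delete_file", "/tmp/old.dat"});
        queue.push({"move_file", "/tmp/a -> /tmp/b"});
    }
    cv.notify_one();

    { std::lock_guard<std::mutex> lock(m); done = true; }
    cv.notify_one();
    worker.join();
}

With this split the client only sends a command name and its arguments, so the server no longer needs the client's code, and the serialized-delegate idea becomes unnecessary.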
I'm working on a project that exposes a Web API for encrypting files and doing other tasks. I want to make the encryption task asynchronous, because files can be larger than 1 GB and I do not want the client to keep waiting for the file to be encrypted. Once a request for encryption is sent to the API, the client should be notified that the request was accepted, and when the task finishes, a notification about success or failure should be sent to the client again. Meanwhile the client can do anything else.
What are the best practices for this? For context, I am working in ASP.NET MVC.
You need to offload the encryption task to another thread on your server. This frees up (completes) the request-processing thread, and the client can continue with other work. You can wrap the encryption task so that a callback is invoked after successful completion or failure. That callback is responsible for notifying the client.
To notify the client upon completion of the encryption task, you have several options that you can code within your callback:
Email the result to the client.
If the client is a service listening on a specific port, you can accept a callback URL in the initial encryption request and invoke that URL after the encryption task. The assumption is that the client is running an HTTP service.
If there are any other integration points with the client (such as a filesystem, database, or message-oriented middleware), use those to notify it of task completion.
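The thread is about ASP.NET MVC, but the offload-and-callback shape is language-agnostic. Here is a minimal C++ illustration of the same idea, with EncryptFile and NotifyClient as hypothetical stand-ins for the real work and for the callback-URL option above:

#include <chrono>
#include <future>
#include <iostream>
#include <string>
#include <thread>

// Hypothetical stand-in for the real (long-running) encryption job.
bool EncryptFile(const std::string& path) {
    std::this_thread::sleep_for(std::chrono::seconds(1));  // simulate long work
    return true;
}

// Hypothetical stand-in for notifying the client, e.g. POSTing to a callback URL.
void NotifyClient(const std::string& callbackUrl, bool ok) {
    std::cout << "POST " << callbackUrl
              << " result=" << (ok ? "success" : "failure") << "\n";
}

int main() {
    // The request-handling thread: accept the job, start it, return at once.
    auto job = std::async(std::launch::async, [] {
        bool ok = EncryptFile("/data/big-file.bin");
        NotifyClient("http://client.example/encryption-done", ok);  // the callback
    });
    std::cout << "202 Accepted sent; encryption continues in the background\n";
    job.wait();  // only so this demo doesn't exit before the job finishes
}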