gRPC - C++ Async HelloWorld Client Example doesn't do anything asynchronously

I am trying to learn how to use gRPC asynchronously in C++, going over the client example at https://github.com/grpc/grpc/blob/v1.33.1/examples/cpp/helloworld/greeter_async_client.cc
Unless I am misunderstanding, I don't see anything asynchronous being demonstrated: there is one and only one RPC call, and it blocks on the main thread until the server processes it and sends the result back.
What I need to do is create a client that can make one RPC call, and then start another while waiting for the result of the first to come back from the server.
I've got no idea how to go about that.
Does anyone have a working example, or can anyone describe how to actually use gRPC asynchronously?
Their example code:
/*
*
* Copyright 2015 gRPC authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
*/
#include <iostream>
#include <memory>
#include <string>
#include <grpcpp/grpcpp.h>
#include <grpc/support/log.h>
#ifdef BAZEL_BUILD
#include "examples/protos/helloworld.grpc.pb.h"
#else
#include "helloworld.grpc.pb.h"
#endif
using grpc::Channel;
using grpc::ClientAsyncResponseReader;
using grpc::ClientContext;
using grpc::CompletionQueue;
using grpc::Status;
using helloworld::HelloRequest;
using helloworld::HelloReply;
using helloworld::Greeter;
class GreeterClient {
public:
    explicit GreeterClient(std::shared_ptr<Channel> channel)
        : stub_(Greeter::NewStub(channel)) {}

    // Assembles the client's payload, sends it and presents the response back
    // from the server.
    std::string SayHello(const std::string& user) {
        // Data we are sending to the server.
        HelloRequest request;
        request.set_name(user);

        // Container for the data we expect from the server.
        HelloReply reply;

        // Context for the client. It could be used to convey extra information to
        // the server and/or tweak certain RPC behaviors.
        ClientContext context;

        // The producer-consumer queue we use to communicate asynchronously with the
        // gRPC runtime.
        CompletionQueue cq;

        // Storage for the status of the RPC upon completion.
        Status status;

        // stub_->PrepareAsyncSayHello() creates an RPC object, returning
        // an instance to store in "call", but does not actually start the RPC.
        // Because we are using the asynchronous API, we need to hold on to
        // the "call" instance in order to get updates on the ongoing RPC.
        std::unique_ptr<ClientAsyncResponseReader<HelloReply> > rpc(
            stub_->PrepareAsyncSayHello(&context, request, &cq));

        // StartCall initiates the RPC call.
        rpc->StartCall();

        // Request that, upon completion of the RPC, "reply" be updated with the
        // server's response; "status" with the indication of whether the operation
        // was successful. Tag the request with the integer 1.
        rpc->Finish(&reply, &status, (void*)1);
        void* got_tag;
        bool ok = false;
        // Block until the next result is available in the completion queue "cq".
        // The return value of Next should always be checked. This return value
        // tells us whether there is any kind of event or the cq_ is shutting down.
        GPR_ASSERT(cq.Next(&got_tag, &ok));

        // Verify that the result from "cq" corresponds, by its tag, to our previous
        // request.
        GPR_ASSERT(got_tag == (void*)1);
        // ... and that the request was completed successfully. Note that "ok"
        // corresponds solely to the request for updates introduced by Finish().
        GPR_ASSERT(ok);

        // Act upon the status of the actual RPC.
        if (status.ok()) {
            return reply.message();
        } else {
            return "RPC failed";
        }
    }

private:
    // Out of the passed in Channel comes the stub, stored here, our view of the
    // server's exposed services.
    std::unique_ptr<Greeter::Stub> stub_;
};

int main(int argc, char** argv) {
    // Instantiate the client. It requires a channel, out of which the actual RPCs
    // are created. This channel models a connection to an endpoint (in this case,
    // localhost at port 50051). We indicate that the channel isn't authenticated
    // (use of InsecureChannelCredentials()).
    GreeterClient greeter(grpc::CreateChannel(
        "localhost:50051", grpc::InsecureChannelCredentials()));
    std::string user("world");
    std::string reply = greeter.SayHello(user);  // The actual RPC call!
    std::cout << "Greeter received: " << reply << std::endl;
    return 0;
}

You are right, this is a really bad example; it blocks and isn't async at all.
Better to look at this example: grpc/greeter_async_client2.
Here you can see in main that the RPC messages are sent in a loop in an async, non-blocking way:
Client Async send function:
void SayHello(const std::string& user) {
    // Data we are sending to the server.
    HelloRequest request;
    request.set_name(user);

    // Call object to store rpc data
    AsyncClientCall* call = new AsyncClientCall;

    call->response_reader =
        stub_->PrepareAsyncSayHello(&call->context, request, &cq_);

    // StartCall initiates the RPC call
    call->response_reader->StartCall();

    call->response_reader->Finish(&call->reply, &call->status, (void*)call);
}
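For reference, the AsyncClientCall struct that stores the per-RPC state (defined in the same greeter_async_client2.cc example) looks roughly like this:

struct AsyncClientCall {
    // Container for the data we expect from the server.
    HelloReply reply;

    // Context for the client. It could be used to convey extra information to
    // the server and/or tweak certain RPC behaviors.
    ClientContext context;

    // Storage for the status of the RPC upon completion.
    Status status;

    std::unique_ptr<ClientAsyncResponseReader<HelloReply>> response_reader;
};

Because each call owns its own context, reply, and status, many RPCs can be in flight at once without sharing any state.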
Client Async receive function:
// Loop while listening for completed responses.
// Prints out the response from the server.
void AsyncCompleteRpc() {
    void* got_tag;
    bool ok = false;

    // Block until the next result is available in the completion queue "cq".
    while (cq_.Next(&got_tag, &ok)) {
        // The tag in this example is the memory location of the call object
        AsyncClientCall* call = static_cast<AsyncClientCall*>(got_tag);

        if (call->status.ok())
            std::cout << "Greeter received: " << call->reply.message() << std::endl;
        else
            std::cout << "RPC failed" << std::endl;

        // Once we're complete, deallocate the call object.
        delete call;
    }
}
Main function:
int main(int argc, char** argv) {
    GreeterClient greeter(grpc::CreateChannel(
        "localhost:50051", grpc::InsecureChannelCredentials()));

    // Spawn reader thread that loops indefinitely
    std::thread thread_ = std::thread(&GreeterClient::AsyncCompleteRpc, &greeter);

    for (int i = 0; i < 100; i++) {
        std::string user("world " + std::to_string(i));
        greeter.SayHello(user);  // The actual RPC call!
    }

    std::cout << "Press control-c to quit" << std::endl << std::endl;
    thread_.join();  // blocks forever
    return 0;
}
Addition:
As @nmgeek noted, there is a potential memory leak in this solution; please see memory-leak-in-grpc-async-client.
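If you also want the program above to exit cleanly instead of blocking forever in thread_.join(), one option is to shut the completion queue down once the send loop is done. This is my sketch, not part of the example; it assumes you add a small Shutdown() method to GreeterClient:

// Hypothetical addition to GreeterClient: after cq_.Shutdown(), cq_.Next()
// still delivers the events that were already requested, then returns false,
// so the while loop in AsyncCompleteRpc() exits.
void Shutdown() { cq_.Shutdown(); }

// In main(), after the for loop that sends the 100 RPCs:
greeter.Shutdown();
thread_.join();  // returns once the queue is drained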

Related

How to use grpc c++ ClientAsyncReader<Message> for server side streams

I am using a very simple proto where the Message contains only 1 string field. Like so:
service LongLivedConnection {
    // Starts a grpc connection
    rpc Connect(Connection) returns (stream Message) {}
}

message Connection {
    string userId = 1;
}

message Message {
    string serverMessage = 1;
}
The use case is that the client should connect to the server, and the server will use this gRPC stream to push messages.
Now, for the client code, assuming that I am already in a worker thread, how do I properly set it up so that I can continuously receive messages that arrive from the server at random times?
void StartConnection(const std::string& user) {
    Connection request;
    request.set_userid(user);

    Message message;
    ClientContext context;

    stub_->Connect(&context, request, &message);
    // What should I do from now on?
    // notify(serverMessage);
}

void notify(std::string message) {
    // generate message events and pass to main event loop
}
I figured out how to use the API. It looks pretty flexible, but still a little weird given that I would typically expect an async API to take some kind of lambda callback.
The code below is blocking; you'll have to run it in a different thread so it doesn't block your application.
I believe you can have multiple threads accessing the CompletionQueue, but in my case I just had one single thread handling this gRPC connection.
GrpcConnection.h file:
class GrpcConnectionService {
public:
    void StartGrpcConnection();
private:
    std::shared_ptr<grpc::Channel> m_channel;
    std::unique_ptr<grpc::ClientReader<push_notifications::Message>> m_reader;
    std::unique_ptr<push_notifications::PushNotificationService::Stub> m_stub;
};
GrpcConnection.cpp file:
...
void GrpcConnectionService::StartGrpcConnection()
{
    m_channel = grpc::CreateChannel("localhost:50051", grpc::InsecureChannelCredentials());
    LongLiveConnection::Connect request;
    request.set_user_id(12345);
    m_stub = LongLiveConnection::LongLiveConnectionService::NewStub(m_channel);
    grpc::ClientContext context;
    grpc::CompletionQueue cq;
    std::unique_ptr<grpc::ClientAsyncReader<LongLiveConnection::Message>> reader =
        m_stub->PrepareAsyncConnect(&context, request, &cq);
    void* got_tag;
    bool ok = false;
    LongLiveConnection::Message reply;

    reader->StartCall((void*)1);
    cq.Next(&got_tag, &ok);
    if (ok && got_tag == (void*)1)
    {
        // StartCall() succeeded if ok is true and got_tag is (void*)1.
        // Start the first read with a different hardcoded tag.
        reader->Read(&reply, (void*)2);
        while (true)
        {
            ok = false;
            cq.Next(&got_tag, &ok);
            if (got_tag == (void*)2)
            {
                // This is a message from the server.
                std::string body = reply.server_message();
                // Do whatever you want with body; in my case I push it to my
                // application's event stream to be processed by other components.
                // Lastly, initiate another read.
                reader->Read(&reply, (void*)2);
            }
            else if (got_tag == (void*)3)
            {
                // If you do something else in your call, such as listening to
                // gRPC channel state changes, you can pass a different hardcoded
                // tag; then, in here, you will be notified when the result of
                // that call is received.
            }
        }
    }
}
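One thing this sketch never does (my addition, an assumption about how you would finish it): when the server closes the stream, cq.Next() hands back the read tag with ok == false. At that point you should stop issuing reads, break out of the loop, and collect the final status via Finish():

// Sketch: run this after the read loop exits because ok == false.
grpc::Status status;
reader->Finish(&status, (void*)4);  // tag 4 (arbitrary): final stream status
cq.Next(&got_tag, &ok);
if (got_tag == (void*)4 && status.ok())
{
    // the server ended the stream cleanly
}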

Is this a valid way to multi thread RPC handlers in gRPC async server in C++?

I'm trying to work on the basic async gRPC C++ code as mentioned in https://github.com/grpc/grpc/tree/master/examples/cpp/helloworld
The aim is to experiment with multi-threading of the RPC handlers. Can someone let me know whether the logic below is correct?
A few questions:
Is the completion queue thread-safe?
I see that there is a possibility of one instance of CallData being accessed in multiple threads (returned as part of the tag). Is CallData thread-safe here, or do we need a mutex for it?
Please note that this is an incomplete program.
class ServerImpl final {
public:
    ~ServerImpl() {
        server_->Shutdown();
        // Always shutdown the completion queue after the server.
        cq_->Shutdown();
    }

    // There is no shutdown handling in this code.
    void Run() {
        std::string server_address("0.0.0.0:50051");

        ServerBuilder builder;
        // Listen on the given address without any authentication mechanism.
        builder.AddListeningPort(server_address, grpc::InsecureServerCredentials());
        // Register "service_" as the instance through which we'll communicate with
        // clients. In this case it corresponds to an *asynchronous* service.
        builder.RegisterService(&service_);
        // Get hold of the completion queue used for the asynchronous communication
        // with the gRPC runtime.
        cq_ = builder.AddCompletionQueue();
        // Finally assemble the server.
        server_ = builder.BuildAndStart();
        std::cout << "Server listening on " << server_address << std::endl;

        // Proceed to the server's main loop.
        std::thread thread1(HandleRpcsHelper, 1, this);
        sleep(1);
        std::thread thread2(HandleRpcsHelper, 2, this);
        // HandleRpcs();
        thread1.join();
        thread2.join();
    }

private:
    // Class encompassing the state and logic needed to serve a request.
    class CallData {
    public:
        // Take in the "service" instance (in this case representing an asynchronous
        // server) and the completion queue "cq" used for asynchronous communication
        // with the gRPC runtime.
        CallData(Greeter::AsyncService* service, ServerCompletionQueue* cq)
            : service_(service), cq_(cq), responder_(&ctx_), status_(CREATE) {
            // Invoke the serving logic right away.
            Proceed();
        }

        void Proceed() {
            if (status_ == CREATE) {
                std::cout << "CREATE " << this << std::endl;
                // Make this instance progress to the PROCESS state.
                status_ = PROCESS;

                // As part of the initial CREATE state, we *request* that the system
                // start processing SayHello requests. In this request, "this" acts as
                // the tag uniquely identifying the request (so that different CallData
                // instances can serve different requests concurrently), in this case
                // the memory address of this CallData instance.
                service_->RequestSayHello(&ctx_, &request_, &responder_, cq_, cq_,
                                          this);
            } else if (status_ == PROCESS) {
                // Spawn a new CallData instance to serve new clients while we process
                // the one for this CallData. The instance will deallocate itself as
                // part of its FINISH state.
                std::cout << "PROCESS " << this << std::endl;
                new CallData(service_, cq_);

                // The actual processing.
                std::string prefix("Hello ");
                reply_.set_message(prefix + request_.name());

                // And we are done! Let the gRPC runtime know we've finished, using the
                // memory address of this instance as the uniquely identifying tag for
                // the event.
                status_ = FINISH;
                std::cout << "SETTING TO FINISH " << this << std::endl;
                responder_.Finish(reply_, Status::OK, this);
            } else {
                std::cout << "FINISH " << this << std::endl;
                GPR_ASSERT(status_ == FINISH);
                // Once in the FINISH state, deallocate ourselves (CallData).
                delete this;
            }
        }

    private:
        // The means of communication with the gRPC runtime for an asynchronous
        // server.
        Greeter::AsyncService* service_;
        // The producer-consumer queue for asynchronous server notifications.
        ServerCompletionQueue* cq_;
        // Context for the rpc, allowing to tweak aspects of it such as the use
        // of compression, authentication, as well as to send metadata back to the
        // client.
        ServerContext ctx_;

        // What we get from the client.
        HelloRequest request_;
        // What we send back to the client.
        HelloReply reply_;

        // The means to get back to the client.
        ServerAsyncResponseWriter<HelloReply> responder_;

        // Let's implement a tiny state machine with the following states.
        enum CallStatus { CREATE, PROCESS, FINISH };
        CallStatus status_;  // The current serving state.
    };

    static void HandleRpcsHelper(int id, ServerImpl* server)
    {
        std::cout << "ID:" << id << std::endl;
        server->HandleRpcs(id);
    }

    // This can be run in multiple threads if needed.
    void HandleRpcs(int id) {
        // Spawn a new CallData instance to serve new clients.
        new CallData(&service_, cq_.get());
        void* tag;  // uniquely identifies a request.
        bool ok;
        while (true) {
            // Block waiting to read the next event from the completion queue. The
            // event is uniquely identified by its tag, which in this case is the
            // memory address of a CallData instance.
            // The return value of Next should always be checked. This return value
            // tells us whether there is any kind of event or cq_ is shutting down.
            std::cout << "waiting... " << id << std::endl;
            GPR_ASSERT(cq_->Next(&tag, &ok));
            GPR_ASSERT(ok);
            std::cout << "wakeup... " << id << std::endl;
            static_cast<CallData*>(tag)->Proceed();
        }
    }

    std::unique_ptr<ServerCompletionQueue> cq_;
    Greeter::AsyncService service_;
    std::unique_ptr<Server> server_;
};

int main(int argc, char** argv) {
    ServerImpl server;
    server.Run();
    return 0;
}
First of all, you may want to read gRPC Performance Best Practices for this topic.
Is the completion queue thread-safe?
Yes
I see that there is a possibility of one instance of CallData being accessed in multiple threads (returned as part of tag). Is CallData thread-safe here or do we need to have a mutex for the same?
No mutex is needed, because your code creates a new CallData instance for every RPC, so a given instance is only handed back by one completion-queue event at a time. As long as you don't share state among multiple CallData instances (e.g., via static member variables), you don't need to worry about synchronization.
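As a side note (my addition, not part of the answer above): the same best-practices document generally recommends one completion queue per polling thread rather than two threads sharing one queue, since sharing makes the threads contend on Next(). A rough sketch of that variation, reusing the names from ServerImpl and assuming HandleRpcs(i) is changed to seed and poll cqs_[i]:

// Assumed member: one queue per handler thread instead of a single cq_.
std::vector<std::unique_ptr<ServerCompletionQueue>> cqs_;

// In Run(), before BuildAndStart(), create one queue per thread:
cqs_.push_back(builder.AddCompletionQueue());
cqs_.push_back(builder.AddCompletionQueue());
server_ = builder.BuildAndStart();

// Each thread seeds its own CallData on its own queue and polls only it:
std::thread thread1(HandleRpcsHelper, 0, this);  // polls cqs_[0]
std::thread thread2(HandleRpcsHelper, 1, this);  // polls cqs_[1]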

Close Boost Websocket from Server side, C++, tcp::acceptor accept() timeout?

UPDATE:
Well, it appears that I need to address my issue with an asynchronous implementation. I will update my posting with a new direction once I've completed testing.
Original:
I'm currently writing a multiserver application that will collect, share, and request information from multiple machines. In some cases, Machine A will request information from Machine B but will need to send it to Machine C, which will reply to A. Without getting too deep into what the application is going to do, I need some help with my client application.
I have my client application designed with two threads. I used this example from boost as the basis for my design.
Thread one will open a client websocket to Machine-A and stream a series of data points and commands. Here is a stripped-down version of my code:
#include "Poco/Clock.h"
#include "Poco/Task.h"
#include "Poco/Thread.h"
#include <boost/asio.hpp>
#include <boost/beast.hpp>
#include <jsoncons/json.hpp>
namespace beast = boost::beast; // from <boost/beast.hpp>
namespace http = beast::http; // from <boost/beast/http.hpp>
namespace websocket = beast::websocket; // from <boost/beast/websocket.hpp>
namespace net = boost::asio; // from <boost/asio.hpp>
using tcp = net::ip::tcp; // from <boost/asio/ip/tcp.hpp>
class ResponseChannel : public Poco::Runnable {
void do_session(tcp::socket socket)
{
try {
websocket::stream<tcp::socket> ws{std::move(socket)};
ws.set_option(websocket::stream_base::decorator(
[](websocket::response_type& res) {
res.set(http::field::server,
std::string(BOOST_BEAST_VERSION_STRING) +
" websocket-server-sync");
}));
ws.accept();
for (;;) {
beast::flat_buffer buffer;
ws.read(buffer);
if (ws.got_binary()) {
// do something
}
}
} catch (beast::system_error const& se) {
if (se.code() != websocket::error::closed) {
std::cerr << "do_session1 ->: " << se.code().message()
<< std::endl;
return;
}
} catch (std::exception const& e) {
std::cerr << "do_session2 ->: " << e.what() << std::endl;
return;
}
}
virtual void run()
{
auto const address = net::ip::make_address(host);
auto const port = static_cast<unsigned short>(respPort);
try {
net::io_context ioc{1};
tcp::acceptor acceptor{ioc, {address, port}};
tcp::socket socket{ioc};
for (; keep_running;) {
acceptor.accept(socket);
std::thread(&ResponseChannel::do_session, this,
std::move(socket))
.detach();
}
} catch (const std::exception& e) {
std::cout << "run: " << e.what() << std::endl;
}
}
void _terminate() { keep_running = false; }
public:
std::string host;
int respPort;
bool keep_running = true;
int responseCount = 0;
std::vector<long long int> latency_times;
long long int time_sum;
Poco::Clock* responseClock;
};
int main()
{
using namespace std::chrono_literals;
Poco::Clock clock = Poco::Clock();
Poco::Thread response_thread;
ResponseChannel response_channel;
response_channel.responseClock = &clock;
response_channel.host = "0.0.0.0";
response_channel.respPort = 8080;
response_thread.start(response_channel);
response_thread.setPriority(Poco::Thread::Priority::PRIO_HIGH);
// doing some work here. work will vary depending on command-line arguments
std::this_thread::sleep_for(30s);
response_channel.keep_running = false;
response_thread.join();
}
My multi-machine design works as expected with regard to sending commands to Machine-B and receiving results from Machine-C.
The issue I'm facing is closing out Thread 2, which contains my local response channel.
I went back and forth between Poco::Thread and Poco::Task, but I decided that I do not want to use Task, as it would be a mistake to be able to close the 2nd thread/task from the main thread. I need to know that all packets have been received before closing down the 2nd thread.
So I need to shut things down only once I have received a websocket::error::closed flag from Machine-C. Shutting down the detached websocket thread is no issue, as the flag's arrival takes care of that for me.
However, as part of the loop process for reconnecting after a closed socket, the thread just waits for a new connection.
acceptor.accept(socket);
It's blocking, and from the documentation there doesn't seem to be a timeout feature. I see that there is a close option, but my attempt to use close simply threw an exception, which ultimately added complexity I didn't want.
Ultimately, I want the server to continuously loop through a series of connections from both Machine-B and Machine-C, shutting down only after my client application has ended. The last thing I do before waiting for the Poco::Thread to complete is to set the flag that I no longer want the websocket server to run.
I've put that flag check before the blocking accept() call. But this would only work with perfect timing: the flag goes up, then a new connection is opened and closed before the loop gets back to waiting for another connection.
Ideally, there would be a timeout so that the loop would come around, first checking whether accept() timed out, allowing a periodic check of whether I want the thread to remain open.
Has anyone ever run into this?
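(Not an answer from the thread, but a workaround worth sketching: use the error_code overload of accept() and close the acceptor from the controlling thread, which makes the blocked accept() return with an error instead of waiting forever. Strictly speaking, closing a socket another thread is blocked on is platform-dependent behavior, though it is a widely used pattern. acceptor_ here is an assumed new member so _terminate() can reach it:)

// Sketch: an interruptible accept loop.
void _terminate()
{
    keep_running = false;
    if (acceptor_) acceptor_->close();  // unblocks a pending accept()
}

// in run():
for (; keep_running;) {
    boost::system::error_code ec;
    acceptor.accept(socket, ec);
    if (ec) break;  // acceptor was closed (or failed): leave the loop
    std::thread(&ResponseChannel::do_session, this, std::move(socket))
        .detach();
}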

How can I read and write from a grpc stream simultaneously

I am now implementing the Raft algorithm, and I want to use gRPC streams to do this. My main idea is to create 3 streams from each node to every other peer; each stream will transmit one type of RPC: AppendEntries, RequestVote and InstallSnapshot. I wrote some code with limited help from route_guide, because in its bidirectional-stream demo RouteChat, the client sends all its data before it starts to read.
Firstly, I want to write to a stream at any time, so I wrote the following code:
void RaftMessagesStreamClientSync::AsyncRequestVote(const RequestVoteRequest& request) {
    std::string peer_name = this->peer_name;
    debug("GRPC: Send RequestVoteRequest from %s to %s\n", request.name().c_str(), peer_name.c_str());
    request_vote_stream->Write(request);
}
Meanwhile, I want a thread to keep reading from a stream, like the following code, which is called immediately after RaftMessagesStreamClientSync is constructed.
void RaftMessagesStreamClientSync::handle_response() {
    // strongThis is a must
    auto strongThis = shared_from_this();
    t1 = new std::thread([strongThis]() {
        RequestVoteResponse response;
        while (strongThis->request_vote_stream->Read(&response)) {
            debug("GRPC: Recv RequestVoteResponse from %s, me %s\n", response.name().c_str(), strongThis->raft_node->name.c_str());
            ...
        }
    });
    ...
In order to initialize the 3 streams, I have to write the constructor like this. I use 3 ClientContexts here because the documentation says one ClientContext for one RPC:
struct RaftMessagesStreamClientSync : std::enable_shared_from_this<RaftMessagesStreamClientSync> {
    typedef grpc::ClientReaderWriter<RequestVoteRequest, RequestVoteResponse> CR;
    typedef grpc::ClientReaderWriter<AppendEntriesRequest, AppendEntriesResponse> CA;
    typedef grpc::ClientReaderWriter<InstallSnapshotRequest, InstallSnapshotResponse> CI;

    std::unique_ptr<CR> request_vote_stream;
    std::unique_ptr<CA> append_entries_stream;
    std::unique_ptr<CI> install_snapshot_stream;

    ClientContext context_r;
    ClientContext context_a;
    ClientContext context_i;

    std::thread* t1 = nullptr;
    std::thread* t2 = nullptr;
    std::thread* t3 = nullptr;
    ...
};

RaftMessagesStreamClientSync::RaftMessagesStreamClientSync(const char* addr, struct RaftNode* _raft_node) : raft_node(_raft_node), peer_name(addr) {
    std::shared_ptr<Channel> channel = grpc::CreateChannel(addr, grpc::InsecureChannelCredentials());
    stub = raft_messages::RaftStreamMessages::NewStub(channel);
    // 1
    request_vote_stream = stub->RequestVote(&context_r);
    // 2
    append_entries_stream = stub->AppendEntries(&context_a);
    // 3
    install_snapshot_stream = stub->InstallSnapshot(&context_i);
}

~RaftMessagesStreamClientSync() {
    raft_node = nullptr;
    t1->join();
    t2->join();
    t3->join();
    delete t1;
    delete t2;
    delete t3;
}
Then I implement the server side:
Status RaftMessagesStreamServiceImpl::RequestVote(ServerContext* context, ::grpc::ServerReaderWriter<::raft_messages::RequestVoteResponse, RequestVoteRequest>* stream) {
    RequestVoteResponse response;
    RequestVoteRequest request;
    while (stream->Read(&request)) {
        ...
    }
    return Status::OK;
}
Then 2 problems happen:
When I test with 3 nodes, which actually creates 2 RaftMessagesStreamServiceImpl instances for each node, the statements from 1 to 3 take a long time to execute.
No RPCs are received on the server side.
There are similar problems in Bidi Async Server; however, I can't figure out how that post can help me.
UPDATE
After some debugging, I found that request_vote_stream->Write(request) returns 0, which, according to the documentation, means the stream is closed. But why is it closed?
After more debugging, I found that the two problems both stem from one root cause: I create the client before I create the server.
Because I originally used unary RPC calls, a call made before the server existed only produced gRPC error code 14 (UNAVAILABLE). The program continued, because every call sent after the server was created could be handled correctly.
However, when it comes to streaming calls, stub->RequestVote(&context_r) ends up calling a blocking function, the ClientReaderWriter constructor, which tries to connect to the server, which does not exist yet.
/// Block to create a stream and write the initial metadata and \a request
/// out. Note that \a context will be used to fill in custom initial metadata
/// used to send to the server when starting the call.
ClientReaderWriter(::grpc::ChannelInterface* channel,
                   const ::grpc::internal::RpcMethod& method,
                   ClientContext* context)
    : context_(context),
      cq_(grpc_completion_queue_attributes{
          GRPC_CQ_CURRENT_VERSION, GRPC_CQ_PLUCK,
          GRPC_CQ_DEFAULT_POLLING}),  // Pluckable cq
      call_(channel->CreateCall(method, context, &cq_)) {
  if (!context_->initial_metadata_corked_) {
    ::grpc::internal::CallOpSet<::grpc::internal::CallOpSendInitialMetadata>
        ops;
    ops.SendInitialMetadata(context->send_initial_metadata_,
                            context->initial_metadata_flags());
    call_.PerformOps(&ops);
    cq_.Pluck(&ops);
  }
}
As a consequence, the connection has not yet been established.
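(My addition, not from the original post: one way to sidestep this ordering problem is to wait for the channel to become ready before constructing the streams, e.g. with Channel::WaitForConnected:)

// Sketch: block until the channel connects, or give up after 5 seconds,
// before constructing the three ClientReaderWriter streams.
std::shared_ptr<Channel> channel =
    grpc::CreateChannel(addr, grpc::InsecureChannelCredentials());
if (!channel->WaitForConnected(std::chrono::system_clock::now() +
                               std::chrono::seconds(5))) {
    // The server is not up yet: retry or report an error instead of letting
    // the stream constructors block.
}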

How do I make an in-place modification on an array using grpc and google protocol buffers?

I'm having a problem with a const request when using Google protocol buffers with gRPC. Here is my problem:
I would like to make an in-place modification of an array's values. For that I wrote this simple example where I try to pass an array and sum all of its content. Here's my code:
adder.proto:
syntax = "proto3";
option java_package = "io.grpc.examples";
package adder;
// The greeter service definition.
service Adder {
// Sends a greeting
rpc Add (AdderRequest) returns (AdderReply) {}
}
// The request message containing the user's name.
message AdderRequest {
repeated int32 values = 1;
}
// The response message containing the greetings
message AdderReply {
int32 sum = 1;
}
server.cc:
//
// Created by Eric Reis on 7/6/16.
//
#include <iostream>
#include <grpc++/grpc++.h>

#include "adder.grpc.pb.h"

class AdderImpl final : public adder::Adder::Service
{
public:
    grpc::Status Add(grpc::ServerContext* context, const adder::AdderRequest* request,
                     adder::AdderReply* reply) override
    {
        int sum = 0;
        for (int i = 0, sz = request->values_size(); i < sz; i++)
        {
            request->set_values(i, 10); // -> this gives an error caused by the const declaration of the request variable
                                        //    error: "Non-const function 'set_values' is called on the const object"
            sum += request->values(i);  // -> this works fine
        }
        reply->set_sum(sum);
        return grpc::Status::OK;
    }
};

void RunServer()
{
    std::string server_address("0.0.0.0:50051");
    AdderImpl service;

    grpc::ServerBuilder builder;
    // Listen on the given address without any authentication mechanism.
    builder.AddListeningPort(server_address, grpc::InsecureServerCredentials());
    // Register "service" as the instance through which we'll communicate with
    // clients. In this case it corresponds to a *synchronous* service.
    builder.RegisterService(&service);
    // Finally assemble the server.
    std::unique_ptr<grpc::Server> server(builder.BuildAndStart());
    std::cout << "Server listening on " << server_address << std::endl;

    // Wait for the server to shutdown. Note that some other thread must be
    // responsible for shutting down the server for this call to ever return.
    server->Wait();
}

int main(int argc, char** argv)
{
    RunServer();
    return 0;
}
client.cc:
//
// Created by Eric Reis on 7/6/16.
//
#include <iostream>
#include <grpc++/grpc++.h>

#include "adder.grpc.pb.h"

class AdderClient
{
public:
    AdderClient(std::shared_ptr<grpc::Channel> channel) :
        stub_(adder::Adder::NewStub(channel)) {}

    int Add(int* values, int sz) {
        // Data we are sending to the server.
        adder::AdderRequest request;
        for (int i = 0; i < sz; i++)
        {
            request.add_values(values[i]);
        }

        // Container for the data we expect from the server.
        adder::AdderReply reply;

        // Context for the client. It could be used to convey extra information to
        // the server and/or tweak certain RPC behaviors.
        grpc::ClientContext context;

        // The actual RPC.
        grpc::Status status = stub_->Add(&context, request, &reply);

        // Act upon its status.
        if (status.ok())
        {
            return reply.sum();
        }
        else
        {
            std::cout << "RPC failed" << std::endl;
            return -1;
        }
    }

private:
    std::unique_ptr<adder::Adder::Stub> stub_;
};

int main(int argc, char** argv) {
    // Instantiate the client. It requires a channel, out of which the actual RPCs
    // are created. This channel models a connection to an endpoint (in this case,
    // localhost at port 50051). We indicate that the channel isn't authenticated
    // (use of InsecureChannelCredentials()).
    AdderClient adder(grpc::CreateChannel("localhost:50051",
                                          grpc::InsecureChannelCredentials()));
    int values[] = {1, 2};
    int sum = adder.Add(values, 2);
    std::cout << "Adder received: " << sum << std::endl;
    return 0;
}
My error happens when I try to call the set_values() method on the request object, which is declared const. I understand why this error occurs, but I can't figure out a way to overcome it without making a copy of the array.
I tried to remove the const qualifier, but the RPC call fails when I do that.
Since I'm new to this RPC world, and even newer to gRPC and Google protocol buffers, I'd like to ask for your help. What is the best way to solve this problem?
Please see my answer here. The server receives a copy of the AdderRequest sent by the client. If you were to modify it, the client's original AdderRequest would not be modified. If by "in place" you mean the server modifies the client's original memory, no RPC technology can truly accomplish that, because the client and server run in separate address spaces (processes), even on different machines.
If you truly need the server to modify the client's memory:
Ensure the server and client run on the same machine.
Use OS-specific shared-memory APIs such as shm_open() and mmap() to map the same chunk of physical memory into the address spaces of both the client and the server.
Use RPC to transmit the identifier (name) of the shared memory (not the actual data in the memory) and to invoke the server's processing.
When both client and server have opened and mapped the memory, they both have pointers (likely with different values in the different address spaces) to the same physical memory, so the server will be able to read what the client writes there (with no copying or transmitting) and vice versa.
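A minimal sketch of that handshake (POSIX shared memory; the segment name, size, and layout are my assumptions, not part of the answer):

#include <fcntl.h>     // shm_open, O_* flags
#include <sys/mman.h>  // mmap, PROT_*, MAP_SHARED
#include <unistd.h>    // ftruncate
#include <cstdint>

// Client side: create and map a named shared-memory region, then send its
// name (the string "/adder_values") to the server over the RPC instead of
// the values themselves.
int fd = shm_open("/adder_values", O_CREAT | O_RDWR, 0600);
ftruncate(fd, sizeof(int32_t) * 2);
int32_t* values = static_cast<int32_t*>(
    mmap(nullptr, sizeof(int32_t) * 2,
         PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));
values[0] = 1;
values[1] = 2;
// The server would then shm_open()/mmap() the same name and modify
// values[0] and values[1] in place, visible to the client with no copying.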