I'm trying to set up a gRPC async server which receives a stream from the client (ServerAsyncReader usage). I'm a bit confused by the process and I'm stuck on an assertion error.
Here is my GreeterAsyncServer.
The failing assertion is GPR_ASSERT(ok);: ok turns out to be false when the client sends colors (strings).
void GreeterAsyncServer::HandleRpcs() {
  // Spawn a new CallData instance to serve new clients.
  new CallDataSendColors(&service_, cq_.get(), this);
  void* tag;  // uniquely identifies a request.
  bool ok;
  while (true) {
    // Block waiting to read the next event from the completion queue. The
    // event is uniquely identified by its tag, which in this case is the
    // memory address of a CallData instance.
    // The return value of Next should always be checked. This return value
    // tells us whether there is any kind of event or cq_ is shutting down.
    GPR_ASSERT(cq_->Next(&tag, &ok));
    GPR_ASSERT(ok);
    static_cast<CallData*>(tag)->Proceed();
  }
}
void GreeterAsyncServer::Run(std::string server_address) {
  ServerBuilder builder;
  // Listen on the given address without any authentication mechanism.
  builder.AddListeningPort(server_address, grpc::InsecureServerCredentials());
  // Register "service" as the instance through which we'll communicate with
  // clients. In this case it corresponds to an *asynchronous* service.
  builder.RegisterService(&service_);
  // Get hold of the completion queue used for the asynchronous communication
  // with the gRPC runtime.
  cq_ = builder.AddCompletionQueue();
  // Finally assemble the server.
  server_ = builder.BuildAndStart();
  std::cout << "Server listening on " << server_address << std::endl;
  // Proceed to the server's main loop.
  HandleRpcs();
}
Here is my .proto file:
// .proto file

// The greeting service definition.
service Greeter {
  // Sends several colors to the server and receives a single reply
  rpc SendColors (stream ColorRequest) returns (HelloReply) {}
}

// The response message containing the greetings
message HelloReply {
  string message = 1;
}

// The request message containing the color
message ColorRequest {
  string color = 1;
}
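For reference, the async service that protoc generates from this proto exposes a request method roughly like the following (reproduced approximately; the exact declaration lives in the generated helloworld.grpc.pb.h). This is what the CallData class below calls:

// Approximate shape of the generated async entry point for SendColors.
class AsyncService {
 public:
  void RequestSendColors(
      ::grpc::ServerContext* context,
      ::grpc::ServerAsyncReader< ::helloworld::HelloReply,
                                 ::helloworld::ColorRequest>* reader,
      ::grpc::CompletionQueue* new_call_cq,
      ::grpc::ServerCompletionQueue* notification_cq,
      void* tag);
};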
Here is my CallData class:
#pragma once
#include "CallDataT.h"

using grpc::ServerAsyncReader;
using helloworld::HelloReply;
using helloworld::ColorRequest;

// NOTE: the override specifiers and the observer_ member below imply this
// class derives from a base defined in CallDataT.h; that base-class
// declaration is elided in this snippet.
class CallDataSendColors {
protected:
  enum CallStatus { CREATE, PROCESS, FINISH };
  CallStatus status_;

  Greeter::AsyncService* service_;
  ServerCompletionQueue* completionQueue_;
  ServerContext serverContext_;

  // What we get from the client.
  ColorRequest request_;
  // What we send back to the client.
  HelloReply reply_;

  // The means to get back to the client.
  ServerAsyncReader<HelloReply, ColorRequest> reader_;

  virtual void AddNextToCompletionQueue() override {
    new CallDataSendColors(service_, completionQueue_, observer_);
  }

  virtual void WaitForRequest() override {
    service_->RequestSendColors(&serverContext_, &reader_, completionQueue_, completionQueue_, this);
  }

  virtual void HandleRequest() override {
    reader_.Read(&request_, this);
  }

  virtual void Proceed() override {
    if (status_ == CREATE) {
      status_ = PROCESS;
      WaitForRequest();
    }
    else if (status_ == PROCESS) {
      AddNextToCompletionQueue();
      HandleRequest();
      status_ = FINISH;
      reply_.set_message(std::string("Color ") + std::string("received"));
      reader_.Finish(reply_, grpc::Status::OK, this);
    }
    else {
      // We're done! Self-destruct!
      if (status_ != FINISH) {
        // Log some error message
      }
      delete this;
    }
  }

public:
  CallDataSendColors(Greeter::AsyncService* service, ServerCompletionQueue* completionQueue, IObserver* observer = nullptr) :
    status_(CREATE),
    service_(service),
    completionQueue_(completionQueue),
    reader_(&serverContext_) {
    Proceed();
  }
};
As you can see in the HandleRequest() method, reader_.Read(&request_, this) is called only once. I don't know how to keep calling it for as long as messages are coming in.
I would appreciate it if someone could help me.
Thank you in advance.
I've found the solution. The method GreeterAsyncServer::HandleRpcs() must be edited like this:
while (true) {
  bool gotEvent = cq_->Next(&tag, &ok);
  if (false == gotEvent || true == isShuttingDown_) {
    break;
  }
  //GPR_ASSERT(ok);
  if (true == ok) {
    static_cast<CallData*>(tag)->Proceed();
  }
}
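This stops the assertion from firing (ok legitimately comes back false once the client half-closes the stream), but it doesn't by itself keep Read() going. Below is a minimal sketch of a Proceed() that drives repeated reads; the extra READ state and the forwarded ok flag are my additions for illustration, not part of the original class:

// Sketch: keep Read() armed until ok comes back false. Assumes the enum
// becomes: enum CallStatus { CREATE, PROCESS, READ, FINISH };
void Proceed(bool ok) {
  switch (status_) {
    case CREATE:
      status_ = PROCESS;
      WaitForRequest();               // RequestSendColors(..., this)
      break;
    case PROCESS:
      AddNextToCompletionQueue();     // serve the next client
      status_ = READ;
      reader_.Read(&request_, this);  // arm the first read
      break;
    case READ:
      if (ok) {
        // A message arrived in request_; consume it, then re-arm the read.
        reader_.Read(&request_, this);
      } else {
        // ok == false on a read means the client has finished streaming.
        status_ = FINISH;
        reply_.set_message("Colors received");
        reader_.Finish(reply_, grpc::Status::OK, this);
      }
      break;
    case FINISH:
      delete this;
      break;
  }
}

The HandleRpcs() loop then has to forward the flag, e.g. static_cast<CallData*>(tag)->Proceed(ok);.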
I am using a very simple proto where the Message contains only 1 string field. Like so:
service LongLivedConnection {
  // Starts a grpc connection
  rpc Connect(Connection) returns (stream Message) {}
}

message Connection {
  string userId = 1;
}

message Message {
  string serverMessage = 1;
}
The use case is that the client should connect to the server, and the server will use this gRPC stream to push messages.
Now, for the client code, assuming that I am already in a worker thread, how do I properly set it up so that I can continuously receive messages that come from the server at random times?
void StartConnection(const std::string& user) {
  Connection request;
  request.set_userid(user);  // the userId field generates set_userid()
  Message message;
  ClientContext context;
  stub_->Connect(&context, request);
  // What should I do from now on?
  // notify(serverMessage);
}

void notify(std::string message) {
  // generate message events and pass to main event loop
}
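For reference, I believe the synchronous API boils down to a blocking ClientReader::Read() loop, roughly like the sketch below (notify() is my hook from above), but I want to know the right way to drive this from my worker thread:

// Sketch: blocking receive loop with the synchronous generated stub.
// Assumes stub_ is a LongLivedConnection::Stub.
void StartConnection(const std::string& user) {
  Connection request;
  request.set_userid(user);
  grpc::ClientContext context;
  std::unique_ptr<grpc::ClientReader<Message>> reader =
      stub_->Connect(&context, request);
  Message message;
  while (reader->Read(&message)) {      // blocks until the server pushes one
    notify(message.servermessage());    // hand off to the main event loop
  }
  grpc::Status status = reader->Finish();  // stream ended; inspect why
}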
I figured out how to use the API. It looks pretty flexible, but still a little bit weird given that I would typically expect an async API to take some kind of lambda callback.
The code below is blocking, so you'll have to run it in a different thread to keep it from blocking your application.
I believe you can have multiple threads accessing the CompletionQueue, but in my case I just had one single thread handling this gRPC connection.
GrpcConnection.h file:
public:
  void StartGrpcConnection();
private:
  std::shared_ptr<grpc::Channel> m_channel;
  std::unique_ptr<grpc::ClientReader<push_notifications::Message>> m_reader;
  std::unique_ptr<push_notifications::PushNotificationService::Stub> m_stub;
GrpcConnection.cpp file:
...
void GrpcConnectionService::StartGrpcConnection()
{
    m_channel = grpc::CreateChannel("localhost:50051", grpc::InsecureChannelCredentials());
    LongLiveConnection::Connect request;
    request.set_user_id(12345);
    m_stub = LongLiveConnection::LongLiveConnectionService::NewStub(m_channel);
    grpc::ClientContext context;
    grpc::CompletionQueue cq;
    std::unique_ptr<grpc::ClientAsyncReader<LongLiveConnection::Message>> reader =
        m_stub->PrepareAsyncConnect(&context, request, &cq);
    void* got_tag;
    bool ok = false;
    LongLiveConnection::Message reply;

    reader->StartCall((void*)1);
    cq.Next(&got_tag, &ok);
    if (ok && got_tag == (void*)1)
    {
        // StartCall() succeeded if ok is true and got_tag is (void*)1.
        // Start the first read with a different hardcoded tag.
        reader->Read(&reply, (void*)2);
        while (true)
        {
            ok = false;
            cq.Next(&got_tag, &ok);
            if (!ok)
            {
                // ok == false on a read means the stream has ended.
                break;
            }
            if (got_tag == (void*)2)
            {
                // This is a message from the server.
                std::string body = reply.server_message();
                // Do whatever you want with body; in my case I push it to my
                // application's event stream to be processed by other components.
                // Lastly, initiate another read.
                reader->Read(&reply, (void*)2);
            }
            else if (got_tag == (void*)3)
            {
                // If you do something else in your call, such as listening to gRPC
                // channel state changes, you can pass a different hardcoded tag;
                // then you will be notified here when the result of that call arrives.
            }
        }
    }
}
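One thing the loop above does not collect is the final status of the stream. When Next() hands back the read tag with ok == false, the stream is over, and a Finish() can then be posted to learn why. A possible addition, assuming the same reader and cq variables (the (void*)4 tag is just an unused marker I picked):

// Sketch: after the read loop breaks, fetch the final stream status.
grpc::Status status;
reader->Finish(&status, (void*)4);
cq.Next(&got_tag, &ok);
if (got_tag == (void*)4 && !status.ok())
{
    // The server ended the stream with an error;
    // status.error_message() explains why.
}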
I'm trying to work on the basic async gRPC C++ code as shown in https://github.com/grpc/grpc/tree/master/examples/cpp/helloworld
The aim is to experiment with multithreading the RPC handlers. Can someone let me know whether the logic below is correct?
A few questions:
Is the completion queue thread-safe?
I see that there is a possibility of one instance of CallData being accessed in multiple threads (returned as part of tag). Is CallData thread-safe here or do we need to have a mutex for the same?
Please note that this is an incomplete program.
class ServerImpl final {
 public:
  ~ServerImpl() {
    server_->Shutdown();
    // Always shutdown the completion queue after the server.
    cq_->Shutdown();
  }

  // There is no shutdown handling in this code.
  void Run() {
    std::string server_address("0.0.0.0:50051");
    ServerBuilder builder;
    // Listen on the given address without any authentication mechanism.
    builder.AddListeningPort(server_address, grpc::InsecureServerCredentials());
    // Register "service_" as the instance through which we'll communicate with
    // clients. In this case it corresponds to an *asynchronous* service.
    builder.RegisterService(&service_);
    // Get hold of the completion queue used for the asynchronous communication
    // with the gRPC runtime.
    cq_ = builder.AddCompletionQueue();
    // Finally assemble the server.
    server_ = builder.BuildAndStart();
    std::cout << "Server listening on " << server_address << std::endl;
    // Proceed to the server's main loop.
    std::thread thread1(HandleRpcsHelper, 1, this);
    sleep(1);
    std::thread thread2(HandleRpcsHelper, 2, this);
    //HandleRpcs();
    thread1.join();
    thread2.join();
  }

 private:
  // Class encompassing the state and logic needed to serve a request.
  class CallData {
   public:
    // Take in the "service" instance (in this case representing an asynchronous
    // server) and the completion queue "cq" used for asynchronous communication
    // with the gRPC runtime.
    CallData(Greeter::AsyncService* service, ServerCompletionQueue* cq)
        : service_(service), cq_(cq), responder_(&ctx_), status_(CREATE) {
      // Invoke the serving logic right away.
      Proceed();
    }

    void Proceed() {
      if (status_ == CREATE) {
        std::cout << "CREATE " << this << std::endl;
        // Make this instance progress to the PROCESS state.
        status_ = PROCESS;
        // As part of the initial CREATE state, we *request* that the system
        // start processing SayHello requests. In this request, "this" acts as
        // the tag uniquely identifying the request (so that different CallData
        // instances can serve different requests concurrently), in this case
        // the memory address of this CallData instance.
        service_->RequestSayHello(&ctx_, &request_, &responder_, cq_, cq_, this);
      } else if (status_ == PROCESS) {
        // Spawn a new CallData instance to serve new clients while we process
        // the one for this CallData. The instance will deallocate itself as
        // part of its FINISH state.
        std::cout << "PROCESS " << this << std::endl;
        new CallData(service_, cq_);
        // The actual processing.
        std::string prefix("Hello ");
        reply_.set_message(prefix + request_.name());
        // And we are done! Let the gRPC runtime know we've finished, using the
        // memory address of this instance as the uniquely identifying tag for
        // the event.
        status_ = FINISH;
        std::cout << "SETTING TO FINISH " << this << std::endl;
        responder_.Finish(reply_, Status::OK, this);
      } else {
        std::cout << "FINISH " << this << std::endl;
        GPR_ASSERT(status_ == FINISH);
        // Once in the FINISH state, deallocate ourselves (CallData).
        delete this;
      }
    }

   private:
    // The means of communication with the gRPC runtime for an asynchronous
    // server.
    Greeter::AsyncService* service_;
    // The producer-consumer queue for asynchronous server notifications.
    ServerCompletionQueue* cq_;
    // Context for the rpc, allowing us to tweak aspects of it such as the use
    // of compression, authentication, as well as to send metadata back to the
    // client.
    ServerContext ctx_;
    // What we get from the client.
    HelloRequest request_;
    // What we send back to the client.
    HelloReply reply_;
    // The means to get back to the client.
    ServerAsyncResponseWriter<HelloReply> responder_;
    // Let's implement a tiny state machine with the following states.
    enum CallStatus { CREATE, PROCESS, FINISH };
    CallStatus status_;  // The current serving state.
  };

  static void HandleRpcsHelper(int id, ServerImpl* server)
  {
    std::cout << "ID:" << id << std::endl;
    server->HandleRpcs(id);
  }

  // This can be run in multiple threads if needed.
  void HandleRpcs(int id) {
    // Spawn a new CallData instance to serve new clients.
    new CallData(&service_, cq_.get());
    void* tag;  // uniquely identifies a request.
    bool ok;
    while (true) {
      // Block waiting to read the next event from the completion queue. The
      // event is uniquely identified by its tag, which in this case is the
      // memory address of a CallData instance.
      // The return value of Next should always be checked. This return value
      // tells us whether there is any kind of event or cq_ is shutting down.
      std::cout << "waiting... " << id << std::endl;
      GPR_ASSERT(cq_->Next(&tag, &ok));
      GPR_ASSERT(ok);
      std::cout << "wakeup... " << id << std::endl;
      static_cast<CallData*>(tag)->Proceed();
    }
  }

  std::unique_ptr<ServerCompletionQueue> cq_;
  Greeter::AsyncService service_;
  std::unique_ptr<Server> server_;
};

int main(int argc, char** argv) {
  ServerImpl server;
  server.Run();
  return 0;
}
First of all, you may want to read gRPC Performance Best Practices for this topic.
Is the completion queue thread-safe?
Yes
I see that there is a possibility of one instance of CallData being accessed in multiple threads (returned as part of tag). Is CallData thread-safe here or do we need to have a mutex for the same?
No mutex is needed, because your code creates a new CallData instance for every RPC call, and each instance has at most one operation in flight at a time, so its tag can only come back from the completion queue on one thread at a time. As long as you don't share state among multiple CallData instances (e.g. via static member variables), you don't need to worry about synchronization.
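If you want to scale further, the Performance Best Practices doc suggests registering one completion queue per polling thread rather than sharing one queue between threads. A rough sketch of how Run() could be adapted, just as an illustration (the variable names and thread count are mine, not from the question):

// Sketch: one completion queue per handler thread; each thread drains
// only its own queue, so queues are never shared.
std::vector<std::unique_ptr<grpc::ServerCompletionQueue>> cqs;
std::vector<std::thread> threads;
const int kNumThreads = 2;
for (int i = 0; i < kNumThreads; ++i) {
  cqs.push_back(builder.AddCompletionQueue());
}
server_ = builder.BuildAndStart();
for (int i = 0; i < kNumThreads; ++i) {
  threads.emplace_back([this, cq = cqs[i].get()] {
    new CallData(&service_, cq);  // seed one pending call per queue
    void* tag;
    bool ok;
    while (cq->Next(&tag, &ok)) {
      if (ok) static_cast<CallData*>(tag)->Proceed();
    }
  });
}
for (auto& t : threads) t.join();

Each thread then owns its queue outright, so the question of concurrent access to a queue never arises.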
There is zero documentation on how to do an async bidirectional stream with gRPC. I've made guesses by piecing together the regular async examples with what I found in people's GitHub repos.
With the Frankenstein code I have, I cannot figure out how to tell when a client has sent me a message. Here is the procedure I have running on its own thread.
void GrpcStreamingServerImpl::listeningThreadProc()
{
    try
    {
        // I think we make a call to the RPC method and wait for others to stream to it?
        ::grpc::ServerContext context;
        void* ourOneAndOnlyTag = reinterpret_cast<void*>(1);  ///< Identifies the call we are going to make. I assume we can only handle one client
        ::grpc::ServerAsyncReaderWriter<mycompanynamespace::OutputMessage,
                                        mycompanynamespace::InputMessage>
            stream(&context);
        m_service.RequestMessageStream(&context, &stream, m_completionQueue.get(), m_completionQueue.get(), ourOneAndOnlyTag);

        // Now I'm going to loop and get events from the completion queue
        bool keepGoing = false;
        do
        {
            void* tag = nullptr;
            bool ok = false;
            const std::chrono::time_point<std::chrono::system_clock> deadline(std::chrono::system_clock::now() +
                                                                              std::chrono::seconds(1));
            grpc::CompletionQueue::NextStatus nextStatus = m_completionQueue->AsyncNext(&tag, &ok, deadline);
            switch (nextStatus)
            {
                case grpc::CompletionQueue::NextStatus::TIMEOUT:
                {
                    keepGoing = true;
                    break;
                }
                case grpc::CompletionQueue::NextStatus::GOT_EVENT:
                {
                    keepGoing = true;
                    if (ok)
                    {
                        // This seems to get called if a client connects
                        // It does not get called if we didn't call 'RequestMessageStream' before the loop started
                        // TODO - How do we tell when the client sends us a message?
                        // TODO - How do we know if they are just connecting?
                        // TODO - How do we get the message the client sent?
                        // The tag corresponds to the request we made
                        if (tag == reinterpret_cast<void*>(1))
                        {
                            // SNIP successful writing of a message
                            stream.Write(*(outputMessage.get()), reinterpret_cast<void*>(2));
                        }
                        else if (tag == reinterpret_cast<void*>(2))
                        {
                            // This is telling us the message we sent was completed
                        }
                        else
                        {
                            // TODO - I dunno what else it can be
                        }
                    }
                    break;
                }
                case grpc::CompletionQueue::NextStatus::SHUTDOWN:
                {
                    keepGoing = false;
                    break;
                }
            }
        } while (keepGoing);
        // Completion queue was shutdown
    }
    catch (std::exception& e)
    {
        QString errorMessage(
            QString("An std::exception was caught in the listening thread. Exception message: %1").arg(e.what()));
        m_backPointer->onImplError(errorMessage);
    }
    catch (...)
    {
        QString errorMessage("An exception of unknown type was caught in the listening thread.");
        m_backPointer->onImplError(errorMessage);
    }
}
Setup looked like this:

// Start up the grpc service
grpc::ServerBuilder builder;
builder.RegisterService(&m_service);
builder.AddListeningPort(endpoint.toStdString(), grpc::InsecureServerCredentials());
m_completionQueue = builder.AddCompletionQueue();
m_server = builder.BuildAndStart();

// Start the listening thread
m_listeningThread = QThread::create(&GrpcStreamingServerImpl::listeningThreadProc, this);
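For what it's worth, the piece the TODOs are circling is that reads on a ServerAsyncReaderWriter are explicit: the completion queue will never report an incoming client message unless a Read() was posted beforehand with its own tag. A sketch of how the GOT_EVENT branch could distinguish the cases (the tag numbering and the inputMessage variable are illustrative, not from the original code):

// Sketch: post an explicit Read() so the completion queue can report
// incoming client messages. Tags 1/2/3 are arbitrary markers.
mycompanynamespace::InputMessage inputMessage;

if (tag == reinterpret_cast<void*>(1))
{
    // RequestMessageStream completed: a client has connected.
    // Post the first read; its completion will surface as tag 3.
    stream.Read(&inputMessage, reinterpret_cast<void*>(3));
}
else if (tag == reinterpret_cast<void*>(2))
{
    // A previous Write() finished.
}
else if (tag == reinterpret_cast<void*>(3))
{
    // A Read() completed: inputMessage now holds the client's message.
    // Consume it, then post another Read() to keep receiving.
    stream.Read(&inputMessage, reinterpret_cast<void*>(3));
}
// If AsyncNext() reports ok == false for the read tag, the client has
// finished sending; call stream.Finish(grpc::Status::OK, someFinishTag).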
I am following ASIO's async_tcp_echo_server.cpp example to write a server.
My server logic looks like this (.cpp part):
1. Server startup:
bool Server::Start()
{
    mServerThread = std::thread(&Server::ServerThreadFunc, this, std::ref(ios));
    // ios is asio::io_service
    return true;  // report successful startup
}
2. Init the acceptor and listen for incoming connections:
void Server::ServerThreadFunc(io_service& service)
{
    tcp::endpoint endp{ address::from_string(LOCAL_HOST), MY_PORT };
    mAcceptor = acceptor_ptr(new tcp::acceptor{ service, endp });
    // Add a job to start accepting connections.
    StartAccept(*mAcceptor);
    // Process the event loop. Hang here till the service is terminated.
    service.run();
    std::cout << "Server thread exiting." << std::endl;
}
3. Accept a connection and start reading from the client:
void Server::StartAccept(tcp::acceptor& acceptor)
{
    acceptor.async_accept([&](std::error_code err, tcp::socket socket)
    {
        if (!err)
        {
            std::make_shared<Connection>(std::move(socket))->StartRead(mCounter);
            StartAccept(acceptor);
        }
        else
        {
            std::cerr << "Error: " << "Failed to accept new connection: " << err.message() << std::endl;
            return;
        }
    });
}

void Connection::StartRead(uint32_t frameIndex)
{
    asio::async_read(mSocket, asio::buffer(&mHeader, sizeof(XHeader)),
                     std::bind(&Connection::ReadHandler, shared_from_this(),
                               std::placeholders::_1, std::placeholders::_2, frameIndex));
}
The Connection instance then triggers the ReadHandler callback, where I perform the actual read and write:
void Connection::ReadHandler(const asio::error_code& error, size_t bytes_transfered, uint32_t frameIndex)
{
    if (bytes_transfered == sizeof(XHeader))
    {
        uint32_t reply;
        if (mHeader.code == 12345)
        {
            reply = (uint32_t)12121;
            size_t len = asio::write(mSocket, asio::buffer(&reply, sizeof(uint32_t)));
        }
        else
        {
            reply = (uint32_t)0;
            size_t len = asio::write(mSocket, asio::buffer(&reply, sizeof(uint32_t)));
            this->mSocket.shutdown(tcp::socket::shutdown_both);
            return;
        }
    }

    while (mSocket.is_open())
    {
        XPacket packet;
        packet.dataSize = rt->buff.size();
        packet.data = rt->buff.data();
        std::vector<asio::const_buffer> buffers;
        buffers.push_back(asio::buffer(&packet.dataSize, sizeof(uint64_t)));
        buffers.push_back(asio::buffer(packet.data, packet.dataSize));
        auto self(shared_from_this());
        asio::async_write(mSocket, buffers,
            [this, self](const asio::error_code error, size_t bytes_transfered)
            {
                if (error)
                {
                    ERROR(200, "Error sending packet");
                    ERROR(200, error.message().c_str());
                }
            });
    }
}
Now, here is the problem. The server receives data from the client, and replying with the synchronous asio::write works fine. But when it comes to asio::async_read or asio::async_write inside the while loop, the lambda callback never gets triggered, unless I put io_context().run_one(); immediately after it. I don't understand why I see this behaviour. I do call io_service::run() right after the acceptor init, so it blocks there till the server exits. The only difference between my code and the asio example, as far as I can tell, is that I run my logic from a custom thread.
Your callback isn't returning, which prevents the event loop from executing other handlers: the while (mSocket.is_open()) loop spins inside ReadHandler, so io_service::run() never gets a chance to dispatch the completions you are waiting on.
In general, if you want an asynchronous flow, you chain callbacks: a callback checks is_open(), and if true calls async_write() with itself as the callback.
In either case, the callback returns.
This allows the event loop to run, calling your callback, and so on.
In short, you should make sure your asynchronous callbacks always return in a reasonable time frame.
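A minimal sketch of that chaining, assuming a hypothetical SendNext() member and an mSendBuffer member on Connection (both are illustrative, not from the question):

// Sketch: replace the while loop with a self-chaining async_write.
// Each completion handler returns promptly and re-arms the next write,
// so io_service::run() stays free to dispatch handlers.
void Connection::SendNext()
{
    if (!mSocket.is_open())
        return;  // chain ends when the socket closes

    auto self(shared_from_this());
    asio::async_write(mSocket, asio::buffer(mSendBuffer),
        [this, self](const asio::error_code& error, size_t /*bytes_transfered*/)
        {
            if (error)
                return;     // stop the chain on error
            SendNext();     // handler returns after scheduling the next write
        });
}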
I've got a problem with the ZeroMQ Majordomo worker API, which fails on an assertion when I use this simple worker and client.
The broker I am using is straight from the examples section of the ZeroMQ site. What is m_reply_to used for, and when is it set?
mdwrkapi.hpp:123: zmsg* mdwrk::recv(zmsg*&): Assertion `m_reply_to.size()!=0' failed.
Here is the worker code.
mdwrk session ("tcp://localhost:5555", "GenericData", verbose);

zmsg *reply = 0;
while (1) {
    zmsg *request = session.recv (reply);
    if (request == 0) {
        break;  // Worker was interrupted
    }
    reply = request;  // Echo is complex… :-)
}
And here is the client part:
mdcli session ("tcp://localhost:5555", verbose);

int count = 1;
while (1) {
    zmsg *request = new zmsg("Hello world");
    zmsg *reply = session.send ("GenericData", request);
    if (reply) {
        delete reply;
    } else {
        puts("Interrupt or failure");
        continue;
    }
    sleep(1);
    puts("sleeping");
}
What is m_reply_to used for?
As taken from the Majordomo source code, m_reply_to is declared as:
/* =====================================================================
mdwrkapi.hpp
Majordomo Protocol Worker API
Implements the MDP/Worker spec at http://rfc.zeromq.org/spec:7.
---------------------------------------------------------------------
Copyright (c) 1991-2011 iMatix Corporation <www.imatix.com>
...
*/
...
private:
...
// Return address, if any
std::string m_reply_to; // <<------------------------- RETURN ADDRESS
and serves to store a return address, as here in recv():
// We should pop and save as many addresses as there are
// up to a null part, but for now, just save one...
m_reply_to = msg->unwrap ();
When is it set?
As taken from the source code, it happens inside recv():
// ---------------------------------------------------------------------
// Send reply, if any, to broker and wait for next request.

zmsg *
recv (zmsg *&reply_p)
{
    // Format and send the reply if we were provided one
    zmsg *reply = reply_p;
    assert (reply || !m_expect_reply);
    if (reply) {
        assert (m_reply_to.size()!=0);
...