How to use gRPC C++ ClientAsyncReader<Message> for server-side streams

I am using a very simple proto where the Message contains only one string field, like so:

service LongLivedConnection {
  // Starts a grpc connection
  rpc Connect(Connection) returns (stream Message) {}
}

message Connection {
  string userId = 1;
}

message Message {
  string serverMessage = 1;
}
The use case is that the client should connect to the server, and the server will use this grpc for push messages.
Now, for the client code, assuming that I am already in a worker thread, how do I properly set it up so that I can continuously receive messages that come from the server at random times?
void StartConnection(const std::string& user) {
    Connection request;
    request.set_userid(user);
    Message message;
    ClientContext context;
    std::unique_ptr<ClientReader<Message>> reader = stub_->Connect(&context, request);
    // What should I do from now on?
    // notify(serverMessage);
}

void notify(std::string message) {
    // generate message events and pass to main event loop
}

I figured out how to use the API. It looks like it is pretty flexible, but still a little bit weird, given that I would typically just expect an async API to take some kind of lambda callback.
The code below is blocking, so you'll have to run it in a different thread to keep it from blocking your application.
I believe you can have multiple threads accessing the CompletionQueue, but in my case I just had a single thread handling this gRPC connection.
GrpcConnection.h file:
public:
    void StartGrpcConnection();
private:
    std::shared_ptr<grpc::Channel> m_channel;
    std::unique_ptr<grpc::ClientAsyncReader<push_notifications::Message>> m_reader;
    std::unique_ptr<push_notifications::PushNotificationService::Stub> m_stub;
GrpcConnection.cpp file:
...
void GrpcConnectionService::StartGrpcConnection()
{
    m_channel = grpc::CreateChannel("localhost:50051", grpc::InsecureChannelCredentials());
    LongLiveConnection::Connect request;
    request.set_user_id(12345);
    m_stub = LongLiveConnection::LongLiveConnectionService::NewStub(m_channel);
    grpc::ClientContext context;
    grpc::CompletionQueue cq;
    std::unique_ptr<grpc::ClientAsyncReader<LongLiveConnection::Message>> reader =
        m_stub->PrepareAsyncConnect(&context, request, &cq);
    void* got_tag;
    bool ok = false;
    LongLiveConnection::Message reply;

    // PrepareAsyncConnect does not start the call; StartCall does, and its
    // completion is reported on the queue with the tag passed here.
    reader->StartCall((void*)1);
    cq.Next(&got_tag, &ok);
    if (ok && got_tag == (void*)1)
    {
        // StartCall() succeeded: ok is true and got_tag is (void*)1.
        // Request the first message with a different hardcoded tag.
        reader->Read(&reply, (void*)2);
        while (true)
        {
            ok = false;
            cq.Next(&got_tag, &ok);
            if (!ok)
            {
                // The call is dead (stream finished, cancelled, or the
                // channel dropped); stop reading.
                break;
            }
            if (got_tag == (void*)2)
            {
                // A message arrived from the server.
                std::string body = reply.server_message();
                // Do whatever you want with body; in my case I push it to my
                // application's event stream to be processed by other
                // components. Lastly, request another read.
                reader->Read(&reply, (void*)2);
            }
            else if (got_tag == (void*)3)
            {
                // If you issue other operations in this call, such as
                // listening for gRPC channel state changes, you can pass a
                // different hardcoded tag; you will then be notified here
                // when that operation's result arrives.
            }
        }
    }
}
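As an aside on the (void*)3 branch: one operation you might arm on the same queue is a channel connectivity watch. A minimal sketch, reusing m_channel and cq from the code above; the 30-second deadline is an arbitrary illustration:

// Sketch: arm a connectivity watch whose completion carries tag (void*)3.
// NotifyOnStateChange completes when the channel leaves `current` or when
// the deadline expires, whichever comes first.
grpc_connectivity_state current = m_channel->GetState(/*try_to_connect=*/false);
m_channel->NotifyOnStateChange(
    current,
    std::chrono::system_clock::now() + std::chrono::seconds(30),
    &cq,
    (void*)3);
// In the cq.Next() loop, got_tag == (void*)3 then means the watch fired:
// query GetState() again and re-arm the watch for continuous monitoring.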

Related

gRPC Async Server : send stream from client to server: assertion failed: ok

I'm trying to set up a gRPC async server which receives a stream from the client (ServerAsyncReader usage). I'm a bit confused by the process, and I'm stuck on an assertion error.
Here is my GreeterAsyncServer.
The failed assertion is at the line GPR_ASSERT(ok);: ok is false when the client sends colors (strings).
void GreeterAsyncServer::HandleRpcs() {
    // Spawn a new CallData instance to serve new clients.
    new CallDataSendColors(&service_, cq_.get(), this);
    void* tag; // uniquely identifies a request.
    bool ok;
    while (true) {
        // Block waiting to read the next event from the completion queue. The
        // event is uniquely identified by its tag, which in this case is the
        // memory address of a CallData instance.
        // The return value of Next should always be checked. This return value
        // tells us whether there is any kind of event or cq_ is shutting down.
        GPR_ASSERT(cq_->Next(&tag, &ok));
        GPR_ASSERT(ok);
        static_cast<CallData*>(tag)->Proceed();
    }
}

void GreeterAsyncServer::Run(std::string server_address) {
    ServerBuilder builder;
    // Listen on the given address without any authentication mechanism.
    builder.AddListeningPort(server_address, grpc::InsecureServerCredentials());
    // Register "service" as the instance through which we'll communicate with
    // clients. In this case it corresponds to an *asynchronous* service.
    builder.RegisterService(&service_);
    // Get hold of the completion queue used for the asynchronous communication
    // with the gRPC runtime.
    cq_ = builder.AddCompletionQueue();
    // Finally assemble the server.
    server_ = builder.BuildAndStart();
    std::cout << "Server listening on " << server_address << std::endl;
    // Proceed to the server's main loop.
    HandleRpcs();
}
Here is my .proto file:

// .proto file
// The greeting service definition.
service Greeter {
  // Sends several colors to the server and receives the replies
  rpc SendColors (stream ColorRequest) returns (HelloReply) {}
}

// The response message containing the greetings
message HelloReply {
  string message = 1;
}

// The request message containing the color
message ColorRequest {
  string color = 1;
}
Here is my CallData class:

#pragma once

#include "CallDataT.h"

using grpc::ServerAsyncReader;
using helloworld::HelloReply;
using helloworld::ColorRequest;

class CallDataSendColors {
protected:
    enum CallStatus { CREATE, PROCESS, FINISH };
    CallStatus status_;

    Greeter::AsyncService* service_;
    ServerCompletionQueue* completionQueue_;
    ServerContext serverContext_;
    // What we get from the client.
    ColorRequest request_;
    // What we send back to the client.
    HelloReply reply_;
    // The means to get back to the client.
    ServerAsyncReader<HelloReply, ColorRequest> reader_;

    virtual void AddNextToCompletionQueue() override {
        new CallDataSendColors(service_, completionQueue_, observer_);
    }

    virtual void WaitForRequest() override {
        service_->RequestSendColors(&serverContext_, &reader_, completionQueue_, completionQueue_, this);
    }

    virtual void HandleRequest() override {
        reader_.Read(&request_, this);
    }

    virtual void Proceed() override {
        if (status_ == CREATE) {
            status_ = PROCESS;
            WaitForRequest();
        }
        else if (status_ == PROCESS) {
            AddNextToCompletionQueue();
            HandleRequest();
            status_ = FINISH;
            reply_.set_message(std::string("Color ") + std::string("received"));
            reader_.Finish(reply_, grpc::Status::OK, this);
        }
        else {
            // We're done! Self-destruct!
            if (status_ != FINISH) {
                // Log some error message
            }
            delete this;
        }
    }

public:
    CallDataSendColors(Greeter::AsyncService* service, ServerCompletionQueue* completionQueue, IObserver* observer = nullptr) :
        status_(CREATE),
        service_(service),
        completionQueue_(completionQueue),
        reader_(&serverContext_) {
        Proceed();
    }
};
As you can see in the HandleRequest() method, reader_.Read(&request_, this) will be called only once. I don't know how to keep calling it for as long as messages are coming in.
I would appreciate it if someone could help me.
Thank you in advance.
I've found the solution. The method GreeterAsyncServer::HandleRpcs() must be edited like this:
while (true) {
    bool gotEvent = cq_->Next(&tag, &ok);
    if (false == gotEvent || true == isShuttingDown_) {
        break;
    }
    //GPR_ASSERT(ok);
    if (true == ok) {
        static_cast<CallData*>(tag)->Proceed();
    }
}
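The underlying point is that a Read tag completes with ok == false once the client calls WritesDone(), so ok == false on a read is the normal end-of-stream signal, not an error. Here is a minimal sketch (not the asker's exact class) of a server-side read loop built on that: it assumes Proceed() is changed to take the ok flag, i.e. the loop above calls static_cast<CallData*>(tag)->Proceed(ok), and a READ state replaces PROCESS. Spawning the next CallDataSendColors for new clients is omitted:

void Proceed(bool ok) {
    switch (status_) {
    case CREATE:
        status_ = READ;
        service_->RequestSendColors(&serverContext_, &reader_,
                                    completionQueue_, completionQueue_, this);
        break;
    case READ:
        if (ok) {
            // The first completion here means the RPC has started; later
            // completions mean a message has arrived in request_. Either
            // way, issue the next read; it completes with ok == false once
            // the client calls WritesDone().
            reader_.Read(&request_, this);
        } else {
            // The client is done streaming: send the single reply and finish.
            status_ = FINISH;
            reply_.set_message("Colors received");
            reader_.Finish(reply_, grpc::Status::OK, this);
        }
        break;
    case FINISH:
        delete this;
        break;
    }
}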

Detect client context destruction from gRPC server

I have created an async C++ gRPC server that offers several RPCs with signatures similar to this:
service Foo {
  rpc FunctionalityA(ARequest) returns (stream AResponse);
  rpc FunctionalityB(BRequest) returns (stream BResponse);
}
The client creates one channel to connect to this service and calls the various RPCs from separate threads, something like this:
class FooClient {
    // ...
    void FunctionalityA() {
        auto stub = example::Foo::NewStub(m_channel);
        grpc::ClientContext context;
        example::ARequest request;
        example::AResponse response;
        auto reader = stub->FunctionalityA(&context, request);
        for (int i = 0; i < 3; i++) {
            reader->Read(&response);
        }
    }
    void FunctionalityB() {
        auto stub = example::Foo::NewStub(m_channel);
        grpc::ClientContext context;
        example::BRequest request;
        example::BResponse response;
        auto reader = stub->FunctionalityB(&context, request);
        for (int i = 0; i < 3; i++) {
            reader->Read(&response);
        }
    }
    // ...
};

int main() {
    // ...
    FooClient client(grpc::CreateChannel("127.0.0.1:12345", grpc::InsecureChannelCredentials()));
    auto ta = std::thread(&FooClient::FunctionalityA, &client);
    auto tb = std::thread(&FooClient::FunctionalityB, &client);
    // ...
}
I want to implement the server so that:
- when FunctionalityA is called, it starts streaming objects of type AResponse
- when FunctionalityB is called, it starts streaming objects of type BResponse
- when the context used to call FunctionalityA is cancelled, streaming of AResponse ends
- when the context used to call FunctionalityB is cancelled, streaming of BResponse ends
The problem I face is that even when the ClientContext associated with one of the two functionalities goes out of scope (after the 3 reads in the example), the server does not receive any notification and keeps writing, and the "ok" status remains true.
The "ok" status only goes to false, allowing me to stop writing, when the client disconnects.
Is this the intended behavior of gRPC? Does the client need to send a specific "kiss of death" message in order to inform the server to stop writing on the stream?
Here is an example of the server-side implementation of one functionality, for completeness:
void FunctionalityB::ProcessRequest(bool ok, RequestState state) {
    if (!ok) {
        if (state == RequestState::START) {
            // The server has been Shutdown before this particular call got
            // matched to an incoming RPC.
            delete this;
        } else if (state == RequestState::WRITE || state == RequestState::FINISH) {
            // Not going to the wire because the call is already dead (i.e.,
            // canceled, deadline expired, other side dropped the channel, etc.).
            delete this;
        } else {
            // unhandled state
        }
    } else {
        if (state == RequestState::START) {
            // The RPC has indeed been started.
            m_writer.Write(m_response, CreateTag(RequestState::WRITE));
            // The constructor of the functionality requests a new one to
            // handle future new connections.
            new FunctionalityB(m_completion_queue, m_service, m_worker);
        } else if (state == RequestState::WRITE) {
            // TODO do some real work
            std::this_thread::sleep_for(std::chrono::milliseconds(50));
            m_writer.Write(m_response, CreateTag(RequestState::WRITE)); // this write will continue forever, even after the client stops reading and TryCancels its context
        } else if (state == RequestState::FINISH) {
            delete this;
        } else {
            // unhandled state
        }
    }
}
There are two ways to detect call cancellation on the server.
The first one is to check ServerContext::IsCancelled(). That is something you can check right before you do a write, which in this case may be fine. In the general case, though, it may not be ideal, because your application might be waiting for some other event (other than the previous write completing) before it does another write, and you ideally want some async way of getting notified when the cancellation happens.
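In code, the polling check might look like this: a sketch fitted into the question's WRITE branch, where m_server_context is an assumed name for the grpc::ServerContext member:

// Sketch: poll for cancellation right before issuing the next write.
if (m_server_context.IsCancelled()) {
    // The client went away (ClientContext destroyed or TryCancel() called):
    // stop streaming and finish the call instead of writing forever.
    m_writer.Finish(grpc::Status::CANCELLED, CreateTag(RequestState::FINISH));
} else {
    m_writer.Write(m_response, CreateTag(RequestState::WRITE));
}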
Which brings me to the second approach, which is to request an event on the completion queue when the call is cancelled by calling ServerContext::AsyncNotifyWhenDone() before the RPC starts. This will give you async notification of the cancellation, but unfortunately, the API is very cumbersome and has a few sharp edges. (This is something that is handled much more cleanly in the new callback-based API, but that API isn't that performant in OSS until we finish the EventEngine effort.)
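A sketch of that second approach, fitted to the question's tag scheme; RequestState::DONE and m_cancelled are hypothetical additions, and the registration call follows the generated RequestFunctionalityB convention:

// Sketch: register the done-tag before the RPC is requested, then treat its
// completion as the cancellation signal.
FunctionalityB::FunctionalityB(grpc::ServerCompletionQueue* cq,
                               Foo::AsyncService* service /* ... */)
    : m_completion_queue(cq), m_service(service), m_writer(&m_server_context) {
    // AsyncNotifyWhenDone must be called before the RPC starts.
    m_server_context.AsyncNotifyWhenDone(CreateTag(RequestState::DONE));
    m_service->RequestFunctionalityB(&m_server_context, &m_request, &m_writer,
                                     m_completion_queue, m_completion_queue,
                                     CreateTag(RequestState::START));
}

void FunctionalityB::ProcessRequest(bool ok, RequestState state) {
    if (state == RequestState::DONE) {
        // Delivered when the RPC terminates for any reason; IsCancelled() is
        // now meaningful. One of the sharp edges: the object must not delete
        // itself while this tag is still outstanding.
        m_cancelled = m_server_context.IsCancelled();
        return;
    }
    if (state == RequestState::WRITE && ok && m_cancelled) {
        // Stop the infinite write loop as soon as the cancellation is seen.
        m_writer.Finish(grpc::Status::CANCELLED, CreateTag(RequestState::FINISH));
        return;
    }
    // ... existing START/WRITE/FINISH handling from the question ...
}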
I hope this info is helpful.

gRPC: What are the best practices for long-running streaming?

We've implemented a Java gRPC service that runs in the cloud, with a unidirectional (client-to-server) streaming RPC which looks like:
rpc PushUpdates(stream Update) returns (Ack);
A C++ client (a mobile device) calls this RPC as soon as it boots up, to continuously send an update every 30 or so seconds, perpetually, as long as the device is up and running.
ChannelArguments chan_args;
// this will be a secure channel eventually
auto channel_p = CreateCustomChannel(remote_addr, InsecureChannelCredentials(), chan_args);
auto stub_p = DialTcc::NewStub(channel_p);
// ...
Ack ack;
auto strm_ctxt_p = make_unique<ClientContext>();
auto strm_p = stub_p->PushUpdates(strm_ctxt_p.get(), &ack);
// ...
while (true) {
    // wait until we are ready to send a new update
    Update updt;
    // populate updt
    if (!strm_p->Write(updt)) {
        // stream is not kosher, create a new one and restart
        break;
    }
}
Now different kinds of network interruptions happen while this is happening:
the gRPC service running in the cloud may go down (for maintenance) or may simply become unreachable.
the device's own ip address keeps changing as it is a mobile device.
We've seen that on such events, neither the channel nor the Write() API is able to detect network disconnection reliably. At times the client keeps calling Write() (which doesn't return false), but the server doesn't receive any data (wireshark doesn't show any activity on the outgoing port of the client device).
What are the best practices to recover in such cases, so that the server starts receiving the updates within X seconds of such an event occurring? It is understandable that there would be a loss of X seconds' worth of data whenever such an event happens, but we want to recover reliably within X seconds.
gRPC version: 1.30.2, Client: C++14/Linux, Server: Java/Linux
Here's how we've hacked around this. I want to check whether this can be made any better, or whether anyone from gRPC can point me to a better solution.
The protobuf for our service looks like this. It has an RPC for pinging the service, which is used frequently to test connectivity.
// Message used in IsAlive RPC
message Empty {}

// Acknowledgement sent by the service for updates received
message UpdateAck {}

// Messages streamed to the service by the client
message Update {
  ...
}

service GrpcService {
  // for checking if we're able to connect
  rpc Ping(Empty) returns (Empty);
  // streaming RPC for pushing updates by client
  rpc PushUpdate(stream Update) returns (UpdateAck);
}
Here is what the C++ client does:
Connect():
- Create the stub for calling the RPCs, if the stub is nullptr.
- Call Ping() at regular intervals until it succeeds.
- On success, call the PushUpdate(...) RPC to create a new stream.
- On failure, reset the stream to nullptr.
Stream(): in a while(true) loop:
- Get the update to be pushed.
- Call Write(...) on the stream with the update to be pushed.
- If Write(...) fails for any reason, break; control then goes back to Connect().
- Once every 30 minutes (or some regular interval), reset everything (stub, channel, stream) to nullptr to start afresh. This is required because at times Write(...) does not fail even when there is no connection between the client and the service: the Write(...) calls succeed, but the outgoing port on the client shows no activity in wireshark!
Here is the code:
constexpr int GRPC_TIMEOUT_S = 10;
constexpr int RESTART_INTERVAL_M = 15;
constexpr int GRPC_KEEPALIVE_TIME_MS = 10000;
string root_ca, tls_key, tls_cert; // for SSL
string remote_addr = "https://remote.com:5445";
...

void ResetStreaming() {
    if (stub_p) {
        if (strm_p) { // graceful restart/stop: this pair of APIs is called together, in this order
            if (!strm_p->WritesDone()) {
                // Log a message
            }
            strm_p->Finish(); // Log if the return value of this is NOT grpc::OK
        }
        strm_p = nullptr;
        strm_ctxt_p = nullptr;
        stub_p = nullptr;
        channel_p = nullptr;
    }
}

void CreateStub() {
    if (!stub_p) {
        ChannelArguments chan_args;
        chan_args.SetInt(GRPC_ARG_KEEPALIVE_TIME_MS, GRPC_KEEPALIVE_TIME_MS);
        channel_p = CreateCustomChannel(
            remote_addr,
            SslCredentials(SslCredentialsOptions{root_ca, tls_key, tls_cert}),
            chan_args);
        stub_p = GrpcService::NewStub(channel_p);
    }
}

void Stream() {
    const auto restart_time = steady_clock::now() + minutes(RESTART_INTERVAL_M);
    while (!stop) {
        // restart every RESTART_INTERVAL_M (15m) even if ALL IS WELL!!
        if (steady_clock::now() > restart_time) {
            break;
        }
        Update updt = GetUpdate(); // get the update to be sent
        if (!stop) {
            if (channel_p->GetState(true) == GRPC_CHANNEL_SHUTDOWN ||
                !strm_p->Write(updt)) {
                // could not write!!
                return; // we will Connect() again
            }
        }
    }
    // stopped due to stop == true or the interval to create a new stream has expired
    ResetStreaming(); // channel, stub, stream are recreated once every 15m
}

bool PingRemote() {
    ClientContext ctxt;
    ctxt.set_deadline(system_clock::now() + seconds(GRPC_TIMEOUT_S));
    Empty req, resp;
    CreateStub();
    if (stub_p->Ping(&ctxt, req, &resp).ok()) {
        static UpdateAck ack;
        strm_ctxt_p = make_unique<ClientContext>(); // need a new context
        strm_p = stub_p->PushUpdate(strm_ctxt_p.get(), &ack);
        return true;
    }
    if (strm_p) {
        strm_p = nullptr;
        strm_ctxt_p = nullptr;
    }
    return false;
}

void Connect() {
    while (!stop) {
        if (PingRemote() || stop) {
            break;
        }
        sleep_for(seconds(5)); // wait before retrying
    }
}

// set to true from another thread when we want to stop
atomic<bool> stop = false;

void StreamUntilStopped() {
    if (stop) {
        return;
    }
    strm_thread_p = make_unique<thread>([&] {
        while (!stop) {
            Connect();
            Stream();
        }
    });
}

// called by the thread that sets stop = true
void Finish() {
    strm_thread_p->join();
}
With this we are seeing that the streaming recovers within 15 minutes (or RESTART_INTERVAL_M) whenever there is a disruption for any reason. This code runs in a fast path, so I am curious to know if this can be made any better.
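One knob that may shrink that recovery window, without relying solely on the blanket 15-minute restart, is tightening the client's keepalive settings so that a silently dead TCP path makes Write() fail sooner. A sketch with illustrative values only; these are standard gRPC channel arguments, but the Java server must be configured to permit pings this frequent, or it will send GOAWAY to the client:

// Sketch: aggressive keepalive so a dead connection is detected in roughly
// GRPC_ARG_KEEPALIVE_TIME_MS + GRPC_ARG_KEEPALIVE_TIMEOUT_MS instead of
// waiting for the periodic restart. Values are illustrative.
ChannelArguments chan_args;
chan_args.SetInt(GRPC_ARG_KEEPALIVE_TIME_MS, 10000);          // ping every 10s
chan_args.SetInt(GRPC_ARG_KEEPALIVE_TIMEOUT_MS, 5000);        // fail if no ping ack within 5s
chan_args.SetInt(GRPC_ARG_KEEPALIVE_PERMIT_WITHOUT_CALLS, 1); // keep pinging an idle connection
chan_args.SetInt(GRPC_ARG_HTTP2_MAX_PINGS_WITHOUT_DATA, 0);   // don't cap pings between data frames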

Sending data in second thread with Mongoose server

I'm trying to create a multithreaded server application using the mongoose web server library.
I have a main thread serving connections and sending requests to processors that work in their own threads. The processors then place results into a queue, and a queue observer must send those results back to the clients.
The sources look like this:
Here I prepare the data for the processors and place it in the queue.
typedef std::pair<struct mg_connection*, const char*> TransferData;

int server_app::event_handler(struct mg_connection *conn, enum mg_event ev)
{
    Request req;
    if (ev == MG_AUTH)
        return MG_TRUE; // Authorize all requests
    else if (ev == MG_REQUEST)
    {
        req = parse_request(conn);
        task_queue->push(TransferData(conn, req.second));
        mg_printf(conn, "%s", ""); // (1)
        return MG_MORE; // (2)
    }
    else
        return MG_FALSE; // Rest of the events are not processed
}
And here I'm trying to send the result back. This function runs in its own thread.
void server_app::check_results()
{
    while (true)
    {
        TransferData res;
        if (!res_queue->pop(res))
        {
            boost::this_thread::sleep_for(boost::chrono::milliseconds(100));
            continue;
        }
        mg_printf_data(res.first, "%s", res.second); // (3)
    }
}
The problem is that the client doesn't receive anything from the server.
If I run the check_results function manually in the event_handler after placing a task into the queue, and then pass the computed result back to event_handler, I'm able to send it to the client using mg_printf_data (returning MG_TRUE). Any other way, I'm not.
What exactly should I change in these sources to make this work?
OK... it looks like I've solved it myself.
I'd been looking into the mongoose.c code, and an hour later I found the piece of code below:
static void write_terminating_chunk(struct connection *conn) {
    mg_write(&conn->mg_conn, "0\r\n\r\n", 5);
}

static int call_request_handler(struct connection *conn) {
    int result;
    conn->mg_conn.content = conn->ns_conn->recv_iobuf.buf;
    if ((result = call_user(conn, MG_REQUEST)) == MG_TRUE) {
        if (conn->ns_conn->flags & MG_HEADERS_SENT) {
            write_terminating_chunk(conn);
        }
        close_local_endpoint(conn);
    }
    return result;
}
So I tried calling mg_write(res.first, "0\r\n\r\n", 5); (what write_terminating_chunk does) after line (3), and now it's working.
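For completeness, here is a sketch of check_results() with that fix folded in. It assumes, as in the question, that the MG_REQUEST handler returned MG_MORE, so the response goes out chunked and has to be terminated explicitly:

void server_app::check_results()
{
    while (true)
    {
        TransferData res;
        if (!res_queue->pop(res))
        {
            boost::this_thread::sleep_for(boost::chrono::milliseconds(100));
            continue;
        }
        mg_printf_data(res.first, "%s", res.second); // chunked response body
        mg_write(res.first, "0\r\n\r\n", 5);         // terminating zero-length chunk
    }
}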

Wait for response from server on client

I'm trying to validate a user's login: I send a username and password to the server, the server checks that data against the database, and it sends back a yes/no depending on whether validation succeeded or failed. The client receives this, the readyRead() signal is emitted, and I handle that with a slot.
I have this login function:
bool Client::login(QString username, QString password) {
    // some code
    client.write(clientSendBuf); // send the data to the server
    // wait for response
    // if response is good, return true
    // else return false
}
I want to wait for a response from the server before login returns true or false. I know how to accept a response from the server just fine, but I basically want the data to be sent and the client program to stop until either we get a response, or some time has passed and we get a timeout.
How do I do this in Qt?
http://qt-project.org/doc/qt-4.8/qiodevice.html#waitForReadyRead
QTcpSocket client;
if (client.waitForReadyRead(15000)) {
    // do something if signal is emitted
}
else {
    // timeout
}
I didn't look through the docs properly. I found my answer.
You really do not want to write code like that. Remember that all waitFor... and exec methods can reenter your code, and are thus a source of hard-to-find bugs. Worse, they will reenter at the most inopportune moment: perhaps when you're demoing to a client, or perhaps when you've shipped your first system to Elbonia :)
The client should emit a signal when the login succeeds. There's a request to login, and a response to such a request. You can use the QStateMachine to direct the overall application's logic through such responses.
The example below presumes that the network protocol supports more than one request "on the wire" at any time. It'd be simple to get rid of the handler queue and allow just one handler.
class Client : public QObject {
    Q_OBJECT
    typedef bool (Client::*Handler)(); // returns true when the request is finished
    QTcpSocket m_client;
    QByteArray m_buffer;
    QQueue<Handler> m_responses; // always has the idle response on the bottom
    ...
    Q_SLOT void hasData() {
        Q_ASSERT(!m_responses.isEmpty());
        m_buffer += m_client.readAll();
        while (!m_buffer.isEmpty()) {
            // Invoke the handler at the head of the queue; dequeue it once it
            // reports the request as finished, otherwise wait for more data.
            if ((this->*m_responses.head())()) m_responses.dequeue();
            else break;
        }
    }
    bool processIdleRsp() {
        // Signal an error condition: we got data but expect none!
        return false; // Must never return true, since this response mustn't be dequeued.
    }
    bool processLoginRsp() {
        const int loginRspSize = ...;
        if (m_buffer.size() < loginRspSize) return false;
        bool success = false;
        ... // process the response
        emit loginRsp(success);
        m_buffer = m_buffer.mid(loginRspSize); // remove our response from the buffer
        return true;
    }
public:
    Client(QObject *parent = 0) : QObject(parent) {
        connect(&m_client, SIGNAL(readyRead()), SLOT(hasData()));
        m_responses.enqueue(&Client::processIdleRsp);
    }
    Q_SLOT void loginReq(const QString &username, const QString &password) {
        QByteArray request;
        QDataStream req(&request, QIODevice::WriteOnly);
        ...
        m_client.write(request);
        m_responses.enqueue(&Client::processLoginRsp);
    }
    Q_SIGNAL void loginRsp(bool success);
};
You could use a circular queue for the buffer to speed things up if you're transmitting lots of data. As-is, the remaining data is shoved to the front of the buffer after each response is processed.
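For illustration, here is how calling code might consume this Client without any blocking wait; the widget names and the showMainWindow/showError helpers are hypothetical, and the lambda connections use Qt 5 syntax (under Qt 4 you would connect to ordinary slots instead):

// Hypothetical wiring: the UI simply reacts to loginRsp when it arrives.
Client *client = new Client(this);
connect(loginButton, &QPushButton::clicked, [=] {
    client->loginReq(userEdit->text(), passEdit->text());
});
connect(client, &Client::loginRsp, [=](bool success) {
    if (success)
        showMainWindow();              // hypothetical
    else
        showError(tr("Login failed")); // hypothetical
});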