I'm trying to create a multithreaded server application using the mongoose web server library.
The main thread serves connections and dispatches requests to processors that run in their own threads. The processors then place results into a queue, and a queue observer must send the results back to the clients.
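For context, task_queue and res_queue in the code below are assumed to be simple thread-safe queues with a non-blocking pop; a minimal sketch of such a class (my assumption, the original post does not show it) might look like:
#include <queue>
#include <mutex>

// Hypothetical thread-safe queue matching the task_queue->push() /
// res_queue->pop() calls used below.
template <typename T>
class concurrent_queue {
public:
    void push(const T& item) {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_queue.push(item);
    }

    // Non-blocking pop: returns false immediately if the queue is empty.
    bool pop(T& item) {
        std::lock_guard<std::mutex> lock(m_mutex);
        if (m_queue.empty()) return false;
        item = m_queue.front();
        m_queue.pop();
        return true;
    }

private:
    std::queue<T> m_queue;
    std::mutex m_mutex;
};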
The sources look like this:
Here I prepare the data for the processors and place it into the queue.
typedef std::pair<struct mg_connection*, const char*> TransferData;

int server_app::event_handler(struct mg_connection *conn, enum mg_event ev)
{
    Request req;
    if (ev == MG_AUTH)
        return MG_TRUE; // Authorize all requests
    else if (ev == MG_REQUEST)
    {
        req = parse_request(conn);
        task_queue->push(TransferData(conn, req.second));
        mg_printf(conn, "%s", ""); // (1)
        return MG_MORE;            // (2)
    }
    else
        return MG_FALSE; // Rest of the events are not processed
}
And here I'm trying to send the result back. This function runs in its own thread.
void server_app::check_results()
{
    while (true)
    {
        TransferData res;
        if (!res_queue->pop(res))
        {
            boost::this_thread::sleep_for(boost::chrono::milliseconds(100));
            continue;
        }
        mg_printf_data(res.first, "%s", res.second); // (3)
    }
}
The problem is that the client doesn't receive anything from the server.
If I run the check_results function manually in the event_handler after placing a task into the queue, and then pass the computed result back to event_handler, I'm able to send it to the client using mg_printf_data (returning MG_TRUE). Any other way, I'm not.
What exactly should I change in these sources to make them work?
OK... It looks like I've solved it myself.
I'd been looking into the mongoose.c code, and an hour later I found the piece of code below:
static void write_terminating_chunk(struct connection *conn) {
  mg_write(&conn->mg_conn, "0\r\n\r\n", 5);
}

static int call_request_handler(struct connection *conn) {
  int result;
  conn->mg_conn.content = conn->ns_conn->recv_iobuf.buf;
  if ((result = call_user(conn, MG_REQUEST)) == MG_TRUE) {
    if (conn->ns_conn->flags & MG_HEADERS_SENT) {
      write_terminating_chunk(conn);
    }
    close_local_endpoint(conn);
  }
  return result;
}
So I tried doing mg_write(&conn->mg_conn, "0\r\n\r\n", 5); after line (3), and now it's working.
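For completeness, here is a minimal sketch of the fixed check_results loop (my reconstruction; note that at the application level mg_write takes the mg_connection pointer directly):
void server_app::check_results()
{
    while (true)
    {
        TransferData res;
        if (!res_queue->pop(res))
        {
            boost::this_thread::sleep_for(boost::chrono::milliseconds(100));
            continue;
        }
        mg_printf_data(res.first, "%s", res.second); // (3) send the chunked body
        mg_write(res.first, "0\r\n\r\n", 5);         // terminating chunk, so mongoose
                                                     // knows the response is complete
    }
}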
I am using a very simple proto where the Message contains only one string field, like so:
service LongLivedConnection {
  // Starts a grpc connection
  rpc Connect(Connection) returns (stream Message) {}
}

message Connection {
  string userId = 1;
}

message Message {
  string serverMessage = 1;
}
The use case is that the client should connect to the server, and the server will use this gRPC stream for push messages.
Now, for the client code, assuming that I am already in a worker thread, how do I properly set things up so that I can continuously receive messages that arrive from the server at random times?
void StartConnection(const std::string& user) {
    Connection request;
    request.set_userid(user);
    Message message;
    ClientContext context;
    std::unique_ptr<ClientReader<Message>> reader = stub_->Connect(&context, request);
    // What should I do from now on?
    // notify(serverMessage);
}

void notify(std::string message) {
    // generate message events and pass to the main event loop
}
I figured out how to use the API. It's pretty flexible, but still a little bit weird, given that I'd typically expect an async API to take some kind of lambda callback.
The code below is blocking, so you'll have to run it in a different thread to keep it from blocking your application.
I believe you can have multiple threads accessing the CompletionQueue, but in my case I just had one single thread handling this gRPC connection.
GrpcConnection.h file:
class GrpcConnectionService {
public:
    void StartGrpcConnection();

private:
    std::shared_ptr<grpc::Channel> m_channel;
    std::unique_ptr<grpc::ClientReader<push_notifications::Message>> m_reader;
    std::unique_ptr<push_notifications::PushNotificationService::Stub> m_stub;
};
GrpcConnection.cpp file:
...
void GrpcConnectionService::StartGrpcConnection()
{
    m_channel = grpc::CreateChannel("localhost:50051", grpc::InsecureChannelCredentials());
    LongLiveConnection::Connect request;
    request.set_user_id(12345);
    m_stub = LongLiveConnection::LongLiveConnectionService::NewStub(m_channel);
    grpc::ClientContext context;
    grpc::CompletionQueue cq;
    std::unique_ptr<grpc::ClientAsyncReader<LongLiveConnection::Message>> reader =
        m_stub->PrepareAsyncConnect(&context, request, &cq);
    void* got_tag;
    bool ok = false;
    LongLiveConnection::Message reply;
    reader->StartCall((void*)1);
    cq.Next(&got_tag, &ok);
    if (ok && got_tag == (void*)1)
    {
        // StartCall() succeeded if ok is true and got_tag is (void*)1.
        // Start the first read with a different hardcoded tag.
        reader->Read(&reply, (void*)2);
        while (true)
        {
            ok = false;
            if (!cq.Next(&got_tag, &ok) || !ok)
            {
                // The queue is shutting down, or the read failed because the
                // stream is dead; stop the loop instead of spinning forever.
                break;
            }
            if (got_tag == (void*)2)
            {
                // This is a message from the server.
                std::string body = reply.server_message();
                // Do whatever you want with body; in my case I push it to my
                // application's event stream to be processed by other components.
                // Lastly, initiate another read.
                reader->Read(&reply, (void*)2);
            }
            else if (got_tag == (void*)3)
            {
                // If you do something else in your call, such as listening for
                // gRPC channel state changes, you can pass a different hardcoded
                // tag; you will then be notified here when that result arrives.
            }
        }
    }
}
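For example (my sketch, not part of the original answer), the blocking loop can be pushed onto a dedicated thread:
#include <thread>

int main() {
    GrpcConnectionService service;
    // Run the blocking completion-queue loop on its own thread so it
    // doesn't stall the rest of the application.
    std::thread grpc_thread([&service] { service.StartGrpcConnection(); });
    // ... do other work ...
    grpc_thread.join(); // returns once the stream ends and the loop exits
}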
I have created an async C++ gRPC server that offers several APIs with signatures similar to this:
service Foo {
  rpc FunctionalityA(ARequest) returns (stream AResponse);
  rpc FunctionalityB(BRequest) returns (stream BResponse);
}
The client creates one channel to connect to this service and calls the various RPCs from separate threads, something like this:
class FooClient {
    // ...
    void FunctionalityA() {
        auto stub = example::Foo::NewStub(m_channel);
        grpc::ClientContext context;
        example::ARequest request;
        example::AResponse response;
        auto reader = stub->FunctionalityA(&context, request);
        for (int i = 0; i < 3; i++) {
            reader->Read(&response);
        }
    }

    void FunctionalityB() {
        auto stub = example::Foo::NewStub(m_channel);
        grpc::ClientContext context;
        example::BRequest request;
        example::BResponse response;
        auto reader = stub->FunctionalityB(&context, request);
        for (int i = 0; i < 3; i++) {
            reader->Read(&response);
        }
    }
    // ...
};
int main() {
    // ...
    FooClient client(grpc::CreateChannel("127.0.0.1:12345", grpc::InsecureChannelCredentials()));
    auto ta = std::thread(&FooClient::FunctionalityA, &client);
    auto tb = std::thread(&FooClient::FunctionalityB, &client);
    // ...
}
I want to implement the server so that:
when FunctionalityA is called, it starts streaming objects of type AResponse
when FunctionalityB is called, it starts streaming objects of type BResponse
when the context used to call FunctionalityA is cancelled, the streaming of AResponse ends
when the context used to call FunctionalityB is cancelled, the streaming of BResponse ends
The problem I face is that even when the ClientContext associated with one of the two functionalities goes out of scope (after the 3 reads in the example), the server does not receive any notification and keeps writing, and the "ok" status remains true.
The "ok" status only goes to false, allowing me to stop writing, when the client disconnects.
Is this the intended behavior of gRPC? Does the client need to send a specific "kiss of death" message to inform the server to stop writing to the stream?
Here is an example of the server-side implementation of one functionality, for completeness:
void FunctionalityB::ProcessRequest(bool ok, RequestState state) {
    if (!ok) {
        if (state == RequestState::START) {
            // the server has been Shutdown before this particular call got
            // matched to an incoming RPC
            delete this;
        } else if (state == RequestState::WRITE || state == RequestState::FINISH) {
            // not going to the wire because the call is already dead (i.e.,
            // canceled, deadline expired, other side dropped the channel, etc.)
            delete this;
        } else {
            // unhandled state
        }
    } else {
        if (state == RequestState::START) {
            // the RPC has indeed been started
            m_writer.Write(m_response, CreateTag(RequestState::WRITE));
            // the constructor of the functionality requests a new one to
            // handle future new connections
            new FunctionalityB(m_completion_queue, m_service, m_worker);
        } else if (state == RequestState::WRITE) {
            // TODO do some real work
            std::this_thread::sleep_for(std::chrono::milliseconds(50));
            // this write will continue forever, even after the client stops
            // reading and TryCancel()s its context
            m_writer.Write(m_response, CreateTag(RequestState::WRITE));
        } else if (state == RequestState::FINISH) {
            delete this;
        } else {
            // unhandled state
        }
    }
}
There are two ways to detect call cancellation on the server.
The first one is to check ServerContext::IsCancelled(). That is something you can check right before you do a write, which in this case may be fine. In the general case, though, it may not be ideal, because your application might be waiting for some other event (other than the previous write completing) before it does another write, and you ideally want some async way of getting notified when the cancellation happens.
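A minimal sketch of this first approach, borrowing the question's naming (m_context stands for the RPC's ServerContext; how it fits together with m_writer, m_response, and CreateTag here is my assumption):
// Guard each write with an explicit cancellation check.
if (m_context.IsCancelled()) {
    // Stop streaming and complete the call.
    m_writer.Finish(grpc::Status::CANCELLED, CreateTag(RequestState::FINISH));
} else {
    m_writer.Write(m_response, CreateTag(RequestState::WRITE));
}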
Which brings me to the second approach, which is to request an event on the completion queue when the call is cancelled by calling ServerContext::AsyncNotifyWhenDone() before the RPC starts. This will give you async notification of the cancellation, but unfortunately, the API is very cumbersome and has a few sharp edges. (This is something that is handled much more cleanly in the new callback-based API, but that API isn't that performant in OSS until we finish the EventEngine effort.)
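A rough sketch of the second approach, again reusing the question's naming (RequestFunctionalityB stands for the generated async request method; RequestState::DONE and the member names are assumptions about the surrounding code):
// In the constructor, before requesting the RPC: ask for a completion-queue
// event when the call ends for any reason, including cancellation.
m_context.AsyncNotifyWhenDone(CreateTag(RequestState::DONE));
m_service->RequestFunctionalityB(&m_context, &m_request, &m_writer,
                                 m_completion_queue, m_completion_queue,
                                 CreateTag(RequestState::START));

// In ProcessRequest: the DONE tag fires exactly once, when the RPC is over;
// only then is it safe to consult IsCancelled().
if (state == RequestState::DONE && m_context.IsCancelled()) {
    // Stop scheduling further writes and clean up once any outstanding
    // Write()/Finish() tags have drained.
}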
I hope this info is helpful.
We've implemented a Java gRPC service that runs in the cloud, with a unidirectional (client-to-server) streaming RPC which looks like:
rpc PushUpdates(stream Update) returns (Ack);
A C++ client (a mobile device) calls this RPC as soon as it boots up, continuously sending an update every 30 or so seconds, perpetually, as long as the device is up and running.
ChannelArguments chan_args;
// this will be a secure channel eventually
auto channel_p = CreateCustomChannel(remote_addr, InsecureChannelCredentials(), chan_args);
auto stub_p = DialTcc::NewStub(channel_p);
// ...
Ack ack;
auto strm_ctxt_p = make_unique<ClientContext>();
auto strm_p = stub_p->PushUpdates(strm_ctxt_p.get(), &ack);
// ...
while (true) {
    // wait until we are ready to send a new update
    Update updt;
    // populate updt
    if (!strm_p->Write(updt)) {
        // stream is not kosher, create a new one and restart
        break;
    }
}
Now, different kinds of network interruptions happen while this is running:
the gRPC service running in the cloud may go down (for maintenance) or may simply become unreachable.
the device's own IP address keeps changing, as it is a mobile device.
We've seen that on such events, neither the channel nor the Write() API detects the network disconnection reliably. At times the client keeps calling Write() (which doesn't return false), but the server doesn't receive any data (wireshark doesn't show any activity at the outgoing port of the client device).
What are the best practices for recovering in such cases, so that the server starts receiving the updates within X seconds of such an event occurring? It is understandable that there will be a loss of X seconds' worth of data whenever such an event happens, but we want to recover reliably within X seconds.
gRPC version: 1.30.2, Client: C++14/Linux, Server: Java/Linux
Here's how we've hacked around this. I want to check whether it can be made any better, or whether anyone from gRPC can point me to a better solution.
The protobuf for our service looks like this. It has an RPC for pinging the service, which is used frequently to test connectivity.
// Message used in IsAlive RPC
message Empty {}

// Acknowledgement sent by the service for updates received
message UpdateAck {}

// Messages streamed to the service by the client
message Update {
  ...
  ...
}

service GrpcService {
  // for checking if we're able to connect
  rpc Ping(Empty) returns (Empty);

  // streaming RPC for pushing updates by client
  rpc PushUpdate(stream Update) returns (UpdateAck);
}
Here is how the C++ client looks. It does the following:
Connect():
Create the stub for calling the RPCs, if the stub is nullptr.
Call Ping() at regular intervals until it succeeds.
On success, call the PushUpdate(...) RPC to create a new stream.
On failure, reset the stream to nullptr.
Stream(): do the following in a while(true) loop:
Get the update to be pushed.
Call Write(...) on the stream with the update to be pushed.
If Write(...) fails for any reason, break, so that control goes back to Connect().
Once every RESTART_INTERVAL_M minutes (or some regular interval), reset everything (stub, channel, stream) to nullptr to start afresh. This is required because at times Write(...) does not fail even when there is no connection between the client and the service: the Write(...) calls succeed, but the outgoing port on the client shows no activity in wireshark!
Here is the code:
constexpr int GRPC_TIMEOUT_S = 10;
constexpr int RESTART_INTERVAL_M = 15;
constexpr int GRPC_KEEPALIVE_TIME_MS = 10000;
string root_ca, tls_key, tls_cert; // for SSL
string remote_addr = "https://remote.com:5445";
...
...

void ResetStreaming() {
    if (stub_p) {
        if (strm_p) { // graceful restart/stop; this pair of APIs is called together, in this order
            if (!strm_p->WritesDone()) {
                // Log a message
            }
            strm_p->Finish(); // Log if the return value of this is NOT grpc::OK
        }
        strm_p = nullptr;
        strm_ctxt_p = nullptr;
        stub_p = nullptr;
        channel_p = nullptr;
    }
}

void CreateStub() {
    if (!stub_p) {
        ChannelArguments chan_args;
        chan_args.SetInt(GRPC_ARG_KEEPALIVE_TIME_MS, GRPC_KEEPALIVE_TIME_MS);
        channel_p = CreateCustomChannel(
            remote_addr,
            SslCredentials(SslCredentialsOptions{root_ca, tls_key, tls_cert}),
            chan_args);
        stub_p = GrpcService::NewStub(channel_p);
    }
}
void Stream() {
    const auto restart_time = steady_clock::now() + minutes(RESTART_INTERVAL_M);
    while (!stop) {
        // restart every RESTART_INTERVAL_M (15m) even if ALL IS WELL!!
        if (steady_clock::now() > restart_time) {
            break;
        }
        Update updt = GetUpdate(); // get the update to be sent
        if (!stop) {
            if (channel_p->GetState(true) == GRPC_CHANNEL_SHUTDOWN ||
                !strm_p->Write(updt)) {
                // could not write!!
                return; // we will Connect() again
            }
        }
    }
    // stopped due to stop = true or interval to create new stream has expired
    ResetStreaming(); // channel, stub, stream are recreated once in every 15m
}

bool PingRemote() {
    ClientContext ctxt;
    ctxt.set_deadline(system_clock::now() + seconds(GRPC_TIMEOUT_S));
    Empty req, resp;
    CreateStub();
    if (stub_p->Ping(&ctxt, req, &resp).ok()) {
        static UpdateAck ack;
        strm_ctxt_p = make_unique<ClientContext>(); // need new context
        strm_p = stub_p->PushUpdate(strm_ctxt_p.get(), &ack);
        return true;
    }
    if (strm_p) {
        strm_p = nullptr;
        strm_ctxt_p = nullptr;
    }
    return false;
}
void Connect() {
    while (!stop) {
        if (PingRemote() || stop) {
            break;
        }
        sleep_for(seconds(5)); // wait before retrying
    }
}

// set to true from another thread when we want to stop
atomic<bool> stop = false;

void StreamUntilStopped() {
    if (stop) {
        return;
    }
    strm_thread_p = make_unique<thread>([&] {
        while (!stop) {
            Connect();
            Stream();
        }
    });
}

// called by the thread that sets stop = true
void Finish() {
    strm_thread_p->join();
}
With this, we are seeing that streaming recovers within 15 minutes (or RESTART_INTERVAL_M) whenever there is a disruption for any reason. This code runs in a fast path, so I am curious to know whether it can be made any better.
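One possible refinement (my suggestion, not from the original post) is to tighten the keepalive settings so the transport itself notices a dead TCP path, which should make Write() fail much sooner than the 15-minute reset:
// Sketch: more aggressive keepalive channel arguments. The exact values are
// assumptions and need tuning against the server's keepalive enforcement policy.
ChannelArguments chan_args;
chan_args.SetInt(GRPC_ARG_KEEPALIVE_TIME_MS, 10000);          // ping every 10 s
chan_args.SetInt(GRPC_ARG_KEEPALIVE_TIMEOUT_MS, 5000);        // declare the link dead if no ack in 5 s
chan_args.SetInt(GRPC_ARG_KEEPALIVE_PERMIT_WITHOUT_CALLS, 1); // ping even with no active RPC
chan_args.SetInt(GRPC_ARG_HTTP2_MAX_PINGS_WITHOUT_DATA, 0);   // don't cap pings between data frames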
I would like to create a C++ webserver that will perform a task for each user that lands on my website. Since the task might be computationally heavy (for now, just a long sleep), I'd like to handle each user on a different thread. I'm using mongoose to set up the webserver.
The different processes (in my code below just one, aka server1) are set up correctly and seem to function correctly. However, the threads seem to queue one after the other, so if two users hit the endpoint, the second user must wait until the first user finishes. What am I missing? Do the threads run out of scope? Is there a "thread manager" that I should be using?
#include "../../mongoose.h"
#include <unistd.h>
#include <iostream>
#include <stdlib.h>
#include <thread>

// what happens whenever someone lands on an endpoint
void myEvent(struct mg_connection *conn) {
    // long delay...
    std::thread mythread(usleep, 2*5000000);
    mythread.join();
    mg_send_header(conn, "Content-Type", "text/plain");
    mg_printf_data(conn, "This is a reply from server instance # %s",
                   (char *) conn->server_param);
}

static int ev_handler(struct mg_connection *conn, enum mg_event ev) {
    if (ev == MG_REQUEST) {
        myEvent(conn);
        return MG_TRUE;
    } else if (ev == MG_AUTH) {
        return MG_TRUE;
    } else {
        return MG_FALSE;
    }
}

static void *serve(void *server) {
    for (;;) mg_poll_server((struct mg_server *) server, 1000);
    return NULL;
}

int main(void) {
    struct mg_server *server1;
    server1 = mg_create_server((void *) "1", ev_handler);
    mg_set_option(server1, "listening_port", "8080");
    mg_start_thread(serve, server1);
    getchar();
    return 0;
}
Long-running requests should be handled like this:
static void thread_func(struct mg_connection *conn) {
  sleep(60);                // simulate long processing
  conn->user_data = "done"; // Production code must not do that.
                            // Other threads must never access the connection
                            // structure directly. This example is just
                            // for demonstration.
}

static int ev_handler(struct mg_connection *conn, enum mg_event ev) {
  switch (ev) {
    case MG_REQUEST:
      conn->user_data = "doing...";
      spawn_thread(thread_func, conn);
      return MG_MORE; // Important! Signal Mongoose we are not done yet
    case MG_POLL:
      if (conn->user_data != NULL && !strcmp(conn->user_data, "done")) {
        mg_printf(conn, "HTTP/1.0 200 OK\n\n Done !");
        return MG_TRUE; // Signal we're finished. Mongoose can close this connection
      }
      return MG_FALSE; // Still not done
    default:
      return MG_FALSE;
  }
}
Caveat: I'm not familiar with mongoose.
My assumptions:
The serve function is polling for incoming connections.
If the thread executing mg_poll_server is the same thread that triggers the call to ev_handler, then your problem is that ev_handler calls myEvent, which starts a long-running operation and blocks the thread (i.e., by calling join). In that case you're also blocking the thread that handles the incoming connections (so a subsequent client must wait for the first client's work to finish), which is the behavior you describe seeing.
I'm not sure what the real task is supposed to do, so I can't say for sure how you should fix this. Perhaps in your use case it may be possible to call detach; otherwise you might keep track of the executing threads and defer calling join on them until the server is shut down, as sketched below.
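A minimal sketch of that second option (my assumption, not from the answer; someFunc stands in for whatever worker function handles the request):
#include <thread>
#include <vector>

void someFunc(struct mg_connection *conn); // hypothetical worker

static std::vector<std::thread> workers; // only touched from the polling thread

void myEvent(struct mg_connection *conn) {
    workers.emplace_back(someFunc, conn); // hand the connection to a worker
}

void shutdown_server() {
    for (auto &t : workers) {
        if (t.joinable()) t.join(); // deferred join, once the server stops
    }
}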
James Adkison is absolutely right. So, if instead the beginning of the code looks like this:
void someFunc(struct mg_connection *conn) {
    usleep(2*5000000);
    std::cout << "hello!" << std::endl;
    std::cout << "This finished from server instance #" << conn << std::endl;
    mg_send_header(conn, "Content-Type", "application/json");
    mg_printf_data(conn, "{\"message\": \"This is a reply from server instance # %s\"}",
                   (char *) conn->server_param);
}

void myEvent(struct mg_connection *conn) {
    std::thread mythread(someFunc, conn);
    mythread.detach();
    std::cout << "This is a reply from server instance #" << (char *) conn->server_param << std::endl;
}
static int ev_handler(struct mg_connection *conn, enum mg_event ev) {
    if (ev == MG_REQUEST) {
        myEvent(conn);
        return MG_TRUE;
    } else if (ev == MG_AUTH) {
        // .... exactly as before
        // ....
then the program works. Basically the difference is replacing .join() with .detach(). someFunc now runs in parallel for two users, so that's great! Thanks! (Though note the caveat in the earlier answer: production code must not access the connection structure from another thread directly.)
I wrote two programs, one as the server and the other as the client. The server is written in standard C++ using WinSock2.h. It is a simple echo server, meaning it sends back to the client whatever it receives. I used a new thread for every client connection, and in each thread:
Socket* s = (Socket*) a;
while (1) {
    std::string r = s->ReceiveLine();
    if (r.empty()) {
        break;
    }
    s->SendLine(r);
}
delete s;
return 0;
Socket is a class from here. The server runs properly; I've tested it using telnet, and it works well.
Then I wrote the client using C++.NET (or C++/CLI); TcpClient is used to send and receive messages from the server. The code looks like:
String^ request = "test";
TcpClient ^ client = gcnew TcpClient(server, port);
array<Byte> ^ data = Encoding::ASCII->GetBytes(request);
NetworkStream ^ stream = client->GetStream();
stream->Write(data, 0, data->Length);
data = gcnew array<Byte>(256);
String ^ response = String::Empty;
int bytes = stream->Read(data, 0, data->Length);
response = Encoding::ASCII->GetString(data, 0, bytes);
client->Close();
When I run the client and try to show the response message on my form, the program halts at the line int bytes = stream->Read(data, 0, data->Length); and cannot fetch the response. The server is running, and the network has nothing to do with it, as both programs run on the same computer.
I guess the reason is that the data the server responds with is shorter than data->Length, so the Read method is waiting for more data. Is that right? How should I solve this problem?
Edit
I think I've solved the problem... There are two other methods in the Socket class, ReceiveBytes and SendBytes, and these two methods do not trim the unused space in the byte array. So the length of the data coming back from the server matches what the client sent, and thus the Read method does not wait for more data to arrive.
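Alternatively, a more robust client-side pattern (my sketch, not from the original post) is to keep reading until the expected line terminator arrives, since TCP is a byte stream and a single Read() may return fewer bytes than were sent:
// Accumulate reads until the echoed line terminator shows up.
String^ response = String::Empty;
array<Byte>^ data = gcnew array<Byte>(256);
int bytes;
while ((bytes = stream->Read(data, 0, data->Length)) > 0) {
    response += Encoding::ASCII->GetString(data, 0, bytes);
    if (response->EndsWith("\n")) break; // server echoes a full line
}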
std::string Socket::ReceiveLine() {
  std::string ret;
  while (1) {
    char r;
    switch (recv(s_, &r, 1, 0)) {
      case 0: // not connected anymore;
              // ... but the last line sent
              // might not end in \n,
              // so return ret anyway.
        return ret;
      case -1:
        return "";
        // if (errno == EAGAIN) {
        //   return ret;
        // } else {
        //   // not connected anymore
        //   return "";
        // }
    }
    ret += r;
    if (r == '\n') return ret;
  }
}
I guess the ReceiveLine function of the server is waiting for a newline character, '\n'.
Just try it with the "test\n" string:
String^ request = "test\n";
// other code....