Mongoose C++ HTTP server gets only MG_OPEN_FILE events

I have a server using Mongoose which takes a request, parses the information, performs an action and returns the result.
For example, I can query it this way: server:port/action?arg1=test&arg2=...
My problem is that whenever I query the server I only get "MG_OPEN_FILE" events, and for each request I get three of them.
I read that it may be normal to get some of these for HTTP queries, but the problem here is that I never get any "MG_NEW_REQUEST" event.
Basically, whenever I start the server, the first connection (and every one after it) always produces the following events:
MG_OPEN_FILE
MG_OPEN_FILE
MG_OPEN_FILE
MG_REQUEST_COMPLETE
I start my server this way:
int main(int argc, char* argv[]) {
    struct mg_context *ctx;
    const char *options[] = {"listening_ports", "8080", "num_threads", "10", NULL};
    ctx = mg_start(&callback, NULL, options);
    while (1) {
        getchar(); // Wait until user hits "enter"
    }
    mg_stop(ctx);
    return 0;
}
And the callback function starts with:
static void *callback(enum mg_event event, struct mg_connection *conn)
{
    const struct mg_request_info *request_info = mg_get_request_info(conn);
    if (event == MG_NEW_REQUEST)
    {
But the event is always "MG_OPEN_FILE" and I have no clue why :(
So if anyone has any idea about the reason for this, I would be extremely thankful!

When you're getting MG_OPEN_FILE, check (char *) mg_get_request_info(conn)->ev_data.
It contains the file name Mongoose wants to open.
If you have that file in memory, return its data and size.
If you don't, return NULL.

Is your callback returning that you processed the event? I only return "yes" if I process the event.
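A minimal sketch of how these two answers fit together, based on the legacy mg_start/callback API shown in the question (the exact return conventions may vary between Mongoose versions):

static void *callback(enum mg_event event, struct mg_connection *conn)
{
    const struct mg_request_info *request_info = mg_get_request_info(conn);

    if (event == MG_OPEN_FILE) {
        // request_info->ev_data holds the path Mongoose wants to open.
        // Nothing is served from memory here, so return NULL and let Mongoose carry on.
        return NULL;
    }

    if (event == MG_NEW_REQUEST) {
        // Parse request_info->query_string, run the action, write the reply...
        mg_printf(conn, "HTTP/1.1 200 OK\r\n"
                        "Content-Type: text/plain\r\n\r\n"
                        "action done");
        return (void *) "yes";  // non-NULL tells Mongoose the event was handled
    }

    return NULL;                // all other events: not handled here
}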

How to use grpc c++ ClientAsyncReader<Message> for server side streams

I am using a very simple proto where the Message contains only 1 string field. Like so:
service LongLivedConnection {
    // Starts a grpc connection
    rpc Connect(Connection) returns (stream Message) {}
}

message Connection {
    string userId = 1;
}

message Message {
    string serverMessage = 1;
}
The use case is that the client should connect to the server, and the server will use this grpc for push messages.
Now, for the client code, assuming that I am already in a worker thread, how do I properly set it up so that I can continuously receive messages that come from the server at random times?
void StartConnection(const std::string& user) {
    Connection request;
    request.set_userid(user);  // generated setter for the userId field

    ClientContext context;
    std::unique_ptr<ClientReader<Message>> reader = stub_->Connect(&context, request);

    Message message;
    // What should I do from now on?
    // notify(serverMessage);
}

void notify(std::string message) {
    // generate message events and pass to main event loop
}
I figured out how to use the API. It looks like it is pretty flexible, but still a little bit weird, given that I typically just expect an async API to take some kind of lambda callback.
The code below is blocking, so you'll have to run it in a different thread so it doesn't block your application.
I believe you can have multiple threads accessing the CompletionQueue, but in my case I just had one single thread handling this gRPC connection.
GrpcConnection.h file:
class GrpcConnectionService {
public:
    void StartGrpcConnection();
private:
    std::shared_ptr<grpc::Channel> m_channel;
    std::unique_ptr<grpc::ClientReader<push_notifications::Message>> m_reader;
    std::unique_ptr<push_notifications::PushNotificationService::Stub> m_stub;
};
GrpcConnection.cpp file:
...
void GrpcConnectionService::StartGrpcConnection()
{
    m_channel = grpc::CreateChannel("localhost:50051", grpc::InsecureChannelCredentials());
    LongLiveConnection::Connect request;
    request.set_user_id(12345);
    m_stub = LongLiveConnection::LongLiveConnectionService::NewStub(m_channel);

    grpc::ClientContext context;
    grpc::CompletionQueue cq;
    std::unique_ptr<grpc::ClientAsyncReader<LongLiveConnection::Message>> reader =
        m_stub->PrepareAsyncConnect(&context, request, &cq);

    void* got_tag;
    bool ok = false;
    LongLiveConnection::Message reply;

    reader->StartCall((void*)1);
    cq.Next(&got_tag, &ok);
    if (ok && got_tag == (void*)1)
    {
        // StartCall() succeeded if ok is true and got_tag is (void*)1.
        // Start the first read with a different hardcoded tag.
        reader->Read(&reply, (void*)2);
        while (true)
        {
            ok = false;
            cq.Next(&got_tag, &ok);
            if (got_tag == (void*)2)
            {
                // This is a message from the server.
                std::string body = reply.server_message();
                // Do whatever you want with body; in my case I push it to my
                // application's event stream to be processed by other components.
                // Lastly, initiate another read.
                reader->Read(&reply, (void*)2);
            }
            else if (got_tag == (void*)3)
            {
                // If you do something else in your call, such as listening to gRPC
                // channel state changes, you can pass a different hardcoded tag;
                // you will then be notified here when that result is received.
            }
        }
    }
}
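Following the note above that this code is blocking, a minimal usage sketch (assuming the GrpcConnectionService class declared in the header) is to run it on its own thread:

#include <thread>

int main() {
    GrpcConnectionService service;
    // Run the blocking completion-queue loop on a dedicated thread so the
    // rest of the application stays responsive.
    std::thread grpc_thread([&service] { service.StartGrpcConnection(); });

    // ... application event loop ...

    grpc_thread.join();  // in this sketch the loop never exits; a real
                         // shutdown path would need a way to stop it
    return 0;
}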

ZeroMQ PubSub using inproc sockets hangs forever

I'm adapting a TCP PubSub example to use inproc with multiple threads. It ends up hanging forever.
My setup
macOS Mojave, Xcode 10.3
zmq 4.3.2
The source code reproducing the issue:
#include <string.h>
#include <stdio.h>
#include <unistd.h>
#include <thread>
#include "zmq.h"

void hello_pubsub_inproc() {
    void* context = zmq_ctx_new();
    void* publisher = zmq_socket(context, ZMQ_PUB);
    printf("Starting server...\n");
    int pub_conn = zmq_bind(publisher, "inproc://*:4040");

    void* subscriber = zmq_socket(context, ZMQ_SUB);
    printf("Collecting stock information from the server.\n");
    int sub_conn = zmq_connect(subscriber, "inproc://localhost:4040");
    sub_conn = zmq_setsockopt(subscriber, ZMQ_SUBSCRIBE, 0, 0);

    std::thread t_pub = std::thread([&]{
        const char* companies[2] = {"Company1", "Company2"};
        int count = 0;
        for(;;) {
            int which_company = count % 2;
            int index = (int)strlen(companies[0]);
            char update[12];
            snprintf(update, sizeof update, "%s", companies[which_company]);
            zmq_msg_t message;
            zmq_msg_init_size(&message, index);
            memcpy(zmq_msg_data(&message), update, index);
            zmq_msg_send(&message, publisher, 0);
            zmq_msg_close(&message);
            count++;
        }
    });

    std::thread t_sub = std::thread([&]{
        int i;
        for(i = 0; i < 10; i++) {
            zmq_msg_t reply;
            zmq_msg_init(&reply);
            zmq_msg_recv(&reply, subscriber, 0);
            int length = (int)zmq_msg_size(&reply);
            char* value = (char*)malloc(length);
            memcpy(value, zmq_msg_data(&reply), length);
            zmq_msg_close(&reply);
            printf("%s\n", value);
            free(value);
        }
    });

    t_pub.join();

    // Give publisher time to set up.
    sleep(1);

    t_sub.join();

    zmq_close(subscriber);
    zmq_close(publisher);
    zmq_ctx_destroy(context);
}

int main (int argc, char const *argv[]) {
    hello_pubsub_inproc();
    return 0;
}
The result
Starting server...
Collecting stock information from the server.
I've also tried adding this before joining threads to no avail:
zmq_proxy(publisher, subscriber, NULL);
The workaround: replacing inproc with tcp fixes it instantly. But shouldn't inproc target in-process use cases?
Quick research tells me that it couldn't have been the order of bind vs. connect, since that problem is fixed in my zmq version.
The example below somehow tells me I don't have a missing shared-context issue, because it uses none:
ZeroMQ Subscribers not receiving message from Publisher over an inproc: transport class
I read from the Guide in the section Signaling Between Threads (PAIR Sockets) that
You can use PUB for the sender and SUB for the receiver. This will correctly deliver your messages exactly as you sent them and PUB does not distribute as PUSH or DEALER do. However, you need to configure the subscriber with an empty subscription, which is annoying.
What is meant by an empty subscription?
Where am I going wrong?
You can use PUB for the sender and SUB for the receiver. This will correctly deliver your messages exactly as you sent them and PUB does not distribute as PUSH or DEALER do. However, you need to configure the subscriber with an empty subscription, which is annoying.
Q: What is meant by an empty subscription?
It means setting (configuring) a subscription, which drives the topic-list message-delivery filtering, using an empty subscription string.
Q: Where am I going wrong?
Here:
// sub_conn = zmq_setsockopt(subscriber, ZMQ_SUBSCRIBE, 0, 0);   // Wrong
   sub_conn = zmq_setsockopt(subscriber, ZMQ_SUBSCRIBE, "", 0);  // Empty string
There are also doubts here about proper syntax and naming rules:
// int pub_conn = zmq_bind(publisher, "inproc://*:4040");
int pub_conn = zmq_bind(publisher, "inproc://<aStringWithNameMax256Chars>");
The inproc:// transport class does not use any kind of external stack; it maps the access point's I/O(s) onto one or more memory locations (it is a stack-less transport class that does not even require an I/O thread).
Given this, there is nothing like "<address>:<port#>" to be interpreted by such a (here missing) protocol, so the string is used as-is to identify which memory location the message data will go into.
So "inproc://*:4040" does not get expanded; it is used literally as the name of an inproc:// I/O memory location identified as "*:4040". A subsequent .connect("inproc://localhost:4040") will then lexically miss the prepared memory location "*:4040", because the strings simply do not match.
So this ought to fail to .connect(), but the error handling may be silent: since version 4.x it is no longer necessary to obey the historical requirement to .bind() first (creating a "known" named memory location for inproc://) before one may call .connect() to get cross-connected with an "already existing" named memory location. Version 4.0+ will therefore most probably not raise any error on creating a landing zone with .bind("inproc://*:4040") and then asking for the non-matching .connect("inproc://localhost:4040"), which has no "previously prepared" landing zone in an already existing named memory location.
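A minimal sketch of the corrected setup (the endpoint name "stock_feed" is an arbitrary choice of mine; any string works, as long as the bind and connect strings match exactly and both sockets share one context):

void* context    = zmq_ctx_new();

void* publisher  = zmq_socket(context, ZMQ_PUB);
zmq_bind(publisher, "inproc://stock_feed");        // creates the named memory location

void* subscriber = zmq_socket(context, ZMQ_SUB);
zmq_connect(subscriber, "inproc://stock_feed");    // must match the bind string exactly
zmq_setsockopt(subscriber, ZMQ_SUBSCRIBE, "", 0);  // empty subscription: receive everything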

How can I read and write from a grpc stream simultaneously

I am now implementing the Raft algorithm, and I want to use gRPC stream to do this. My main idea is to create 3 streams for each node to every other peers, one stream will transmit one type of RPCs, there are AppendEntries, RequestVote and InstallSnapshot. I write some code with limited help from route_guide, because in its bidirectional stream demo RouteChat, the client send all its data before it starts to read.
Firstly, I want to write to a stream at any time, so I write the following codes
void RaftMessagesStreamClientSync::AsyncRequestVote(const RequestVoteRequest& request){
    std::string peer_name = this->peer_name;
    debug("GRPC: Send RequestVoteRequest from %s to %s\n", request.name().c_str(), peer_name.c_str());
    request_vote_stream->Write(request);
}
Meanwhile, I want a thread to keep reading from a stream, like the following code, which is called immediately after RaftMessagesStreamClientSync is constructed:
void RaftMessagesStreamClientSync::handle_response(){
    // strongThis is a must
    auto strongThis = shared_from_this();
    t1 = new std::thread([strongThis](){
        RequestVoteResponse response;
        while (strongThis->request_vote_stream->Read(&response)) {
            debug("GRPC: Recv RequestVoteResponse from %s, me %s\n", response.name().c_str(), strongThis->raft_node->name.c_str());
            ...
        }
    });
    ...
In order to initialize the 3 streams, I have to write the constructor like this. I use 3 ClientContexts here because the documentation says to use one ClientContext per RPC:
struct RaftMessagesStreamClientSync : std::enable_shared_from_this<RaftMessagesStreamClientSync>{
    typedef grpc::ClientReaderWriter<RequestVoteRequest, RequestVoteResponse> CR;
    typedef grpc::ClientReaderWriter<AppendEntriesRequest, AppendEntriesResponse> CA;
    typedef grpc::ClientReaderWriter<InstallSnapshotRequest, InstallSnapshotResponse> CI;

    std::unique_ptr<CR> request_vote_stream;
    std::unique_ptr<CA> append_entries_stream;
    std::unique_ptr<CI> install_snapshot_stream;

    ClientContext context_r;
    ClientContext context_a;
    ClientContext context_i;

    std::thread * t1 = nullptr;
    std::thread * t2 = nullptr;
    std::thread * t3 = nullptr;
    ...
};

RaftMessagesStreamClientSync::RaftMessagesStreamClientSync(const char * addr, struct RaftNode * _raft_node) : raft_node(_raft_node), peer_name(addr) {
    std::shared_ptr<Channel> channel = grpc::CreateChannel(addr, grpc::InsecureChannelCredentials());
    stub = raft_messages::RaftStreamMessages::NewStub(channel);
    // 1
    request_vote_stream = stub->RequestVote(&context_r);
    // 2
    append_entries_stream = stub->AppendEntries(&context_a);
    // 3
    install_snapshot_stream = stub->InstallSnapshot(&context_i);
}

~RaftMessagesStreamClientSync() {
    raft_node = nullptr;
    t1->join();
    t2->join();
    t3->join();
    delete t1;
    delete t2;
    delete t3;
}
Then I implement the server side:
Status RaftMessagesStreamServiceImpl::RequestVote(ServerContext* context, ::grpc::ServerReaderWriter< ::raft_messages::RequestVoteResponse, RequestVoteRequest>* stream){
    RequestVoteResponse response;
    RequestVoteRequest request;
    while (stream->Read(&request)) {
        ...
    }
    return Status::OK;
}
Then 2 problems happen:
When I test with 3 nodes, which actually creates 2 RaftMessagesStreamServiceImpl for each node, statements 1 to 3 take a long time to execute.
No RPC is received on the server side.
There is a similar problem when using a Bidi Async Server; however, I can't figure out how that post can help me.
UPDATE
After some debugging, I found that request_vote_stream->Write(request) returns 0, which, according to the documentation, means the stream is closed. But why is it closed?
After some debugging, I found that the two problems are both due to one cause: I create a client before I create a server.
Because I originally used unary RPC calls, a premature call from the client only causes gRPC error code 14. The program continues, because every call sent after the server is created can be handled correctly.
However, when it comes to streaming calls, stub->RequestVote(&context_r) ends up calling a blocking constructor, ClientReaderWriter::ClientReaderWriter, which tries to connect to the server, which has not been created yet.
/// Block to create a stream and write the initial metadata and \a request
/// out. Note that \a context will be used to fill in custom initial metadata
/// used to send to the server when starting the call.
ClientReaderWriter(::grpc::ChannelInterface* channel,
                   const ::grpc::internal::RpcMethod& method,
                   ClientContext* context)
    : context_(context),
      cq_(grpc_completion_queue_attributes{
          GRPC_CQ_CURRENT_VERSION, GRPC_CQ_PLUCK,
          GRPC_CQ_DEFAULT_POLLING}),  // Pluckable cq
      call_(channel->CreateCall(method, context, &cq_)) {
    if (!context_->initial_metadata_corked_) {
        ::grpc::internal::CallOpSet<::grpc::internal::CallOpSendInitialMetadata>
            ops;
        ops.SendInitialMetadata(context->send_initial_metadata_,
                                context->initial_metadata_flags());
        call_.PerformOps(&ops);
        cq_.Pluck(&ops);
    }
}
As a consequence, the connection has not yet been established.
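One way around this (a sketch only, not the original code) is to start the server first, or to wait for the channel to become connected before constructing the streams; grpc::Channel::WaitForConnected blocks up to a deadline:

#include <chrono>

// Sketch: wait up to 5 seconds for the peer's server to be reachable before
// opening the streams; otherwise back off and retry later.
std::shared_ptr<Channel> channel = grpc::CreateChannel(addr, grpc::InsecureChannelCredentials());
if (!channel->WaitForConnected(std::chrono::system_clock::now() + std::chrono::seconds(5))) {
    // The server is not up yet; do not construct the ClientReaderWriters now.
    return;
}
stub = raft_messages::RaftStreamMessages::NewStub(channel);
request_vote_stream = stub->RequestVote(&context_r);  // safe to block on now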

Multithreading and parallel processes with c++

I would like to create a C++ web server that will perform a task for each user who lands on my website. Since the task might be computationally heavy (for now just a long sleep), I'd like to handle each user on a different thread. I'm using Mongoose to set up the web server.
The different processes (in my code below just one, aka server1) are set up correctly and seem to function correctly. However, the threads seem to be queuing one after the other, so if 2 users hit the endpoint, the second user must wait until the first user finishes. What am I missing? Do the threads run out of scope? Is there a "thread manager" that I should be using?
#include "../../mongoose.h"
#include <unistd.h>
#include <iostream>
#include <stdlib.h>
#include <thread>
//what happens whenever someone lands on an endpoint
void myEvent(struct mg_connection *conn){
//long delay...
std::thread mythread(usleep, 2*5000000);
mythread.join();
mg_send_header(conn, "Content-Type", "text/plain");
mg_printf_data(conn, "This is a reply from server instance # %s",
(char *) conn->server_param);
}
static int ev_handler(struct mg_connection *conn, enum mg_event ev) {
if (ev == MG_REQUEST) {
myEvent(conn);
return MG_TRUE;
} else if (ev == MG_AUTH) {
return MG_TRUE;
} else {
return MG_FALSE;
}
}
static void *serve(void *server) {
for (;;) mg_poll_server((struct mg_server *) server, 1000);
return NULL;
}
int main(void) {
struct mg_server *server1;
server1 = mg_create_server((void *) "1", ev_handler);
mg_set_option(server1, "listening_port", "8080");
mg_start_thread(serve, server1);
getchar();
return 0;
}
Long running requests should be handled like this:
static void thread_func(struct mg_connection *conn) {
    sleep(60);                 // simulate long processing
    conn->user_data = "done";  // Production code must not do that.
                               // Other thread must never access connection
                               // structure directly. This example is just
                               // for demonstration.
}

static int ev_handler(struct mg_connection *conn, enum mg_event ev) {
    switch (ev) {
        case MG_REQUEST:
            conn->user_data = "doing...";
            spawn_thread(thread_func, conn);
            return MG_MORE;  // Important! Signal Mongoose we are not done yet
        case MG_POLL:
            if (conn->user_data != NULL && !strcmp(conn->user_data, "done")) {
                mg_printf(conn, "HTTP/1.0 200 OK\n\n Done !");
                return MG_TRUE;  // Signal we're finished. Mongoose can close this connection
            }
            return MG_FALSE;  // Still not done
        default:
            return MG_FALSE;
    }
}
Caveat: I'm not familiar with mongoose
My assumptions:
The serve function is polling for incoming connections
If the thread executing mg_poll_server is the same thread that triggers the call to ev_handler, then your problem is that ev_handler calls myEvent, which starts a long-running operation and blocks the thread (i.e., by calling join). In that case you're also blocking the thread which handles the incoming connections (i.e., a subsequent client must wait for the first client to finish their work), which seems to be the behavior you describe.
I'm not sure what the real task is supposed to do, so I can't say for sure how you should fix this. Perhaps in your use case it may be possible to call detach; otherwise you might keep track of the executing threads and defer calling join on them until the server is shut down, as sketched below.
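A minimal sketch of that second option (hypothetical names; the worker body is just a placeholder):

#include <thread>
#include <vector>

std::vector<std::thread> workers;  // lives as long as the server

void handle_request(struct mg_connection *conn) {
    // Start the long-running task without blocking the polling thread.
    workers.emplace_back([conn] { /* long task using conn */ });
}

void shutdown_server() {
    for (auto &t : workers) t.join();  // defer the joins until shutdown
    workers.clear();
}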
James Adkison is absolutely right. So, if instead the beginning of the code looks like this:
void someFunc(struct mg_connection *conn){
    usleep(2*5000000);
    std::cout << "hello!" << std::endl;
    std::cout << "This finished from server instance #" << conn << std::endl;
    mg_send_header(conn, "Content-Type", "application/json");
    mg_printf_data(conn, "{\"message\": \"This is a reply from server instance # %s\"}",
                   (char *) conn->server_param);
}

void myEvent(struct mg_connection *conn){
    std::thread mythread(someFunc, conn);
    mythread.detach();
    std::cout << "This is a reply from server instance #" << (char *) conn->server_param << std::endl;
}
static int ev_handler(struct mg_connection *conn, enum mg_event ev) {
    if (ev == MG_REQUEST) {
        myEvent(conn);
        return MG_TRUE;
    } else if (ev == MG_AUTH) {
        //.... exactly as before
        //....
then the program works. Basically the difference is replacing .join() with .detach(). someFunc now runs in parallel for 2 users -- so that's great! Thanks!

Sending data in second thread with Mongoose server

I'm trying to create a multithreaded server application using the Mongoose web server library.
I have a main thread serving connections and sending requests to processors that work in their own threads. The processors then place results into a queue, and a queue observer must send the results back to the clients.
The sources look like this:
Here I prepare the data for the processors and place it into the queue:
typedef std::pair<struct mg_connection*, const char*> TransferData;

int server_app::event_handler(struct mg_connection *conn, enum mg_event ev)
{
    Request req;
    if (ev == MG_AUTH)
        return MG_TRUE; // Authorize all requests
    else if (ev == MG_REQUEST)
    {
        req = parse_request(conn);
        task_queue->push(TransferData(conn, req.second));
        mg_printf(conn, "%s", ""); // (1)
        return MG_MORE;            // (2)
    }
    else
        return MG_FALSE; // Rest of the events are not processed
}
And here I'm trying to send the result back. This function runs in its own thread:
void server_app::check_results()
{
    while(true)
    {
        TransferData res;
        if(!res_queue->pop(res))
        {
            boost::this_thread::sleep_for(boost::chrono::milliseconds(100));
            continue;
        }
        mg_printf_data(res.first, "%s", res.second); // (3)
    }
}
The problem is that the client doesn't receive anything from the server.
If I run the check_results function manually in the event_handler after placing a task into the queue, and then pass the computed result back to event_handler, I'm able to send it to the client using mg_printf_data (returning MG_TRUE). Any other way, I'm not.
What exactly should I change in these sources to make it work?
OK... it looks like I've solved it myself.
I had been looking into the mongoose.c code, and an hour later I found the piece of code below:
static void write_terminating_chunk(struct connection *conn) {
    mg_write(&conn->mg_conn, "0\r\n\r\n", 5);
}

static int call_request_handler(struct connection *conn) {
    int result;
    conn->mg_conn.content = conn->ns_conn->recv_iobuf.buf;
    if ((result = call_user(conn, MG_REQUEST)) == MG_TRUE) {
        if (conn->ns_conn->flags & MG_HEADERS_SENT) {
            write_terminating_chunk(conn);
        }
        close_local_endpoint(conn);
    }
    return result;
}
So I tried doing mg_write(&conn->mg_conn, "0\r\n\r\n", 5); after line (3), and now it's working.
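For completeness, a sketch of how check_results might look with that change (assuming the same types as above; the public mg_write API takes the mg_connection pointer directly, so res.first is passed as-is):

void server_app::check_results()
{
    while(true)
    {
        TransferData res;
        if(!res_queue->pop(res))
        {
            boost::this_thread::sleep_for(boost::chrono::milliseconds(100));
            continue;
        }
        mg_printf_data(res.first, "%s", res.second); // (3) body, sent as a chunk
        mg_write(res.first, "0\r\n\r\n", 5);         // terminating chunk, so the client sees the end of the response
    }
}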